* 2.6.30 panic - xfs_fs_destroy_inode
@ 2009-06-17 17:04 Patrick Schreurs
  2009-06-17 21:31 ` Eric Sandeen
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-17 17:04 UTC (permalink / raw)
  To: linux-xfs

[-- Attachment #1: Type: text/plain, Size: 316 bytes --]

Hi all,

We are experiencing kernel panics on servers running 2.6.29(.1) and 
2.6.30. I've included two attachments to demonstrate.

The error is:
Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...

OS is 64bit Debian lenny.

Is this a known issue? Any comments on this?

Thanks,

Patrick Schreurs

[-- Attachment #2: 20090613-sb06.jpg --]
[-- Type: image/jpeg, Size: 59901 bytes --]

[-- Attachment #3: sb04-20090617.png --]
[-- Type: image/png, Size: 23504 bytes --]

[-- Attachment #4: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-17 17:04 2.6.30 panic - xfs_fs_destroy_inode Patrick Schreurs
@ 2009-06-17 21:31 ` Eric Sandeen
  2009-06-18  7:55   ` Patrick Schreurs
  0 siblings, 1 reply; 20+ messages in thread
From: Eric Sandeen @ 2009-06-17 21:31 UTC (permalink / raw)
  To: Patrick Schreurs; +Cc: linux-xfs

Patrick Schreurs wrote:
> Hi all,
> 
> We are experiencing kernel panics on servers running 2.6.29(.1) and 
> 2.6.30. I've included two attachments to demonstrate.
> 
> The error is:
> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
> 
> OS is 64bit Debian lenny.
> 
> Is this a known issue? Any comments on this?

It's not known to me. Was this a recent upgrade?  (IOW, did it start
with .29(.1)?)

-Eric

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-17 21:31 ` Eric Sandeen
@ 2009-06-18  7:55   ` Patrick Schreurs
  2009-06-20 10:18     ` Patrick Schreurs
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-18  7:55 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, Tommy van Leeuwen

Eric Sandeen wrote:
> Patrick Schreurs wrote:
>> Hi all,
>>
>> We are experiencing kernel panics on servers running 2.6.29(.1) and 
>> 2.6.30. I've included two attachments to demonstrate.
>>
>> The error is:
>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
>>
>> OS is 64bit Debian lenny.
>>
>> Is this a known issue? Any comments on this?
> 
> It's not known to me, was this a recent upgrade?  (IOW, did it start
> with .29(.1)?

We've seen this on 2 separate servers. It probably happened more often,
but we didn't capture the panic message. One server was running
2.6.29.1, the other was running 2.6.30. We have since updated
all similar servers to 2.6.30.

If we can provide you with more details to help fix this issue, please 
let us know.

-Patrick

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-18  7:55   ` Patrick Schreurs
@ 2009-06-20 10:18     ` Patrick Schreurs
  2009-06-20 13:29       ` Eric Sandeen
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-20 10:18 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, Tommy van Leeuwen

[-- Attachment #1: Type: text/plain, Size: 1111 bytes --]

Unfortunately another panic. See attachment.

Would love to receive some advice on this issue.

Thanks in advance.

-Patrick

Patrick Schreurs wrote:
> Eric Sandeen wrote:
>> Patrick Schreurs wrote:
>>> Hi all,
>>>
>>> We are experiencing kernel panics on servers running 2.6.29(.1) and 
>>> 2.6.30. I've included two attachments to demonstrate.
>>>
>>> The error is:
>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
>>>
>>> OS is 64bit Debian lenny.
>>>
>>> Is this a known issue? Any comments on this?
>>
>> It's not known to me, was this a recent upgrade?  (IOW, did it start
>> with .29(.1)?
> 
> We've seen this on 2 separate servers. It probably happened more often, 
> but we didn't captured the panic message. One server was running 
> 2.6.29.1, the other server was running 2.6.30. Currently we've updated 
> all similar servers to 2.6.30.
> 
> If we can provide you with more details to help fix this issue, please 
> let us know.
> 
> -Patrick
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

[-- Attachment #2: sb06-20090619.png --]
[-- Type: image/png, Size: 23603 bytes --]

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-20 10:18     ` Patrick Schreurs
@ 2009-06-20 13:29       ` Eric Sandeen
  2009-06-20 16:31         ` Patrick Schreurs
  0 siblings, 1 reply; 20+ messages in thread
From: Eric Sandeen @ 2009-06-20 13:29 UTC (permalink / raw)
  To: Patrick Schreurs; +Cc: linux-xfs, Tommy van Leeuwen

Others aren't hitting this; what sort of workload are you running when
you hit it?

I have not had time to look at it yet, but some sort of testcase may
greatly help.

-Eric

On Jun 20, 2009, at 5:18 AM, Patrick Schreurs <patrick@news-service.com> wrote:

> Unfortunately another panic. See attachment.
>
> Would love to receive some advice on this issue.
>
> Thanks in advance.
>
> -Patrick
>
> Patrick Schreurs wrote:
>> Eric Sandeen wrote:
>>> Patrick Schreurs wrote:
>>>> Hi all,
>>>>
>>>> We are experiencing kernel panics on servers running 2.6.29(.1)  
>>>> and 2.6.30. I've included two attachments to demonstrate.
>>>>
>>>> The error is:
>>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot  
>>>> reclaim ...
>>>>
>>>> OS is 64bit Debian lenny.
>>>>
>>>> Is this a known issue? Any comments on this?
>>>
>>> It's not known to me, was this a recent upgrade?  (IOW, did it start
>>> with .29(.1)?
>> We've seen this on 2 separate servers. It probably happened more  
>> often, but we didn't captured the panic message. One server was  
>> running 2.6.29.1, the other server was running 2.6.30. Currently  
>> we've updated all similar servers to 2.6.30.
>> If we can provide you with more details to help fix this issue,  
>> please let us know.
>> -Patrick
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
> <sb06-20090619.png>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-20 13:29       ` Eric Sandeen
@ 2009-06-20 16:31         ` Patrick Schreurs
  2009-06-23  7:24           ` Patrick Schreurs
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-20 16:31 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, Tommy van Leeuwen

Just had another one. It's likely we'll have to downgrade to 2.6.28.x.

These servers have 28 SCSI disks mounted separately (JBOD). The workload
is basically I/O load (90% read, 10% write) from these disks. The
servers are not extremely busy (not overloaded).

xfs_info from a random disk:

sb02:~# xfs_info /dev/sdb
meta-data=/dev/sdb               isize=256    agcount=4, agsize=18310547 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=73242187, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

As you can see, we use lazy-count=1. Mount options aren't very exotic:
rw,noatime,nodiratime

We are seeing these panics on at least 3 different servers.

If you have any hints on how to investigate, we would greatly appreciate 
it.
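
We do not have a reliable reproducer. The closest thing is a rough sketch
like the (purely hypothetical) stress program below, which only approximates
the access pattern -- many workers doing mostly reads, some writes and
occasional unlinks on one of the mounts -- and which we have not verified
actually triggers the panic:

/*
 * Hypothetical stress sketch, NOT a confirmed reproducer:
 * ~90% reads, ~10% writes, periodic unlinks to churn inodes.
 * Runs until interrupted.  usage: ./xfs-stress <directory> <num-workers>
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

static void worker(const char *dir, int id)
{
	static char buf[65536];
	char path[4096];
	int i, fd;

	memset(buf, 'x', sizeof(buf));
	for (i = 0; ; i++) {
		snprintf(path, sizeof(path), "%s/w%d-%d", dir, id, i % 1000);
		if (i % 10 == 0) {
			/* ~10% writes: create or overwrite a file */
			fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
			if (fd >= 0) {
				if (write(fd, buf, sizeof(buf)) < 0)
					perror("write");
				close(fd);
			}
		} else {
			/* ~90% reads */
			fd = open(path, O_RDONLY);
			if (fd >= 0) {
				while (read(fd, buf, sizeof(buf)) > 0)
					;
				close(fd);
			}
		}
		/* occasionally drop files so inodes get reclaimed */
		if (i % 997 == 0)
			unlink(path);
	}
}

int main(int argc, char **argv)
{
	int i, nworkers;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <dir> <workers>\n", argv[0]);
		return 1;
	}
	nworkers = atoi(argv[2]);
	for (i = 0; i < nworkers; i++) {
		if (fork() == 0) {
			worker(argv[1], i);
			_exit(0);
		}
	}
	for (i = 0; i < nworkers; i++)
		wait(NULL);
	return 0;
}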

-Patrick

Eric Sandeen wrote:
> Others aren't hitting this, what sort of workload are you running when 
> you hit it?
> 
> I have not had time to look at it yet but some sort of testcase may 
> greatly help.
> 
> -Eric
> 
> On Jun 20, 2009, at 5:18 AM, Patrick Schreurs <patrick@news-service.com> 
> wrote:
> 
>> Unfortunately another panic. See attachment.
>>
>> Would love to receive some advice on this issue.
>>
>> Thanks in advance.
>>
>> -Patrick
>>
>> Patrick Schreurs wrote:
>>> Eric Sandeen wrote:
>>>> Patrick Schreurs wrote:
>>>>> Hi all,
>>>>>
>>>>> We are experiencing kernel panics on servers running 2.6.29(.1) and 
>>>>> 2.6.30. I've included two attachments to demonstrate.
>>>>>
>>>>> The error is:
>>>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
>>>>>
>>>>> OS is 64bit Debian lenny.
>>>>>
>>>>> Is this a known issue? Any comments on this?
>>>>
>>>> It's not known to me, was this a recent upgrade?  (IOW, did it start
>>>> with .29(.1)?
>>> We've seen this on 2 separate servers. It probably happened more 
>>> often, but we didn't captured the panic message. One server was 
>>> running 2.6.29.1, the other server was running 2.6.30. Currently 
>>> we've updated all similar servers to 2.6.30.
>>> If we can provide you with more details to help fix this issue, 
>>> please let us know.
>>> -Patrick
>>> _______________________________________________
>>> xfs mailing list
>>> xfs@oss.sgi.com
>>> http://oss.sgi.com/mailman/listinfo/xfs
>> <sb06-20090619.png>
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-20 16:31         ` Patrick Schreurs
@ 2009-06-23  7:24           ` Patrick Schreurs
  2009-06-23  8:17             ` Lachlan McIlroy
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-23  7:24 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: linux-xfs, Tommy van Leeuwen

[-- Attachment #1: Type: text/plain, Size: 3755 bytes --]

Another one (see attachment). This time on a server with SAS drives and
without the lazy-count option:

meta-data=/dev/sdb               isize=256    agcount=4, agsize=27471812 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=109887246, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

We really don't want to roll back to 2.6.28.x, as this doesn't solve the
issue.

Any hint would be appreciated.

-Patrick

Patrick Schreurs wrote:
> Just had another one. It's likely we'll have to downgrade to 2.6.28.x.
> 
> These servers have 28 SCSI disks mounted separately (JBOD). The workload 
> is basically i/o load (90% read, 10% write) from these disks. The 
> servers are not extreme busy (overloaded).
> 
> xfs_info from a random disk:
> 
> sb02:~# xfs_info /dev/sdb
> meta-data=/dev/sdb               isize=256    agcount=4, agsize=18310547 
> blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=73242187, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=32768, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> As you can see we use lazy-count=1. Mount options aren't very exotic: 
> rw,noatime,nodiratime
> 
> We are seeing these panic's on at least 3 different servers.
> 
> If you have any hints on how to investigate, we would greatly appreciate 
> it.
> 
> -Patrick
> 
> Eric Sandeen wrote:
>> Others aren't hitting this, what sort of workload are you running when 
>> you hit it?
>>
>> I have not had time to look at it yet but some sort of testcase may 
>> greatly help.
>>
>> -Eric
>>
>> On Jun 20, 2009, at 5:18 AM, Patrick Schreurs 
>> <patrick@news-service.com> wrote:
>>
>>> Unfortunately another panic. See attachment.
>>>
>>> Would love to receive some advice on this issue.
>>>
>>> Thanks in advance.
>>>
>>> -Patrick
>>>
>>> Patrick Schreurs wrote:
>>>> Eric Sandeen wrote:
>>>>> Patrick Schreurs wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> We are experiencing kernel panics on servers running 2.6.29(.1) 
>>>>>> and 2.6.30. I've included two attachments to demonstrate.
>>>>>>
>>>>>> The error is:
>>>>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot reclaim ...
>>>>>>
>>>>>> OS is 64bit Debian lenny.
>>>>>>
>>>>>> Is this a known issue? Any comments on this?
>>>>>
>>>>> It's not known to me, was this a recent upgrade?  (IOW, did it start
>>>>> with .29(.1)?
>>>> We've seen this on 2 separate servers. It probably happened more 
>>>> often, but we didn't captured the panic message. One server was 
>>>> running 2.6.29.1, the other server was running 2.6.30. Currently 
>>>> we've updated all similar servers to 2.6.30.
>>>> If we can provide you with more details to help fix this issue, 
>>>> please let us know.
>>>> -Patrick
>>>> _______________________________________________
>>>> xfs mailing list
>>>> xfs@oss.sgi.com
>>>> http://oss.sgi.com/mailman/listinfo/xfs
>>> <sb06-20090619.png>
>>
>> _______________________________________________
>> xfs mailing list
>> xfs@oss.sgi.com
>> http://oss.sgi.com/mailman/listinfo/xfs
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

[-- Attachment #2: sb08-20090623.jpg --]
[-- Type: image/jpeg, Size: 70970 bytes --]

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-23  7:24           ` Patrick Schreurs
@ 2009-06-23  8:17             ` Lachlan McIlroy
  2009-06-23 17:13               ` Christoph Hellwig
  0 siblings, 1 reply; 20+ messages in thread
From: Lachlan McIlroy @ 2009-06-23  8:17 UTC (permalink / raw)
  To: Patrick Schreurs; +Cc: linux-xfs, Tommy van Leeuwen, Eric Sandeen

[-- Attachment #1: Type: text/plain, Size: 4585 bytes --]

It looks to me like xfs_reclaim_inode() has returned EAGAIN because the
XFS_RECLAIM flag was set on the xfs inode.  This implies we are trying
to reclaim an inode that is already in the process of being reclaimed.
I'm not sure how this happened but it could be a simple case of ignoring
this error since the reclaim is already in progress.
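
For reference, the panic itself comes from the VFS destroy path in
fs/xfs/linux-2.6/xfs_super.c. Paraphrased from memory (so the exact 2.6.30
code may differ slightly), it boils down to:

/* rough sketch of the 2.6.30 destroy path, not the literal source */
STATIC void
xfs_fs_destroy_inode(
	struct inode	*inode)
{
	xfs_inode_t	*ip = XFS_I(inode);

	XFS_STATS_INC(vn_reclaim);

	/*
	 * The reclaim call chain ends up in xfs_reclaim_inode(); any
	 * nonzero return -- here the EAGAIN for an inode that already
	 * has the reclaim flag set -- is treated as fatal rather than
	 * being retried.
	 */
	if (xfs_reclaim(ip))
		panic("%s: cannot reclaim 0x%p\n", __func__, ip);
}

So a transient "already being reclaimed" condition escalates straight into
the panic being reported here.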

----- "Patrick Schreurs" <patrick@news-service.com> wrote:

> Another one (see attachement). This time on a server with SAS drives
> and 
> without the lazy-count option:
> 
> meta-data=/dev/sdb               isize=256    agcount=4,
> agsize=27471812 
> blks
>           =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=109887246,
> imaxpct=25
>           =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal               bsize=4096   blocks=32768, version=2
>           =                       sectsz=512   sunit=0 blks,
> lazy-count=0
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> 
> We really don't want to rollback to 2.6.28.x as this doesn't solve the
> 
> issue.
> 
> Any hint would be appreciated.
> 
> -Patrick
> 
> Patrick Schreurs wrote:
> > Just had another one. It's likely we'll have to downgrade to
> 2.6.28.x.
> > 
> > These servers have 28 SCSI disks mounted separately (JBOD). The
> workload 
> > is basically i/o load (90% read, 10% write) from these disks. The 
> > servers are not extreme busy (overloaded).
> > 
> > xfs_info from a random disk:
> > 
> > sb02:~# xfs_info /dev/sdb
> > meta-data=/dev/sdb               isize=256    agcount=4,
> agsize=18310547 
> > blks
> >          =                       sectsz=512   attr=2
> > data     =                       bsize=4096   blocks=73242187,
> imaxpct=25
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0
> > log      =internal               bsize=4096   blocks=32768,
> version=2
> >          =                       sectsz=512   sunit=0 blks,
> lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > 
> > As you can see we use lazy-count=1. Mount options aren't very
> exotic: 
> > rw,noatime,nodiratime
> > 
> > We are seeing these panic's on at least 3 different servers.
> > 
> > If you have any hints on how to investigate, we would greatly
> appreciate 
> > it.
> > 
> > -Patrick
> > 
> > Eric Sandeen wrote:
> >> Others aren't hitting this, what sort of workload are you running
> when 
> >> you hit it?
> >>
> >> I have not had time to look at it yet but some sort of testcase may
> 
> >> greatly help.
> >>
> >> -Eric
> >>
> >> On Jun 20, 2009, at 5:18 AM, Patrick Schreurs 
> >> <patrick@news-service.com> wrote:
> >>
> >>> Unfortunately another panic. See attachment.
> >>>
> >>> Would love to receive some advice on this issue.
> >>>
> >>> Thanks in advance.
> >>>
> >>> -Patrick
> >>>
> >>> Patrick Schreurs wrote:
> >>>> Eric Sandeen wrote:
> >>>>> Patrick Schreurs wrote:
> >>>>>> Hi all,
> >>>>>>
> >>>>>> We are experiencing kernel panics on servers running 2.6.29(.1)
> 
> >>>>>> and 2.6.30. I've included two attachments to demonstrate.
> >>>>>>
> >>>>>> The error is:
> >>>>>> Kernel panic - not syncing: xfs_fs_destroy_inode: cannot
> reclaim ...
> >>>>>>
> >>>>>> OS is 64bit Debian lenny.
> >>>>>>
> >>>>>> Is this a known issue? Any comments on this?
> >>>>>
> >>>>> It's not known to me, was this a recent upgrade?  (IOW, did it
> start
> >>>>> with .29(.1)?
> >>>> We've seen this on 2 separate servers. It probably happened more
> 
> >>>> often, but we didn't captured the panic message. One server was 
> >>>> running 2.6.29.1, the other server was running 2.6.30. Currently
> 
> >>>> we've updated all similar servers to 2.6.30.
> >>>> If we can provide you with more details to help fix this issue, 
> >>>> please let us know.
> >>>> -Patrick
> >>>> _______________________________________________
> >>>> xfs mailing list
> >>>> xfs@oss.sgi.com
> >>>> http://oss.sgi.com/mailman/listinfo/xfs
> >>> <sb06-20090619.png>
> >>
> >> _______________________________________________
> >> xfs mailing list
> >> xfs@oss.sgi.com
> >> http://oss.sgi.com/mailman/listinfo/xfs
> > 
> > _______________________________________________
> > xfs mailing list
> > xfs@oss.sgi.com
> > http://oss.sgi.com/mailman/listinfo/xfs
> 
> 
> [image/jpeg:sb08-20090623.jpg]
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

[-- Attachment #2: sb08-20090623.jpg --]
[-- Type: image/jpeg, Size: 70970 bytes --]

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-23  8:17             ` Lachlan McIlroy
@ 2009-06-23 17:13               ` Christoph Hellwig
  2009-06-30 20:13                 ` Patrick Schreurs
  0 siblings, 1 reply; 20+ messages in thread
From: Christoph Hellwig @ 2009-06-23 17:13 UTC (permalink / raw)
  To: Lachlan McIlroy
  Cc: Patrick Schreurs, linux-xfs, Tommy van Leeuwen, Eric Sandeen

On Tue, Jun 23, 2009 at 04:17:13AM -0400, Lachlan McIlroy wrote:
> It looks to me like xfs_reclaim_inode() has returned EAGAIN because the
> XFS_RECLAIM flag was set on the xfs inode.  This implies we are trying
> to reclaim an inode that is already in the process of being reclaimed.
> I'm not sure how this happened but it could be a simple case of ignoring
> this error since the reclaim is already in progress.

Well, having the reclaim already in progress means we're racing here.
And I suspect this fits into the other bugs with possibly duplicate
inodes we see after the inode+xfs_inode unification.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-23 17:13               ` Christoph Hellwig
@ 2009-06-30 20:13                 ` Patrick Schreurs
  2009-06-30 20:42                   ` Christoph Hellwig
  2009-07-01 12:44                   ` Christoph Hellwig
  0 siblings, 2 replies; 20+ messages in thread
From: Patrick Schreurs @ 2009-06-30 20:13 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-xfs, Tommy van Leeuwen, Lachlan McIlroy, Eric Sandeen

Hi (again),

Does anyone have any advice on preventing this from happening? We've seen 10
crashes in the last 14 days. Would it be helpful to enable
CONFIG_XFS_DEBUG? Does this result in a big performance hit on busy XFS
filesystems? If we can help troubleshoot this problem, please advise.

If I understand correctly, this issue also exists in 2.6.29? Should I
downgrade to the latest 2.6.28 kernel to regain stability?

Thanks.

-Patrick

Christoph Hellwig wrote:
> On Tue, Jun 23, 2009 at 04:17:13AM -0400, Lachlan McIlroy wrote:
>> It looks to me like xfs_reclaim_inode() has returned EAGAIN because the
>> XFS_RECLAIM flag was set on the xfs inode.  This implies we are trying
>> to reclaim an inode that is already in the process of being reclaimed.
>> I'm not sure how this happened but it could be a simple case of ignoring
>> this error since the reclaim is already in progress.
> 
> Well, having the reclaim already in progress means we're racing here.
> And I suspect this fits into the other bugs with possibly duplicat
> inodes we see after the inode+xfs_inode unification.
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-30 20:13                 ` Patrick Schreurs
@ 2009-06-30 20:42                   ` Christoph Hellwig
  2009-07-20 19:19                     ` Patrick Schreurs
  2009-07-01 12:44                   ` Christoph Hellwig
  1 sibling, 1 reply; 20+ messages in thread
From: Christoph Hellwig @ 2009-06-30 20:42 UTC (permalink / raw)
  To: Patrick Schreurs
  Cc: Christoph Hellwig, linux-xfs, Tommy van Leeuwen, Lachlan McIlroy,
	Eric Sandeen

On Tue, Jun 30, 2009 at 10:13:57PM +0200, Patrick Schreurs wrote:
> Hi (again),
>
> Anyone has any advice to prevent this from happening? We've seen 10  
> crashes in the last 14 days. Would it be helpful to enable  
> CONFIG_XFS_DEBUG? Does this result in a big performance hit on busy xfs  
> filesystems? If we can help troubleshoot this problem, please advice.
>
> If i understand correctly this issue also exists in 2.6.29? Should i  
> downgrade to the latest 2.6.28 kernel to regain stability?

For now please downgrade to the latest 2.6.28, yes.  I hope I will have
time and machine resources to dig deeper into the problem this week.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-30 20:13                 ` Patrick Schreurs
  2009-06-30 20:42                   ` Christoph Hellwig
@ 2009-07-01 12:44                   ` Christoph Hellwig
  2009-07-02  7:09                     ` Tommy van Leeuwen
  2009-07-02 17:31                     ` Patrick Schreurs
  1 sibling, 2 replies; 20+ messages in thread
From: Christoph Hellwig @ 2009-07-01 12:44 UTC (permalink / raw)
  To: Patrick Schreurs
  Cc: linux-xfs, Tommy van Leeuwen, Lachlan McIlroy, Eric Sandeen

Actually you might want to give this patch a try which fixes a race
affecting the reclaim tag in iget:


Index: xfs/fs/xfs/xfs_iget.c
===================================================================
--- xfs.orig/fs/xfs/xfs_iget.c	2009-06-04 13:27:41.901946950 +0200
+++ xfs/fs/xfs/xfs_iget.c	2009-06-04 14:08:08.837816707 +0200
@@ -132,80 +132,89 @@ xfs_iget_cache_hit(
 	int			flags,
 	int			lock_flags) __releases(pag->pag_ici_lock)
 {
+	struct inode		*inode = VFS_I(ip);
 	struct xfs_mount	*mp = ip->i_mount;
-	int			error = EAGAIN;
+	int			error;
+
+	spin_lock(&ip->i_flags_lock);
 
 	/*
-	 * If INEW is set this inode is being set up
-	 * If IRECLAIM is set this inode is being torn down
-	 * Pause and try again.
+	 * This inode is being torn down, pause and try again.
 	 */
-	if (xfs_iflags_test(ip, (XFS_INEW|XFS_IRECLAIM))) {
+	if (ip->i_flags & XFS_IRECLAIM) {
 		XFS_STATS_INC(xs_ig_frecycle);
+		error = EAGAIN;
 		goto out_error;
 	}
 
-	/* If IRECLAIMABLE is set, we've torn down the vfs inode part */
-	if (xfs_iflags_test(ip, XFS_IRECLAIMABLE)) {
+	/*
+	 * If we are racing with another cache hit that is currently recycling
+	 * this inode out of the XFS_IRECLAIMABLE state, wait for the
+	 * initialisation to complete before continuing.
+	 */
+	if (ip->i_flags & XFS_INEW) {
+		spin_unlock(&ip->i_flags_lock);
+		read_unlock(&pag->pag_ici_lock);
 
-		/*
-		 * If lookup is racing with unlink, then we should return an
-		 * error immediately so we don't remove it from the reclaim
-		 * list and potentially leak the inode.
-		 */
-		if ((ip->i_d.di_mode == 0) && !(flags & XFS_IGET_CREATE)) {
-			error = ENOENT;
-			goto out_error;
-		}
+		XFS_STATS_INC(xs_ig_frecycle);
+		wait_on_inode(inode);
+		return EAGAIN;
+	}
 
+	/*
+	 * If lookup is racing with unlink, then we should return an
+	 * error immediately so we don't remove it from the reclaim
+	 * list and potentially leak the inode.
+	 */
+	if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) {
+		error = ENOENT;
+		goto out_error;
+	}
+
+	/*
+	 * If IRECLAIMABLE is set, we've torn down the vfs inode part already.
+	 * Need to carefully get it back into useable state.
+	 */
+	if (ip->i_flags & XFS_IRECLAIMABLE) {
 		xfs_itrace_exit_tag(ip, "xfs_iget.alloc");
 
 		/*
-		 * We need to re-initialise the VFS inode as it has been
-		 * 'freed' by the VFS. Do this here so we can deal with
-		 * errors cleanly, then tag it so it can be set up correctly
-		 * later.
+		 * We need to set XFS_INEW atomically with clearing the
+		 * reclaimable tag so that we do have an indicator of the
+		 * inode still being initialized.
 		 */
-		if (!inode_init_always(mp->m_super, VFS_I(ip))) {
+		ip->i_flags |= XFS_INEW;
+		__xfs_inode_clear_reclaim_tag(pag, ip);
+
+		spin_unlock(&ip->i_flags_lock);
+		read_unlock(&pag->pag_ici_lock);
+
+		if (unlikely(!inode_init_always(mp->m_super, inode))) {
+			printk("node_init_always failed!!\n");
+
+			/*
+			 * Re-initializing the inode failed, and we are in deep
+			 * trouble.  Try to re-add it to the reclaim list.
+			 */
+			read_lock(&pag->pag_ici_lock);
+			spin_lock(&ip->i_flags_lock);
+
+			ip->i_flags &= ~XFS_INEW;
+			__xfs_inode_set_reclaim_tag(pag, ip);
+
 			error = ENOMEM;
 			goto out_error;
 		}
-
-		/*
-		 * We must set the XFS_INEW flag before clearing the
-		 * XFS_IRECLAIMABLE flag so that if a racing lookup does
-		 * not find the XFS_IRECLAIMABLE above but has the igrab()
-		 * below succeed we can safely check XFS_INEW to detect
-		 * that this inode is still being initialised.
-		 */
-		xfs_iflags_set(ip, XFS_INEW);
-		xfs_iflags_clear(ip, XFS_IRECLAIMABLE);
-
-		/* clear the radix tree reclaim flag as well. */
-		__xfs_inode_clear_reclaim_tag(mp, pag, ip);
-	} else if (!igrab(VFS_I(ip))) {
+	} else {
 		/* If the VFS inode is being torn down, pause and try again. */
-		XFS_STATS_INC(xs_ig_frecycle);
-		goto out_error;
-	} else if (xfs_iflags_test(ip, XFS_INEW)) {
-		/*
-		 * We are racing with another cache hit that is
-		 * currently recycling this inode out of the XFS_IRECLAIMABLE
-		 * state. Wait for the initialisation to complete before
-		 * continuing.
-		 */
-		wait_on_inode(VFS_I(ip));
-	}
+		if (!igrab(inode))
+			goto out_error;
 
-	if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) {
-		error = ENOENT;
-		iput(VFS_I(ip));
-		goto out_error;
+		/* We've got a live one. */
+		spin_unlock(&ip->i_flags_lock);
+		read_unlock(&pag->pag_ici_lock);
 	}
 
-	/* We've got a live one. */
-	read_unlock(&pag->pag_ici_lock);
-
 	if (lock_flags != 0)
 		xfs_ilock(ip, lock_flags);
 
@@ -215,6 +224,7 @@ xfs_iget_cache_hit(
 	return 0;
 
 out_error:
+	spin_unlock(&ip->i_flags_lock);
 	read_unlock(&pag->pag_ici_lock);
 	return error;
 }
Index: xfs/fs/xfs/linux-2.6/xfs_sync.c
===================================================================
--- xfs.orig/fs/xfs/linux-2.6/xfs_sync.c	2009-06-04 13:40:09.135939715 +0200
+++ xfs/fs/xfs/linux-2.6/xfs_sync.c	2009-06-04 13:59:17.978816696 +0200
@@ -607,6 +607,17 @@ xfs_reclaim_inode(
 	return 0;
 }
 
+void
+__xfs_inode_set_reclaim_tag(
+	struct xfs_perag	*pag,
+	struct xfs_inode	*ip)
+{
+	xfs_agino_t	agino = XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino);
+
+	radix_tree_tag_set(&pag->pag_ici_root, agino, XFS_ICI_RECLAIM_TAG);
+	__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
+}
+
 /*
  * We set the inode flag atomically with the radix tree tag.
  * Once we get tag lookups on the radix tree, this inode flag
@@ -621,9 +632,7 @@ xfs_inode_set_reclaim_tag(
 
 	read_lock(&pag->pag_ici_lock);
 	spin_lock(&ip->i_flags_lock);
-	radix_tree_tag_set(&pag->pag_ici_root,
-			XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
-	__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
+	__xfs_inode_set_reclaim_tag(pag, ip);
 	spin_unlock(&ip->i_flags_lock);
 	read_unlock(&pag->pag_ici_lock);
 	xfs_put_perag(mp, pag);
@@ -631,30 +640,15 @@ xfs_inode_set_reclaim_tag(
 
 void
 __xfs_inode_clear_reclaim_tag(
-	xfs_mount_t	*mp,
-	xfs_perag_t	*pag,
-	xfs_inode_t	*ip)
-{
-	radix_tree_tag_clear(&pag->pag_ici_root,
-			XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
-}
-
-void
-xfs_inode_clear_reclaim_tag(
-	xfs_inode_t	*ip)
+	struct xfs_perag	*pag,
+	struct xfs_inode	*ip)
 {
-	xfs_mount_t	*mp = ip->i_mount;
-	xfs_perag_t	*pag = xfs_get_perag(mp, ip->i_ino);
+	xfs_agino_t	agino = XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino);
 
-	read_lock(&pag->pag_ici_lock);
-	spin_lock(&ip->i_flags_lock);
-	__xfs_inode_clear_reclaim_tag(mp, pag, ip);
-	spin_unlock(&ip->i_flags_lock);
-	read_unlock(&pag->pag_ici_lock);
-	xfs_put_perag(mp, pag);
+	ip->i_flags &= ~XFS_IRECLAIMABLE;
+	radix_tree_tag_clear(&pag->pag_ici_root, agino, XFS_ICI_RECLAIM_TAG);
 }
 
-
 STATIC void
 xfs_reclaim_inodes_ag(
 	xfs_mount_t	*mp,
Index: xfs/fs/xfs/linux-2.6/xfs_sync.h
===================================================================
--- xfs.orig/fs/xfs/linux-2.6/xfs_sync.h	2009-06-04 13:53:32.994814723 +0200
+++ xfs/fs/xfs/linux-2.6/xfs_sync.h	2009-06-04 13:58:54.746942001 +0200
@@ -51,7 +51,6 @@ int xfs_reclaim_inode(struct xfs_inode *
 int xfs_reclaim_inodes(struct xfs_mount *mp, int noblock, int mode);
 
 void xfs_inode_set_reclaim_tag(struct xfs_inode *ip);
-void xfs_inode_clear_reclaim_tag(struct xfs_inode *ip);
-void __xfs_inode_clear_reclaim_tag(struct xfs_mount *mp, struct xfs_perag *pag,
-				struct xfs_inode *ip);
+void __xfs_inode_set_reclaim_tag(struct xfs_perag *pag, struct xfs_inode *ip);
+void __xfs_inode_clear_reclaim_tag(struct xfs_perag *pag, struct xfs_inode *ip);
 #endif

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-01 12:44                   ` Christoph Hellwig
@ 2009-07-02  7:09                     ` Tommy van Leeuwen
  2009-07-02 17:31                     ` Patrick Schreurs
  1 sibling, 0 replies; 20+ messages in thread
From: Tommy van Leeuwen @ 2009-07-02  7:09 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Patrick Schreurs, linux-xfs, Lachlan McIlroy, Eric Sandeen

On Wed, Jul 1, 2009 at 2:44 PM, Christoph Hellwig<hch@infradead.org> wrote:
> Actually you might want to give this patch a try which fixes a race
> affecting the reclaim tag in iget:

Thanks Christoph, we'll try this out in the next couple of days and
let you know.

Tommy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-01 12:44                   ` Christoph Hellwig
  2009-07-02  7:09                     ` Tommy van Leeuwen
@ 2009-07-02 17:31                     ` Patrick Schreurs
  2009-07-21 14:12                       ` Christoph Hellwig
  1 sibling, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-07-02 17:31 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-xfs, Tommy van Leeuwen, Lachlan McIlroy, Eric Sandeen

[-- Attachment #1: Type: text/plain, Size: 10772 bytes --]

Hi Christoph,

With this patch we see the following:

kernel BUG at fs/inode.c:1288!
invalid opcode: 0000 [#2] SMP
last sysfs file: /sys/devices/system/cpu/cpu3/cache/index2/shared_cpu_map
CPU 1
Modules linked in: acpi_cpufreq cpufreq_ondemand ipmi_si ipmi_devintf 
ipmi_msghandler bonding mptspi 8250_pnp rng_core scsi_transport_spi 
thermal serio_raw processor 8250 serial_core bnx2 thermal_sys
Pid: 8048, comm: diablo Tainted: G      D    2.6.30xfspatch #1 PowerEdge 
1950
RIP: 0010:[<ffffffff8028aaa3>]  [<ffffffff8028aaa3>] iput+0x13/0x60
RSP: 0018:ffff88007ec6db58  EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88022cbef5c0 RCX: ffff88017d1edd30
RDX: ffff88022cbef5f0 RSI: ffff88017d1edcc8 RDI: ffff88022cbef5c0
RBP: ffff8801383ae788 R08: ffff88007ec6db98 R09: 0000000000000246
R10: ffff88008c2156a0 R11: ffffffff8028b7a8 R12: ffff88022e831c00
R13: ffff88007ec6db98 R14: ffff88022e831d18 R15: ffff88007ec6dc0c
FS:  0000000001495860(0063) GS:ffff88002804d000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007fa14f9fa000 CR3: 000000007ee5c000 CR4: 00000000000006a0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process diablo (pid: 8048, threadinfo ffff88007ec6c000, task 
ffff8800855926f0)
Stack:
  ffff88017d1edcc0 ffffffff802884f7 ffff88008c2156a0 ffff88017d1edcc0
  ffff88022e831c00 ffffffff80288783 00000000000000c0 0000000000000008
  ffff8800b9ad5a00 ffff880138304ac0 ffff88007ec6dba8 ffff88007ec6dba8
Call Trace:
  [<ffffffff802884f7>] ? d_kill+0x34/0x55
  [<ffffffff80288783>] ? __shrink_dcache_sb+0x26b/0x301
  [<ffffffff802888f8>] ? shrink_dcache_memory+0xdf/0x16e
  [<ffffffff8025e3dd>] ? shrink_slab+0xe0/0x153
  [<ffffffff8025efa6>] ? try_to_free_pages+0x22e/0x31b
  [<ffffffff8025c68a>] ? isolate_pages_global+0x0/0x231
  [<ffffffff80259543>] ? __alloc_pages_internal+0x25f/0x3ff
  [<ffffffff8025b05a>] ? __do_page_cache_readahead+0xab/0x1b1
  [<ffffffff8025b218>] ? force_page_cache_readahead+0x57/0x7e
  [<ffffffff80264164>] ? sys_madvise+0x394/0x4e0
  [<ffffffff8020ae2b>] ? system_call_fastpath+0x16/0x1b
Code: 4b 70 be 01 00 00 00 48 89 df e8 f8 86 00 00 eb db 48 83 c4 28 5b 
5d c3 53 48 85 ff 48 89 fb 74 55 48 83 bf f8 01 00 00 40 75 04 <0f> 0b 
eb fe 48 8d 7f 48 48 c7 c6 f0 aa 5b 80 e8 51 4b 0a 00 85
RIP  [<ffffffff8028aaa3>] iput+0x13/0x60
  RSP <ffff88007ec6db58>
---[ end trace 06a9d5e318d14bf7 ]---

This server also crashed twice. Unfortunately I don't have complete
logging of this event. See the attachment for a partial log.

Thanks for looking into this.

Patrick Schreurs

Christoph Hellwig wrote:
> Actually you might want to give this patch a try which fixes a race
> affecting the reclaim tag in iget:
> 
> 
> Index: xfs/fs/xfs/xfs_iget.c
> ===================================================================
> --- xfs.orig/fs/xfs/xfs_iget.c	2009-06-04 13:27:41.901946950 +0200
> +++ xfs/fs/xfs/xfs_iget.c	2009-06-04 14:08:08.837816707 +0200
> @@ -132,80 +132,89 @@ xfs_iget_cache_hit(
>  	int			flags,
>  	int			lock_flags) __releases(pag->pag_ici_lock)
>  {
> +	struct inode		*inode = VFS_I(ip);
>  	struct xfs_mount	*mp = ip->i_mount;
> -	int			error = EAGAIN;
> +	int			error;
> +
> +	spin_lock(&ip->i_flags_lock);
>  
>  	/*
> -	 * If INEW is set this inode is being set up
> -	 * If IRECLAIM is set this inode is being torn down
> -	 * Pause and try again.
> +	 * This inode is being torn down, pause and try again.
>  	 */
> -	if (xfs_iflags_test(ip, (XFS_INEW|XFS_IRECLAIM))) {
> +	if (ip->i_flags & XFS_IRECLAIM) {
>  		XFS_STATS_INC(xs_ig_frecycle);
> +		error = EAGAIN;
>  		goto out_error;
>  	}
>  
> -	/* If IRECLAIMABLE is set, we've torn down the vfs inode part */
> -	if (xfs_iflags_test(ip, XFS_IRECLAIMABLE)) {
> +	/*
> +	 * If we are racing with another cache hit that is currently recycling
> +	 * this inode out of the XFS_IRECLAIMABLE state, wait for the
> +	 * initialisation to complete before continuing.
> +	 */
> +	if (ip->i_flags & XFS_INEW) {
> +		spin_unlock(&ip->i_flags_lock);
> +		read_unlock(&pag->pag_ici_lock);
>  
> -		/*
> -		 * If lookup is racing with unlink, then we should return an
> -		 * error immediately so we don't remove it from the reclaim
> -		 * list and potentially leak the inode.
> -		 */
> -		if ((ip->i_d.di_mode == 0) && !(flags & XFS_IGET_CREATE)) {
> -			error = ENOENT;
> -			goto out_error;
> -		}
> +		XFS_STATS_INC(xs_ig_frecycle);
> +		wait_on_inode(inode);
> +		return EAGAIN;
> +	}
>  
> +	/*
> +	 * If lookup is racing with unlink, then we should return an
> +	 * error immediately so we don't remove it from the reclaim
> +	 * list and potentially leak the inode.
> +	 */
> +	if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) {
> +		error = ENOENT;
> +		goto out_error;
> +	}
> +
> +	/*
> +	 * If IRECLAIMABLE is set, we've torn down the vfs inode part already.
> +	 * Need to carefully get it back into useable state.
> +	 */
> +	if (ip->i_flags & XFS_IRECLAIMABLE) {
>  		xfs_itrace_exit_tag(ip, "xfs_iget.alloc");
>  
>  		/*
> -		 * We need to re-initialise the VFS inode as it has been
> -		 * 'freed' by the VFS. Do this here so we can deal with
> -		 * errors cleanly, then tag it so it can be set up correctly
> -		 * later.
> +		 * We need to set XFS_INEW atomically with clearing the
> +		 * reclaimable tag so that we do have an indicator of the
> +		 * inode still being initialized.
>  		 */
> -		if (!inode_init_always(mp->m_super, VFS_I(ip))) {
> +		ip->i_flags |= XFS_INEW;
> +		__xfs_inode_clear_reclaim_tag(pag, ip);
> +
> +		spin_unlock(&ip->i_flags_lock);
> +		read_unlock(&pag->pag_ici_lock);
> +
> +		if (unlikely(!inode_init_always(mp->m_super, inode))) {
> +			printk("node_init_always failed!!\n");
> +
> +			/*
> +			 * Re-initializing the inode failed, and we are in deep
> +			 * trouble.  Try to re-add it to the reclaim list.
> +			 */
> +			read_lock(&pag->pag_ici_lock);
> +			spin_lock(&ip->i_flags_lock);
> +
> +			ip->i_flags &= ~XFS_INEW;
> +			__xfs_inode_set_reclaim_tag(pag, ip);
> +
>  			error = ENOMEM;
>  			goto out_error;
>  		}
> -
> -		/*
> -		 * We must set the XFS_INEW flag before clearing the
> -		 * XFS_IRECLAIMABLE flag so that if a racing lookup does
> -		 * not find the XFS_IRECLAIMABLE above but has the igrab()
> -		 * below succeed we can safely check XFS_INEW to detect
> -		 * that this inode is still being initialised.
> -		 */
> -		xfs_iflags_set(ip, XFS_INEW);
> -		xfs_iflags_clear(ip, XFS_IRECLAIMABLE);
> -
> -		/* clear the radix tree reclaim flag as well. */
> -		__xfs_inode_clear_reclaim_tag(mp, pag, ip);
> -	} else if (!igrab(VFS_I(ip))) {
> +	} else {
>  		/* If the VFS inode is being torn down, pause and try again. */
> -		XFS_STATS_INC(xs_ig_frecycle);
> -		goto out_error;
> -	} else if (xfs_iflags_test(ip, XFS_INEW)) {
> -		/*
> -		 * We are racing with another cache hit that is
> -		 * currently recycling this inode out of the XFS_IRECLAIMABLE
> -		 * state. Wait for the initialisation to complete before
> -		 * continuing.
> -		 */
> -		wait_on_inode(VFS_I(ip));
> -	}
> +		if (!igrab(inode))
> +			goto out_error;
>  
> -	if (ip->i_d.di_mode == 0 && !(flags & XFS_IGET_CREATE)) {
> -		error = ENOENT;
> -		iput(VFS_I(ip));
> -		goto out_error;
> +		/* We've got a live one. */
> +		spin_unlock(&ip->i_flags_lock);
> +		read_unlock(&pag->pag_ici_lock);
>  	}
>  
> -	/* We've got a live one. */
> -	read_unlock(&pag->pag_ici_lock);
> -
>  	if (lock_flags != 0)
>  		xfs_ilock(ip, lock_flags);
>  
> @@ -215,6 +224,7 @@ xfs_iget_cache_hit(
>  	return 0;
>  
>  out_error:
> +	spin_unlock(&ip->i_flags_lock);
>  	read_unlock(&pag->pag_ici_lock);
>  	return error;
>  }
> Index: xfs/fs/xfs/linux-2.6/xfs_sync.c
> ===================================================================
> --- xfs.orig/fs/xfs/linux-2.6/xfs_sync.c	2009-06-04 13:40:09.135939715 +0200
> +++ xfs/fs/xfs/linux-2.6/xfs_sync.c	2009-06-04 13:59:17.978816696 +0200
> @@ -607,6 +607,17 @@ xfs_reclaim_inode(
>  	return 0;
>  }
>  
> +void
> +__xfs_inode_set_reclaim_tag(
> +	struct xfs_perag	*pag,
> +	struct xfs_inode	*ip)
> +{
> +	xfs_agino_t	agino = XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino);
> +
> +	radix_tree_tag_set(&pag->pag_ici_root, agino, XFS_ICI_RECLAIM_TAG);
> +	__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
> +}
> +
>  /*
>   * We set the inode flag atomically with the radix tree tag.
>   * Once we get tag lookups on the radix tree, this inode flag
> @@ -621,9 +632,7 @@ xfs_inode_set_reclaim_tag(
>  
>  	read_lock(&pag->pag_ici_lock);
>  	spin_lock(&ip->i_flags_lock);
> -	radix_tree_tag_set(&pag->pag_ici_root,
> -			XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
> -	__xfs_iflags_set(ip, XFS_IRECLAIMABLE);
> +	__xfs_inode_set_reclaim_tag(pag, ip);
>  	spin_unlock(&ip->i_flags_lock);
>  	read_unlock(&pag->pag_ici_lock);
>  	xfs_put_perag(mp, pag);
> @@ -631,30 +640,15 @@ xfs_inode_set_reclaim_tag(
>  
>  void
>  __xfs_inode_clear_reclaim_tag(
> -	xfs_mount_t	*mp,
> -	xfs_perag_t	*pag,
> -	xfs_inode_t	*ip)
> -{
> -	radix_tree_tag_clear(&pag->pag_ici_root,
> -			XFS_INO_TO_AGINO(mp, ip->i_ino), XFS_ICI_RECLAIM_TAG);
> -}
> -
> -void
> -xfs_inode_clear_reclaim_tag(
> -	xfs_inode_t	*ip)
> +	struct xfs_perag	*pag,
> +	struct xfs_inode	*ip)
>  {
> -	xfs_mount_t	*mp = ip->i_mount;
> -	xfs_perag_t	*pag = xfs_get_perag(mp, ip->i_ino);
> +	xfs_agino_t	agino = XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino);
>  
> -	read_lock(&pag->pag_ici_lock);
> -	spin_lock(&ip->i_flags_lock);
> -	__xfs_inode_clear_reclaim_tag(mp, pag, ip);
> -	spin_unlock(&ip->i_flags_lock);
> -	read_unlock(&pag->pag_ici_lock);
> -	xfs_put_perag(mp, pag);
> +	ip->i_flags &= ~XFS_IRECLAIMABLE;
> +	radix_tree_tag_clear(&pag->pag_ici_root, agino, XFS_ICI_RECLAIM_TAG);
>  }
>  
> -
>  STATIC void
>  xfs_reclaim_inodes_ag(
>  	xfs_mount_t	*mp,
> Index: xfs/fs/xfs/linux-2.6/xfs_sync.h
> ===================================================================
> --- xfs.orig/fs/xfs/linux-2.6/xfs_sync.h	2009-06-04 13:53:32.994814723 +0200
> +++ xfs/fs/xfs/linux-2.6/xfs_sync.h	2009-06-04 13:58:54.746942001 +0200
> @@ -51,7 +51,6 @@ int xfs_reclaim_inode(struct xfs_inode *
>  int xfs_reclaim_inodes(struct xfs_mount *mp, int noblock, int mode);
>  
>  void xfs_inode_set_reclaim_tag(struct xfs_inode *ip);
> -void xfs_inode_clear_reclaim_tag(struct xfs_inode *ip);
> -void __xfs_inode_clear_reclaim_tag(struct xfs_mount *mp, struct xfs_perag *pag,
> -				struct xfs_inode *ip);
> +void __xfs_inode_set_reclaim_tag(struct xfs_perag *pag, struct xfs_inode *ip);
> +void __xfs_inode_clear_reclaim_tag(struct xfs_perag *pag, struct xfs_inode *ip);
>  #endif
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

[-- Attachment #2: sb02-20090702.jpg --]
[-- Type: image/jpeg, Size: 80654 bytes --]

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-06-30 20:42                   ` Christoph Hellwig
@ 2009-07-20 19:19                     ` Patrick Schreurs
  2009-07-20 20:14                       ` Christoph Hellwig
  0 siblings, 1 reply; 20+ messages in thread
From: Patrick Schreurs @ 2009-07-20 19:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-xfs, Tommy van Leeuwen, Lachlan McIlroy, Eric Sandeen

Hi Christoph,

I saw some patches from you on this list ("fixes for memory
allocator recursions into the filesystem"). Are these patches related to
this issue?

Thanks,

-Patrick

Christoph Hellwig wrote:
> On Tue, Jun 30, 2009 at 10:13:57PM +0200, Patrick Schreurs wrote:
>> Hi (again),
>>
>> Anyone has any advice to prevent this from happening? We've seen 10  
>> crashes in the last 14 days. Would it be helpful to enable  
>> CONFIG_XFS_DEBUG? Does this result in a big performance hit on busy xfs  
>> filesystems? If we can help troubleshoot this problem, please advice.
>>
>> If i understand correctly this issue also exists in 2.6.29? Should i  
>> downgrade to the latest 2.6.28 kernel to regain stability?
> 
> For now please downgrade to the latest 2.6.28, yes.  I hope I will have
> time and machine ressources to dig deeper into the problem this week.
> 
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-20 19:19                     ` Patrick Schreurs
@ 2009-07-20 20:14                       ` Christoph Hellwig
  0 siblings, 0 replies; 20+ messages in thread
From: Christoph Hellwig @ 2009-07-20 20:14 UTC (permalink / raw)
  To: Patrick Schreurs
  Cc: Christoph Hellwig, linux-xfs, Tommy van Leeuwen, Lachlan McIlroy,
	Eric Sandeen

On Mon, Jul 20, 2009 at 09:19:43PM +0200, Patrick Schreurs wrote:
> Hi Christoph,
>
> I saw some patches from your hand on this list ("fixes for memory  
> allocator recursions into the filesystem"). Are these patches related to  
> this issue?

I don't think they are, but if you have spare testing cycles it would
be great if you could test it.  I'm a bit overloaded right now and can't
do as much debugging of these problems as I'd like to.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-02 17:31                     ` Patrick Schreurs
@ 2009-07-21 14:12                       ` Christoph Hellwig
  2009-07-22  8:55                         ` Tommy van Leeuwen
  0 siblings, 1 reply; 20+ messages in thread
From: Christoph Hellwig @ 2009-07-21 14:12 UTC (permalink / raw)
  To: Patrick Schreurs
  Cc: Christoph Hellwig, linux-xfs, Tommy van Leeuwen, Lachlan McIlroy,
	Eric Sandeen

On Thu, Jul 02, 2009 at 07:31:30PM +0200, Patrick Schreurs wrote:
> Hi Christoph,
>
> With this patch we see the following:
>
> kernel BUG at fs/inode.c:1288!

Okay, I think I figured out what this is.  You hit the case where
we re-use an inode that is gone from the VFS point of view, but
still in xfs reclaimable state.  We reinitialize it using
inode_init_always, but inode_init_always does not touch i_state, which
still includes I_CLEAR.  See the patch below which sets it to the
expected state.  What really worries me is that I don't seem to be
able to actually hit that case in testing.
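
(For reference, fs/inode.c:1288 should be the sanity check at the top of
iput(), which in 2.6.30 looks more or less like this:)

void iput(struct inode *inode)
{
	if (inode) {
		/* an inode left in I_CLEAR must never be iput() again */
		BUG_ON(inode->i_state == I_CLEAR);

		if (atomic_dec_and_lock(&inode->i_count, &inode_lock))
			iput_final(inode);
	}
}

Since inode_init_always() leaves i_state untouched, the recycled inode still
carries I_CLEAR from its earlier teardown, and the next iput() trips that
BUG_ON.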

Can you try the patch below ontop of the previous one?


Index: linux-2.6/fs/xfs/xfs_iget.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_iget.c	2009-07-21 16:07:41.654923213 +0200
+++ linux-2.6/fs/xfs/xfs_iget.c	2009-07-21 16:08:55.064151137 +0200
@@ -206,6 +206,7 @@ xfs_iget_cache_hit(
 			error = ENOMEM;
 			goto out_error;
 		}
+		inode->i_state = I_LOCK|I_NEW;
 	} else {
 		/* If the VFS inode is being torn down, pause and try again. */
 		if (!igrab(inode))

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-21 14:12                       ` Christoph Hellwig
@ 2009-07-22  8:55                         ` Tommy van Leeuwen
  2009-08-17 21:14                           ` Christoph Hellwig
  0 siblings, 1 reply; 20+ messages in thread
From: Tommy van Leeuwen @ 2009-07-22  8:55 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Patrick Schreurs, linux-xfs, Lachlan McIlroy, Eric Sandeen

Hi,

On Tue, Jul 21, 2009 at 4:12 PM, Christoph Hellwig<hch@infradead.org> wrote:
> On Thu, Jul 02, 2009 at 07:31:30PM +0200, Patrick Schreurs wrote:
>> Hi Christoph,
>>
>> With this patch we see the following:
>>
>> kernel BUG at fs/inode.c:1288!
>
> Okay, I think I figured out what this is.  You hit the case where
> we re-use an inode that is gone from the VFS point of view, but
> still in xfs reclaimable state.  We reinitialize it using
> inode_init_always, but inode_init_always does not touch i_state, which
> still includes I_CLEAR.  See the patch below which sets it to the
> expected state.  What really worries me is that I don't seem to be
> able to actually hit that case in testing.
>
> Can you try the patch below ontop of the previous one?
>
>
> Index: linux-2.6/fs/xfs/xfs_iget.c
> ===================================================================
> --- linux-2.6.orig/fs/xfs/xfs_iget.c    2009-07-21 16:07:41.654923213 +0200
> +++ linux-2.6/fs/xfs/xfs_iget.c 2009-07-21 16:08:55.064151137 +0200
> @@ -206,6 +206,7 @@ xfs_iget_cache_hit(
>                        error = ENOMEM;
>                        goto out_error;
>                }
> +               inode->i_state = I_LOCK|I_NEW;
>        } else {
>                /* If the VFS inode is being torn down, pause and try again. */
>                if (!igrab(inode))
>

Unfortunately we still get errors with this patch on top of the
previous one. The difference is that it now crashes within an hour
instead of once a week, so that might be good for troubleshooting.

Jul 22 10:46:13 sb07 kernel: ------------[ cut here ]------------
Jul 22 10:46:13 sb07 kernel: kernel BUG at fs/inode.c:1288!
Jul 22 10:46:13 sb07 kernel: invalid opcode: 0000 [#1] SMP
Jul 22 10:46:13 sb07 kernel: last sysfs file:
/sys/devices/system/cpu/cpu3/cache/index2/shared_cpu_map
Jul 22 10:46:13 sb07 kernel: CPU 3
Jul 22 10:46:13 sb07 kernel: Modules linked in: cpufreq_ondemand
acpi_cpufreq ipmi_si ipmi_devintf ipmi_msghandler bonding rng_core
serio_raw e1000e bnx2 thermal processor 8250_pnp 8250 serial_core
thermal_sys
Jul 22 10:46:13 sb07 kernel: Pid: 251, comm: kswapd0 Not tainted
2.6.30.1-xfs #5 PowerEdge 1950
Jul 22 10:46:13 sb07 kernel: RIP: 0010:[<ffffffff8028aaab>]
[<ffffffff8028aaab>] iput+0x13/0x60
Jul 22 10:46:13 sb07 kernel: RSP: 0000:ffff88043fab1cb0  EFLAGS: 00010246
Jul 22 10:46:13 sb07 kernel: RAX: 0000000000000000 RBX:
ffff88006fcc7980 RCX: ffff88026411aaf0
Jul 22 10:46:13 sb07 kernel: RDX: ffff88006fcc79b0 RSI:
ffff88026411aa88 RDI: ffff88006fcc7980
Jul 22 10:46:13 sb07 kernel: RBP: ffff880373b1ecc8 R08:
ffff88043fab1cf0 R09: 0000000000000246
Jul 22 10:46:13 sb07 kernel: R10: 0000000000000010 R11:
ffffffff8028b7b0 R12: ffff88043c7f0400
Jul 22 10:46:13 sb07 kernel: R13: ffff88043fab1cf0 R14:
ffff88043c7f0518 R15: ffff88043fab1d64
Jul 22 10:46:13 sb07 kernel: FS:  0000000000000000(0000)
GS:ffff88002807f000(0000) knlGS:0000000000000000
Jul 22 10:46:13 sb07 kernel: CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
Jul 22 10:46:13 sb07 kernel: CR2: 00007f4078709320 CR3:
000000043bd8f000 CR4: 00000000000006a0
Jul 22 10:46:13 sb07 kernel: DR0: 0000000000000000 DR1:
0000000000000000 DR2: 0000000000000000
Jul 22 10:46:13 sb07 kernel: DR3: 0000000000000000 DR6:
00000000ffff0ff0 DR7: 0000000000000400
Jul 22 10:46:13 sb07 kernel: Process kswapd0 (pid: 251, threadinfo
ffff88043fab0000, task ffff88043fa726f0)
Jul 22 10:46:13 sb07 kernel: Stack:
Jul 22 10:46:13 sb07 kernel: ffff88026411aa80 ffffffff802884ff
0000000000000010 ffff88026411aa80
Jul 22 10:46:13 sb07 kernel: ffff8802978404c0 ffff8802978404c0
ffff88043fab1d00 ffff88043fab1d00
Jul 22 10:46:13 sb07 kernel: Call Trace:
Jul 22 10:46:13 sb07 kernel: [<ffffffff8028878b>] ?
__shrink_dcache_sb+0x26b/0x301
Jul 22 10:46:13 sb07 kernel: [<ffffffff80288900>] ?
shrink_dcache_memory+0xdf/0x16e
Jul 22 10:46:13 sb07 kernel: [<ffffffff8025e8a0>] ? kswapd+0x448/0x5bf
Jul 22 10:46:13 sb07 kernel: ffff88043c7f0400 ffffffff8028878b
00000000000000c0 0000000000000008
Jul 22 10:46:13 sb07 kernel: [<ffffffff802884ff>] ? d_kill+0x34/0x55
Jul 22 10:46:13 sb07 kernel: [<ffffffff8025e3e5>] ? shrink_slab+0xe0/0x153
Jul 22 10:46:13 sb07 kernel: [<ffffffff8025c692>] ?
isolate_pages_global+0x0/0x231
Jul 22 10:46:13 sb07 kernel: [<ffffffff8025e458>] ? kswapd+0x0/0x5bf
Jul 22 10:46:13 sb07 kernel: [<ffffffff8020bd7a>] ? child_rip+0xa/0x20
Jul 22 10:46:13 sb07 kernel: [<ffffffff8023e2d1>] ?
autoremove_wake_function+0x0/0x2e
Jul 22 10:46:13 sb07 kernel: [<ffffffff8025e458>] ? kswapd+0x0/0x5bf
Jul 22 10:46:13 sb07 kernel: [<ffffffff8023df28>] ? kthread+0x0/0x80
Jul 22 10:46:13 sb07 kernel: [<ffffffff802243ec>] ? __wake_up_common+0x44/0x73
Jul 22 10:46:13 sb07 kernel: [<ffffffff8023df7c>] ? kthread+0x54/0x80
Jul 22 10:46:13 sb07 kernel: [<ffffffff8020bd70>] ? child_rip+0x0/0x20
Jul 22 10:46:13 sb07 kernel: Code: 4b 70 be 01 00 00 00 48 89 df e8 c2
86 00 00 eb db 48 83 c4 28 5b 5d c3 53 48 85 ff 48 89 fb 74 55 48 83
bf f8 01 00 00 40 75 04 <0f> 0b eb fe 48 8d 7f 48 48 c7 c6 f0 2a 5c 80
e8 25 4b 0a 00 85
Jul 22 10:46:13 sb07 kernel: RIP  [<ffffffff8028aaab>] iput+0x13/0x60
Jul 22 10:46:13 sb07 kernel: RSP <ffff88043fab1cb0>
Jul 22 10:46:13 sb07 kernel: ---[ end trace 2d9673758108d2e3 ]---


Thanks,
Tommy

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-07-22  8:55                         ` Tommy van Leeuwen
@ 2009-08-17 21:14                           ` Christoph Hellwig
  2009-08-20 12:24                             ` Tommy van Leeuwen
  0 siblings, 1 reply; 20+ messages in thread
From: Christoph Hellwig @ 2009-08-17 21:14 UTC (permalink / raw)
  To: Tommy van Leeuwen
  Cc: Christoph Hellwig, Patrick Schreurs, Lachlan McIlroy, linux-xfs,
	Eric Sandeen

On Wed, Jul 22, 2009 at 10:55:54AM +0200, Tommy van Leeuwen wrote:
> Unfortunately we still get errors, with this patch on top of the
> previous one: The difference is that is now crashes within an hour
> instead of once a week, so that might be good for troubleshooting.

Hi Tommy, and sorry for dropping the ball on this. I didn't remember this
mail until I went looking for more reporters of the inode-related
problems.

Current mainline (Linus' git as of today) has a lot of the fixes in this
area; any chance I could trick you into trying it?  Maybe including the
debug patch below, which adds a printk to the one culprit that I think
might remain:

Also, if it still happens, any chance you could send the output of the
dmesg command instead of the syslog files?  For some reason syslogd
usually eats up some bits of the kernel oops message.


Index: linux-2.6/fs/xfs/xfs_iget.c
===================================================================
--- linux-2.6.orig/fs/xfs/xfs_iget.c	2009-08-17 18:08:39.563217129 -0300
+++ linux-2.6/fs/xfs/xfs_iget.c	2009-08-17 18:09:12.999316531 -0300
@@ -242,6 +242,8 @@ xfs_iget_cache_hit(
 
 		error = -inode_init_always(mp->m_super, inode);
 		if (error) {
+			printk("XFS: inode_init_always failed to re-initialize inode\n");
+
 			/*
 			 * Re-initializing the inode failed, and we are in deep
 			 * trouble.  Try to re-add it to the reclaim list.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: 2.6.30 panic - xfs_fs_destroy_inode
  2009-08-17 21:14                           ` Christoph Hellwig
@ 2009-08-20 12:24                             ` Tommy van Leeuwen
  0 siblings, 0 replies; 20+ messages in thread
From: Tommy van Leeuwen @ 2009-08-20 12:24 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Patrick Schreurs, linux-xfs, Lachlan McIlroy, Eric Sandeen

On Mon, Aug 17, 2009 at 11:14 PM, Christoph Hellwig<hch@infradead.org> wrote:
> On Wed, Jul 22, 2009 at 10:55:54AM +0200, Tommy van Leeuwen wrote:
>> Unfortunately we still get errors, with this patch on top of the
>> previous one: The difference is that is now crashes within an hour
>> instead of once a week, so that might be good for troubleshooting.
>
> Hi Tommy and sorry for dropping the ball on this, I didn't remember this
> mail anymore until I look for more reporters of the inode related
> problems.

Hi Chris, no problem. We're always busy ourselves too. We will check out
the new fixes, but we need some time to deploy them. We will include the
extra debug patch as well, just in case. We'll report back when we have
some news.

Cheers,
Tommy


>
> Current mainline (Linus' git as of today) has a lot of the fixes in this
> area, any chance I could trick you into trying it?  Maybe including the
> debug patch below which adds a printk to that one culprit that I thing
> might remain:
>
> Also if it still happens any chance you could send output of the dmesg
> command instead of the syslog files?  For some reason syslogd usually
> eats up some bits of kernel oops message..
>
>
> Index: linux-2.6/fs/xfs/xfs_iget.c
> ===================================================================
> --- linux-2.6.orig/fs/xfs/xfs_iget.c    2009-08-17 18:08:39.563217129 -0300
> +++ linux-2.6/fs/xfs/xfs_iget.c 2009-08-17 18:09:12.999316531 -0300
> @@ -242,6 +242,8 @@ xfs_iget_cache_hit(
>
>                error = -inode_init_always(mp->m_super, inode);
>                if (error) {
> +                       printk("XFS: inode_init_always failed to re-initialize inode\n");
> +
>                        /*
>                         * Re-initializing the inode failed, and we are in deep
>                         * trouble.  Try to re-add it to the reclaim list.
>

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2009-08-20 12:24 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-17 17:04 2.6.30 panic - xfs_fs_destroy_inode Patrick Schreurs
2009-06-17 21:31 ` Eric Sandeen
2009-06-18  7:55   ` Patrick Schreurs
2009-06-20 10:18     ` Patrick Schreurs
2009-06-20 13:29       ` Eric Sandeen
2009-06-20 16:31         ` Patrick Schreurs
2009-06-23  7:24           ` Patrick Schreurs
2009-06-23  8:17             ` Lachlan McIlroy
2009-06-23 17:13               ` Christoph Hellwig
2009-06-30 20:13                 ` Patrick Schreurs
2009-06-30 20:42                   ` Christoph Hellwig
2009-07-20 19:19                     ` Patrick Schreurs
2009-07-20 20:14                       ` Christoph Hellwig
2009-07-01 12:44                   ` Christoph Hellwig
2009-07-02  7:09                     ` Tommy van Leeuwen
2009-07-02 17:31                     ` Patrick Schreurs
2009-07-21 14:12                       ` Christoph Hellwig
2009-07-22  8:55                         ` Tommy van Leeuwen
2009-08-17 21:14                           ` Christoph Hellwig
2009-08-20 12:24                             ` Tommy van Leeuwen
