* xfssyncd and disk spin down
@ 2010-12-23 16:55 Petre Rodan
2010-12-23 19:29 ` Stan Hoeppner
` (2 more replies)
0 siblings, 3 replies; 29+ messages in thread
From: Petre Rodan @ 2010-12-23 16:55 UTC (permalink / raw)
To: xfs
[-- Attachment #1.1: Type: text/plain, Size: 3850 bytes --]
Hello,
I have a problem with a hard drive that never managed to spin down. this drive is a storage space, not a system disk, the only thing that generated writes is the nfs server that exports its contents. it has only one large xfs partition on it.
upon closer inspection it turns out that after the first Write action to that partition, an xfssyncd process continues to write to that partition each 36 seconds and it doesn't stop doing that, even if there are no more Writes from the exterior. this keeps the drive busy with varying consequences. more about that later.
I found that the only easy way to stop the xfssyncd process poking the drive is to run `mount -o remount /mnt/space`. this stops any internal xfs process from accessing the drive, thus allowing it to spin down and only be woken up by an NFS access.
here are some simple steps to replicate the problem:
# echo 3 > /proc/sys/vm/drop_caches # free cached fs entities
# ( blktrace -d /dev/sdb -o - | blkparse -i - ) &
# mount -o remount /mnt/space
# find /mnt/space/ -type f > /dev/null # generate some non-cached Read requests
# # absolutely no writes have been performed to the drive,
# # it could spin down now if enough time would pass
# touch /mnt/space/foo
# # process 1352 will start writing to the drive at a 35-36s interval,
# # even if there has been no other write request.
8,16 1 36591 6306.873151576 1352 A WBS 976985862 + 2 <- (8,17) 976985799
8,16 1 36592 6306.873152998 1352 Q WBS 976985862 + 2 [xfssyncd/sdb1]
[..]
8,16 1 36600 6342.875151286 1352 A WBS 976985864 + 2 <- (8,17) 976985801
8,16 1 36601 6342.875152938 1352 Q WBS 976985864 + 2 [xfssyncd/sdb1]
[..]
8,16 1 36609 6378.877225211 1352 A WBS 976985866 + 2 <- (8,17) 976985803
8,16 1 36610 6378.877226935 1352 Q WBS 976985866 + 2 [xfssyncd/sdb1]
there was no file at or near the 976985799 inode (I presume that's an inode?)
I found that the only way to stop it is to remount the partition. I also tried sync(1), but to no avail.
so is there an XFS option somewhere that would make the filesystem more forgiving to the hardware beneath it? without losing the journal of course.
I'm using a vanilla 2.6.36.2 kernel patched with grsecurity, default mkfs.xfs options, rw,nosuid,nodev,noexec,noatime,attr2,noquota mount options, and xfs_info looks like this:
meta-data=/dev/sdb1 isize=256 agcount=4, agsize=61047500 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=244190000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=32768, version=2
= sectsz=512 sunit=0 blks, lazy-count=0
realtime =none extsz=4096 blocks=0, rtextents=0
a probably related issue concerns modern Western Digital/Hitachi hard drives that use ramp load/unload technology which automatically parks the heads after stupidly short inactivity intervals (some as short as 8 seconds), so look what happens when such a drive is used with xfs:
# smartctl -a /dev/sda
[..]
Device Model: WDC WD20EARS-00MVWB0
Serial Number: WD-WCAZA0101731
Firmware Version: 50.0AB50
[..]
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 17
[..]
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3351
[..]
193 Load_Cycle_Count 0x0032 066 066 000 Old_age Always - 403405
this hard drive has exceeded its 300k load/unload maximum from the specs in only 140 days, which means it was woken up every 30s or so. not willingly.
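a quick shell sanity check of that figure (the ~140 days follows from the Power_On_Hours value above, 3351 h / 24):

```shell
# average seconds between load/unload cycles, from the SMART data above:
# ~140 days of power-on time and 403405 load cycles.
days=140
cycles=403405
interval=$(( days * 86400 / cycles ))
echo "one load/unload every ${interval}s"   # prints 29, i.e. roughly every 30s
```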
cheers,
peter
[-- Attachment #1.2: Type: application/pgp-signature, Size: 198 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfssyncd and disk spin down
2010-12-23 16:55 xfssyncd and disk spin down Petre Rodan
@ 2010-12-23 19:29 ` Stan Hoeppner
2010-12-23 21:16 ` Petre Rodan
2010-12-24 18:17 ` Eric Sandeen
2010-12-27 2:19 ` Dave Chinner
2 siblings, 1 reply; 29+ messages in thread
From: Stan Hoeppner @ 2010-12-23 19:29 UTC (permalink / raw)
To: xfs
Petre Rodan put forth on 12/23/2010 10:55 AM:
> this hard drive has exceeded its 300k load/unload maximum from the specs in only 140 days, which means it was woken up every 30s or so. not willingly.
Contact WD and request a warranty replacement for the drive due to
exceeding the spec'd cycle max in less than 6 months. They may not
honor your request. Whether they do or not, I'd replace the drive as
the mechanism obviously has too much wear on it already, and within one
year of use will have well over double the spec'd cycles. If you
replace it with another 20EARS, replace the firmware immediately as
mentioned below to decrease the load/unload rate. (It would be nice if
they offered the ability to disable the sleep mode totally, but then it
wouldn't be "green" pfft).
Modern drives consume so little power compared to CPUs and GPUs I can't
understand the big power save quest the HD industry has embarked upon,
especially with potential consequences to drive life you and others are
seeing. I can understand the need WRT huge drive count enterprise near
line disk arrays, but not the average desktop or server. And the EARS
series drives are anything but enterprise caliber. Note the power specs
for the WD20EARS:
Current Requirements
12 VDC
Read/Write 296 mA
Idle 243 mA
Standby 3 mA
Sleep 3 mA
5 VDC
Read/Write 503 mA
Idle 172 mA
Standby 160 mA
Sleep 160 mA
Power Dissipation
Read/Write 6.00 Watts
Idle 3.70 Watts
Standby 0.80 Watts
Sleep 0.80 Watts
6 watts maximum draw for read/write and 3.7 watts idle. Sure, the sleep
mode consumes 10x less power, but when you're already at less than 4
watts idle it doesn't matter except in huge disk arrays. For
comparison, your average dual core cpu today consumes an average of
20-50 watts or more. Power supplies burn more wasted juice than
anything, so get a 90% efficiency PSU and disable all the power save
features of your disk drives to prolong their lifespan.
To replace the 20EARS firmware:
From:
http://community.wdc.com/t5/Desktop/Green-Caviar-High-Load-Cycle-Cout-after-short-operation-time/td-p/15731/page/2
Re: Green Caviar: High Load Cycle Count after short operation time
04-08-2010 04:03 PM
Sorry, but our agent didn't know that this policy was just changed.
Current WDIDLE3 works with the RE and GP drives listed below.
RE Drives - WD1000FYPS-01ZKB0, WD7500AYPS-01ZKB0, WD7501AYPS-01ZKB0
GP Drives - WD20EADS, WD20EARS, WD15EADS, WD15EARS, WD10EADS, WD10EARS,
WD8000AARS, WD7500AADS, WD7500AARS, WD6400AADS, WD6400AARS, WD5000AADS,
WD5000AARS
WDIDLE3
http://support.wdc.com/product/download.asp?groupid=609&sid=113&lang=en
--
Stan
* Re: xfssyncd and disk spin down
2010-12-23 19:29 ` Stan Hoeppner
@ 2010-12-23 21:16 ` Petre Rodan
2010-12-24 0:54 ` Stan Hoeppner
0 siblings, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2010-12-23 21:16 UTC (permalink / raw)
To: xfs
Hello,
On Thu, Dec 23, 2010 at 01:29:14PM -0600, Stan Hoeppner wrote:
> Petre Rodan put forth on 12/23/2010 10:55 AM:
>
> > this hard drive has exceeded its 300k load/unload maximum from the specs in only 140 days, which means it was woken up every 30s or so. not willingly.
>
> Contact WD and request a warranty replacement for the drive due to
> exceeding the spec'd cycle max in less than 6 months. They may not
> honor your request. Whether they do or not, I'd replace the drive as
> the mechanism obviously has too much wear on it already, and within one
> year of use will have well over double the spec'd cycles. If you
> replace it with another 20EARS, replace the firmware immediately as
> mentioned below to decrease the load/unload rate. (It would be nice if
> they offered the ability to disable the sleep mode totally, but then it
> wouldn't be "green" pfft).
thanks for your input. I did run wdidle3 on that drive two days ago, which stopped the nonsense.
but my original mail had a different target really. I have to admit that I don't know much about the inner workings of a filesystem, but I find it odd that once there is no input from the outside, a process keeps writing to the drive indefinitely. in my very narrow thinking, the fact that these writes disappear after a remount would prove their redundancy.
to wrap it up, I see no logic to the above and this is why I ask the list to tell me if this is
a. something that can easily be fixed via an option I failed to find
b. a critical part of xfs's internals that cannot be 'disabled' (with a short explanation)
c. simply a bug
with the little side-story about the WD 20EARS I was just showing where this default behaviour can lead.
I don't usually read marketing material, but WD acknowledges that their green drives get wrecked under Linux and simply encourages customers to change their OS. I just have to ask how we got to this point.
the drive I was trying to get into standby in the first half of my mail is a different one, an enterprise ST31000340NS in an otherwise low-power ProLiant MicroServer. having 8W*12h*30 = 2880 Wh less to pay per hdd per month would be easily achievable if standby mode were reached whenever possible.
cheers,
peter
* Re: xfssyncd and disk spin down
2010-12-23 21:16 ` Petre Rodan
@ 2010-12-24 0:54 ` Stan Hoeppner
2010-12-24 5:15 ` Stan Hoeppner
2010-12-24 6:02 ` Petre Rodan
0 siblings, 2 replies; 29+ messages in thread
From: Stan Hoeppner @ 2010-12-24 0:54 UTC (permalink / raw)
To: xfs
Petre Rodan put forth on 12/23/2010 3:16 PM:
> but my original mail had a different target really. I have to admit that I don't know much about the inner workings of a filesystem, but I find it odd that once there is no input from the outside, a process keeps writing to the drive indefinitely. in my very narrow thinking, the fact that these writes disappear after a remount would prove their redundancy.
Ahh, quite right. Sorry Petre. This might shed some light on your
issue. Old background thread:
http://oss.sgi.com/archives/xfs/2003-09/msg00674.html
Current documentation on this tunable knob:
http://www.mjmwired.net/kernel/Documentation/filesystems/xfs.txt
fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
The interval at which the xfssyncd thread flushes metadata
out to disk. This thread will flush log activity out, and
do some processing on unlinked inodes.
Maybe related or also worth mentioning:
fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
The age at which xfsbufd flushes dirty metadata buffers to disk.
As of 2.6.36, it looks like you can stretch this out to a more
reasonable (for your application) 12 minutes if you like. I'm not sure
if the max value is different than 2.6.32.2 as I don't have a box or VM
available with 2.6.32.2. Try bumping these values to 720000 and see if
that allows your disk to sleep for 12 mins.
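Something like this should bump both knobs in one go (untested sketch; it assumes the sysctls are exposed under /proc/sys/fs/xfs as on mainline kernels, and is a no-op where they aren't):

```shell
# Stretch both XFS periodic-writeback intervals to their 720000-centisecond
# (12 minute) maximum; skip any knob this kernel doesn't expose.
for knob in xfssyncd_centisecs age_buffer_centisecs; do
    f=/proc/sys/fs/xfs/$knob
    if [ -w "$f" ]; then
        echo 720000 > "$f"
        echo "$knob is now $(cat "$f")"
    fi
done
```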
--
Stan
* Re: xfssyncd and disk spin down
2010-12-24 0:54 ` Stan Hoeppner
@ 2010-12-24 5:15 ` Stan Hoeppner
2010-12-24 6:02 ` Petre Rodan
1 sibling, 0 replies; 29+ messages in thread
From: Stan Hoeppner @ 2010-12-24 5:15 UTC (permalink / raw)
To: xfs
Stan Hoeppner put forth on 12/23/2010 6:54 PM:
> http://www.mjmwired.net/kernel/Documentation/filesystems/xfs.txt
>
> fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
> The interval at which the xfssyncd thread flushes metadata
> out to disk. This thread will flush log activity out, and
> do some processing on unlinked inodes.
>
> Maybe related or also worth mentioning:
>
> fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
> The age at which xfsbufd flushes dirty metadata buffers to disk.
These can be found in /proc/sys/fs/xfs and you'd simply
~$ echo 720000 > /proc/sys/fs/xfs/xfssyncd_centisecs
to get the maximum 12 minute interval.
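To survive a reboot, the same values can go in sysctl form (assuming your distro applies /etc/sysctl.conf at boot; run `sysctl -p` afterwards to apply immediately):

```
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
fs.xfs.xfssyncd_centisecs = 720000
fs.xfs.age_buffer_centisecs = 720000
```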
--
Stan
* Re: xfssyncd and disk spin down
2010-12-24 0:54 ` Stan Hoeppner
2010-12-24 5:15 ` Stan Hoeppner
@ 2010-12-24 6:02 ` Petre Rodan
2010-12-24 23:00 ` Stan Hoeppner
1 sibling, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2010-12-24 6:02 UTC (permalink / raw)
To: xfs
Hello Stan,
On Thu, Dec 23, 2010 at 06:54:19PM -0600, Stan Hoeppner wrote:
> Petre Rodan put forth on 12/23/2010 3:16 PM:
>
> > but my original mail had a different target really. I have to admit that I don't know much about the inner workings of a filesystem, but I find it odd that once there is no input from the outside, a process keeps writing to the drive indefinitely. in my very narrow thinking, the fact that these writes disappear after a remount would prove their redundancy.
>
> Ahh, quite right. Sorry Petre. This might shed some light on your
> issue. Old background thread:
>
> http://oss.sgi.com/archives/xfs/2003-09/msg00674.html
>
> Current documentation on this tunable knob:
>
> http://www.mjmwired.net/kernel/Documentation/filesystems/xfs.txt
I appreciate your quick replies, but rest assured that I did search the archives and the documentation that came with the kernel.
I would be delighted to know the underlying reason why internal xfs processes need to flush things to the drive more than once. can't it be as easy as
accept write command -> cache -> actual write to the hardware -> clean cache/log/dirty metadata -> sleep until next command comes (and not touch the drive in any way until then)
what am I missing in this picture? if anything needs to be flushed, why do it over and over again; is once not enough?
what I found (see the first mail in this thread) is that a write to an xfs partition triggers an xfssyncd process that continually writes to the drive for no apparent reason, and keeps writing at regular intervals long after any other read or write request from the outside. I left the drive untouched for 12 hours and xfssyncd was still pinging it. and there is no xfssyncd process pinging the drive if I do not initiate a write after the partition is mounted, so to me that means xfs could function without constantly writing to the underlying device.
> fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
> fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
just increasing the delay until an inevitable and seemingly redundant disk write is not what I want.
I was searching for an option to make internal xfs processes not touch the drive after the buffers/log/dirty metadata have been flushed (once).
thanks,
peter
* Re: xfssyncd and disk spin down
2010-12-23 16:55 xfssyncd and disk spin down Petre Rodan
2010-12-23 19:29 ` Stan Hoeppner
@ 2010-12-24 18:17 ` Eric Sandeen
2010-12-25 12:09 ` Petre Rodan
2010-12-27 2:19 ` Dave Chinner
2 siblings, 1 reply; 29+ messages in thread
From: Eric Sandeen @ 2010-12-24 18:17 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On 12/23/10 10:55 AM, Petre Rodan wrote:
>
> Hello,
>
> I have a problem with a hard drive that never managed to spin down.
> this drive is a storage space, not a system disk, the only thing that
> generated writes is the nfs server that exports its contents. it has
> only one large xfs partition on it.
>
> upon closer inspection it turns out that after the first Write action
> to that partition, an xfssyncd process continues to write to that
> partition each 36 seconds and it doesn't stop doing that, even if
> there are no more Writes from the exterior. this keeps the drive busy
> with varying consequences. more about that later.
Doesn't seem like that should happen.
> I found that the only easy way to stop the xfssyncd process poking
> the drive is to run a `mount -o remount /mnt/space`. this will
> stop any internal xfs process from accessing the drive, thus allowing
> it to spin down and only be woken up by a NFS access.
>
> here are some simple steps to replicate the problem:
>
> # echo 3 > /proc/sys/vm/drop_caches # free cached fs entities
> # ( blktrace -d /dev/sdb -o - | blkparse -i - ) &
> # mount -o remount /mnt/space
> # find /mnt/space/ -type f > /dev/null # generate some non-cached Read requests
> # # absolutely no writes have been performed to the drive,
> # # it could spin down now if enough time would pass
> # touch /mnt/space/foo
> # # process 1352 will start writing to the drive at a 35-36s interval,
> # # even if there has been no other write request.
>
> 8,16 1 36591 6306.873151576 1352 A WBS 976985862 + 2 <- (8,17) 976985799
> 8,16 1 36592 6306.873152998 1352 Q WBS 976985862 + 2 [xfssyncd/sdb1]
> [..]
> 8,16 1 36600 6342.875151286 1352 A WBS 976985864 + 2 <- (8,17) 976985801
> 8,16 1 36601 6342.875152938 1352 Q WBS 976985864 + 2 [xfssyncd/sdb1]
> [..]
> 8,16 1 36609 6378.877225211 1352 A WBS 976985866 + 2 <- (8,17) 976985803
> 8,16 1 36610 6378.877226935 1352 Q WBS 976985866 + 2 [xfssyncd/sdb1]
>
> there was no file at or near the 976985799 inode (I presume that's an
> inode?)
Nope that's a sector on the drive.
> A WBS 976985862 + 2 <- (8,17) 976985799
976985799 is the sector on sdb1, mapped to 976985862 on sdb
(63 sectors in, yay dos partition tables!)
A means remapped (from sdb1 to sdb), Q means Queued.
WBS means Write/Barrier/Synchronous.
Also, you're stepping through, it looks like -
976985799, 976985801, 976985803, ....
> I found that the only way to stop it is to remount the partition. I
> also tried sync(1), but to no avail.
>
> so is there an XFS option somewhere that would make the filesystem be
> more forgiving to the hardware beneath it? without losing the
> journal of course.
I think we just need to figure out what's causing the writes, and
what's being written.
> I'm using a vanilla 2.6.36.2 kernel patched with grsecurity, default
> mkfs.xfs options, rw,nosuid,nodev,noexec,noatime,attr2,noquota mount
> options, and xfs_info looks like this:
> meta-data=/dev/sdb1 isize=256 agcount=4, agsize=61047500 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=244190000, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal bsize=4096 blocks=32768, version=2
> = sectsz=512 sunit=0 blks, lazy-count=0
> realtime =none extsz=4096 blocks=0, rtextents=0
so what is sector 976985799 on the filesystem...
An AG is 61047500 blocks long, or 488380000 sectors long. There are 4 of them,
and the log is at the front of an AG in the middle of the fs.
So the 1st AG starts at 0, 2nd at 488380000 sectors, 3rd at 976760000 sectors.
Your writes are at 976985799-ish sectors, or 225799 sectors into the
3rd AG. The log is 262144 sectors long. So this looks like log writes.
Makes sense with the WBS data too.
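The same arithmetic in shell, using the numbers from the xfs_info output quoted above:

```shell
# Does sector 976985799 (sdb1-relative) land inside the internal log?
bsize=4096; sectsz=512
spb=$(( bsize / sectsz ))                   # 8 sectors per 4k block
ag_sectors=$(( 61047500 * spb ))            # 488380000 sectors per AG
ag3_start=$(( 2 * ag_sectors ))             # 3rd AG starts at sector 976760000
offset=$(( 976985799 - ag3_start ))         # 225799 sectors into the 3rd AG
log_sectors=$(( 32768 * spb ))              # 262144-sector internal log
echo "offset=${offset} log=${log_sectors}"  # prints offset=225799 log=262144
[ "$offset" -lt "$log_sectors" ] && echo "inside the log"
```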
xfssyncd forces the log, reclaims inodes, and logs a dummy transaction if needed.
On an idle fs though I wouldn't expect that we need any of this, so probably
need to dig a little to see what's going on. I don't think you need a mount
option, I think we need an explanation and maybe a bugfix. :)
I'll try to find time to look into it unless someone else knows what's going
on off the top of their heads.
-Eric
* Re: xfssyncd and disk spin down
2010-12-24 6:02 ` Petre Rodan
@ 2010-12-24 23:00 ` Stan Hoeppner
2010-12-25 3:36 ` Eric Sandeen
0 siblings, 1 reply; 29+ messages in thread
From: Stan Hoeppner @ 2010-12-24 23:00 UTC (permalink / raw)
To: xfs
Petre Rodan put forth on 12/24/2010 12:02 AM:
>> fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
>> fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
>
> just increasing the delay until an inevitable and seemingly redundant disk write is not what I want.
> I was searching for an option to make internal xfs processes not touch the drive after the buffers/log/dirty metadata have been flushed (once).
I'm not a dev Petre but just another XFS user. This is the best
"solution" I could come up with for your issue. I assumed this
"unnecessary" regularly scheduled activity was a house cleaning measure
and done intentionally; didn't dawn on me that it may be a bug.
Sorry I wasn't able to fully address your issue. If/until there is a
permanent fix for this you may want to bump this to 720000 anyway as an
interim measure, if you haven't already, as it should yield a
significantly better situation than what you have now. You'll at least
get something like ~1400 minutes of sleep per day instead of none,
decreasing your load/unload cycles from ~2880/day to ~120/day, if my
math is correct.
--
Stan
* Re: xfssyncd and disk spin down
2010-12-24 23:00 ` Stan Hoeppner
@ 2010-12-25 3:36 ` Eric Sandeen
2010-12-25 4:32 ` Stan Hoeppner
0 siblings, 1 reply; 29+ messages in thread
From: Eric Sandeen @ 2010-12-25 3:36 UTC (permalink / raw)
To: Stan Hoeppner; +Cc: xfs
On 12/24/10 5:00 PM, Stan Hoeppner wrote:
> Petre Rodan put forth on 12/24/2010 12:02 AM:
>
>>> fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
>>> fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
>>
>> just increasing the delay until an inevitable and seemingly redundant disk write is not what I want.
>> I was searching for an option to make internal xfs processes not touch the drive after the buffers/log/dirty metadata have been flushed (once).
>
> I'm not a dev Petre but just another XFS user. This is the best
> "solution" I could come up with for your issue. I assumed this
> "unnecessary" regularly scheduled activity was a house cleaning measure
> and done intentionally; didn't dawn on me that it may be a bug.
>
> Sorry I wasn't able to fully address your issue. If/until there is a
> permanent fix for this you may want to bump this to 720000 anyway as an
> interim measure, if you haven't already, as it should yield a
> significantly better situation than what you have now. You'll at least
> get something like ~1400 minutes of sleep per day instead of none,
> decreasing your load/unload cycles from ~2880/day to ~120/day, if my
> math is correct.
Of course, then xfssyncd will not be doing its proper duty regularly ;)
We just need to see why it's always finding work to do when idle.
-Eric
* Re: xfssyncd and disk spin down
2010-12-25 3:36 ` Eric Sandeen
@ 2010-12-25 4:32 ` Stan Hoeppner
0 siblings, 0 replies; 29+ messages in thread
From: Stan Hoeppner @ 2010-12-25 4:32 UTC (permalink / raw)
To: xfs
Eric Sandeen put forth on 12/24/2010 9:36 PM:
> On 12/24/10 5:00 PM, Stan Hoeppner wrote:
>> Petre Rodan put forth on 12/24/2010 12:02 AM:
>>
>>>> fs.xfs.xfssyncd_centisecs (Min: 100 Default: 3000 Max: 720000)
>>>> fs.xfs.age_buffer_centisecs (Min: 100 Default: 1500 Max: 720000)
>>>
>>> just increasing the delay until an inevitable and seemingly redundant disk write is not what I want.
>>> I was searching for an option to make internal xfs processes not touch the drive after the buffers/log/dirty metadata have been flushed (once).
>>
>> I'm not a dev Petre but just another XFS user. This is the best
>> "solution" I could come up with for your issue. I assumed this
>> "unnecessary" regularly scheduled activity was a house cleaning measure
>> and done intentionally; didn't dawn on me that it may be a bug.
>>
>> Sorry I wasn't able to fully address your issue. If/until there is a
>> permanent fix for this you may want to bump this to 720000 anyway as an
>> interim measure, if you haven't already, as it should yield a
>> significantly better situation than what you have now. You'll at least
>> get something like ~1400 minutes of sleep per day instead of none,
>> decreasing your load/unload cycles from ~2880/day to ~120/day, if my
>> math is correct.
>
> Of course, then xfssyncd will not be doing its proper duty regularly ;)
Sure it will. Just not *as regularly*. You, Dave, Alex, Christoph, or
someone made the regularity configurable didn't you? ;)
> We just need to see why it's always finding work to do when idle.
Seems like my grandmother must have been involved with the current code.
She was always telling me "Idle hands are the Devil's work." ;) Or,
whoever wrote the code was somehow influenced by my grandmother (his own).
--
Stan
* Re: xfssyncd and disk spin down
2010-12-24 18:17 ` Eric Sandeen
@ 2010-12-25 12:09 ` Petre Rodan
0 siblings, 0 replies; 29+ messages in thread
From: Petre Rodan @ 2010-12-25 12:09 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
[-- Attachment #1.1.1: Type: text/plain, Size: 3186 bytes --]
Hi Eric,
On Fri, Dec 24, 2010 at 12:17:54PM -0600, Eric Sandeen wrote:
> Doesn't seem like that should happen.
music to my ears :)
having this fixed would make me a happy bunny.
> > 8,16 1 36591 6306.873151576 1352 A WBS 976985862 + 2 <- (8,17) 976985799
> >
> > there was no file at or near the 976985799 inode (I presume that's an
> > inode?)
>
> Nope that's a sector on the drive.
great, this should make debugging easier.
> I think we just need to figure out what's causing the writes, and
> what's being written.
not sure about the first thing, but do read on for the "what's written" chapter:
# mount -o remount /dev/sdb1 # to start fresh with no xfssyncd's poking around
# cp -r /usr/src/linux* /mnt/space # do some writes to the drive, then let it completely unused for at least 30 minutes after the cp finished
# at this point the only thing active on sdb1 is xfssyncd at 36s intervals
# bin/blk_debug.sh > /dev/shm/stdout
blk_debug.sh parses the output of blktrace/blkparse; for each line containing 'A WBS' it uses dd to save 8 sectors from sdb, starting at the mapped sector value taken from blkparse:
blktrace -d /dev/sdb -o - | blkparse -i - | while read -r line; do
    echo "${line}"
    grep -q 'A WBS' <<< "${line}" && {
        sect=$(echo "${line}" | awk '{ print $8 }')
        # fork a dd and give xfssyncd time to finish writing
        (
            sleep 20
            # force dd to read the drive, not a cache
            echo 3 > /proc/sys/vm/drop_caches
            dd if=/dev/sdb of=/dev/shm/dump_${sect} skip=${sect} bs=512 count=8 && \
                echo "* sectors ${sect}-$((${sect}+8)) saved into dump_${sect}"
        ) &
    }
done
I thought that since xfssyncd writes to sectors in increments of 2 it would be possible to run a diff between 2 consecutive samples and get some information that may help you see something. I provide an attachment with the entire stdout, sector dumps and scripts. also available here: http://incoming.avira.com/NOSYNC/peter/xfssyncd_debug.tar.bz2
there might be a slight misalignment of a few sectors between dd's seek and where it was told to go, so look closely at the blkparse output. apparently xfssyncd is not 'appending' information; it looks more like a similar-looking block of data is written each time.
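for comparing two consecutive dumps, something along these lines should do (the file names are whatever the capture script wrote to /dev/shm; the sector numbers here are just examples):

```shell
# byte-level diff of two sector dumps; each output line from cmp -l is
# <offset> <old byte> <new byte> with the bytes printed in octal
diff_dumps() {
    cmp -l "$1" "$2" | head -20
}
diff_dumps /dev/shm/dump_976985862 /dev/shm/dump_976985864
```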
hope this helps, if there is anything amiss I could fix it and try again anytime.
> On an idle fs though I wouldn't expect that we need any of this, so probably
> need to dig a little to see what's going on. I don't think you need a mount
> option, I think we need an explanation and maybe a bugfix. :)
>
> I'll try to find time to look into it unless someone else knows what's going
> on off the top of their heads.
thank you for your interest.
I'll be glad to test kernel patches against 2.6.36.2 that you might come up with.
@Stan, thanks for your input, it's nice to see friendly communities gathered around a mailing list again.
Happy Holidays everyone.
cheers,
peter
--
petre rodan
<petre.rodan@simplex.ro>
Technical Manager
Simplex SRL, Bucharest
[-- Attachment #1.1.2: xfssyncd_debug.tar.bz2 --]
[-- Type: application/x-bzip2, Size: 26065 bytes --]
* Re: xfssyncd and disk spin down
2010-12-23 16:55 xfssyncd and disk spin down Petre Rodan
2010-12-23 19:29 ` Stan Hoeppner
2010-12-24 18:17 ` Eric Sandeen
@ 2010-12-27 2:19 ` Dave Chinner
2010-12-27 6:16 ` Petre Rodan
2 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-12-27 2:19 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Thu, Dec 23, 2010 at 06:55:32PM +0200, Petre Rodan wrote:
>
> Hello,
>
> I have a problem with a hard drive that never managed to spin
> down. this drive is a storage space, not a system disk, the only
> thing that generated writes is the nfs server that exports its
> contents. it has only one large xfs partition on it.
>
> upon closer inspection it turns out that after the first Write
> action to that partition, an xfssyncd process continues to write
> to that partition each 36 seconds and it doesn't stop doing that,
> even if there are no more Writes from the exterior. this keeps the
> drive busy with varying consequences. more about that later.
Should be fixed in 2.6.37 by the commit:
1a387d3 xfs: dummy transactions should not dirty VFS state
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2010-12-27 2:19 ` Dave Chinner
@ 2010-12-27 6:16 ` Petre Rodan
2010-12-27 14:07 ` Dave Chinner
0 siblings, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2010-12-27 6:16 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1.1: Type: text/plain, Size: 1109 bytes --]
Hello Dave,
On Mon, Dec 27, 2010 at 01:19:04PM +1100, Dave Chinner wrote:
> On Thu, Dec 23, 2010 at 06:55:32PM +0200, Petre Rodan wrote:
> >
> > Hello,
[..]
> > upon closer inspection it turns out that after the first Write
> > action to that partition, an xfssyncd process continues to write
> > to that partition each 36 seconds and it doesn't stop doing that,
> > even if there are no more Writes from the exterior. this keeps the
> > drive busy with varying consequences. more about that later.
>
> Should be fixed in 2.6.37 by the commit:
>
> 1a387d3 xfs: dummy transactions should not dirty VFS state
nice that someone else was also keen on having this bug fixed, but this particular fix is already present in 2.6.36 [1], [2]
unfortunately something is still written to the log every 36s, as per the first mail in this thread.
I guess we're getting closer.
[1] http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.36
[2] http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.36.y.git;a=commit;h=871eae4891a844e1fd065467b940f98dbf7aad1c
cheers,
peter
[-- Attachment #1.2: Type: application/pgp-signature, Size: 198 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: xfssyncd and disk spin down
2010-12-27 6:16 ` Petre Rodan
@ 2010-12-27 14:07 ` Dave Chinner
2010-12-27 17:19 ` Petre Rodan
0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-12-27 14:07 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Mon, Dec 27, 2010 at 08:16:29AM +0200, Petre Rodan wrote:
>
> Hello Dave,
>
> On Mon, Dec 27, 2010 at 01:19:04PM +1100, Dave Chinner wrote:
> > On Thu, Dec 23, 2010 at 06:55:32PM +0200, Petre Rodan wrote:
> > >
> > > Hello,
> [..]
> > > upon closer inspection it turns out that after the first Write
> > > action to that partition, an xfssyncd process continues to write
> > > to that partition each 36 seconds and it doesn't stop doing that,
> > > even if there are no more Writes from the exterior. this keeps the
> > > drive busy with varying consequences. more about that later.
> >
> > Should be fixed in 2.6.37 by the commit:
> >
> > 1a387d3 xfs: dummy transactions should not dirty VFS state
>
> nice that someone else was also keen on having this bug fixed, but this particular fix is already present in 2.6.36 [1], [2]
>
> unfortunately something is still written to the log every 36s, as per the first mail in this thread.
Turn on the XFS tracing so we can see what is being written every
36s. When the problem shows up:
# echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
# sleep 100
# cat /sys/kernel/debug/tracing/trace > trace.out
# echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
And post the trace.out file for us to look at.
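The four steps above can be wrapped into one small helper. A sketch: `capture_xfs_trace` and the `TRACING_DIR` override are our additions, and the default path assumes debugfs is mounted at /sys/kernel/debug:

```shell
#!/bin/sh
# Sketch: capture XFS tracepoint output for a fixed window, following
# the steps above. TRACING_DIR may be overridden (e.g. for testing).
capture_xfs_trace() {
    tracing="${TRACING_DIR:-/sys/kernel/debug/tracing}"
    secs="${1:-100}"
    out="${2:-trace.out}"
    if [ ! -w "$tracing/events/xfs/enable" ]; then
        echo "xfs tracepoints not available under $tracing" >&2
        return 1
    fi
    echo 1 > "$tracing/events/xfs/enable"   # enable all xfs events
    sleep "$secs"                           # let the periodic writes happen
    cat "$tracing/trace" > "$out"           # snapshot the trace buffer
    echo 0 > "$tracing/events/xfs/enable"   # disable events again
}
```

Run as root while the problem is showing up, e.g. `capture_xfs_trace 100 trace.out`.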
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2010-12-27 14:07 ` Dave Chinner
@ 2010-12-27 17:19 ` Petre Rodan
2010-12-31 0:13 ` Dave Chinner
0 siblings, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2010-12-27 17:19 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1.1.1: Type: text/plain, Size: 588 bytes --]
Hello Dave,
On Tue, Dec 28, 2010 at 01:07:50AM +1100, Dave Chinner wrote:
> Turn on the XFS tracing so we can see what is being written every
> 36s. When the problem shows up:
>
> # echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
> # sleep 100
> # cat /sys/kernel/debug/tracing/trace > trace.out
> # echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
>
> And post the trace.out file for us to look at.
attached.
you can disregard all the lvm partitions ('dev 254:.*') since they are on a different drive; probably only 8:17 is of interest.
bye,
peter
[-- Attachment #1.1.2: trace.out.bz2 --]
[-- Type: application/x-bzip2, Size: 58764 bytes --]
[-- Attachment #1.2: Type: application/pgp-signature, Size: 198 bytes --]
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
* Re: xfssyncd and disk spin down
2010-12-27 17:19 ` Petre Rodan
@ 2010-12-31 0:13 ` Dave Chinner
2011-01-20 10:01 ` Petre Rodan
0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2010-12-31 0:13 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Mon, Dec 27, 2010 at 07:19:39PM +0200, Petre Rodan wrote:
>
> Hello Dave,
>
> On Tue, Dec 28, 2010 at 01:07:50AM +1100, Dave Chinner wrote:
> > Turn on the XFS tracing so we can see what is being written every
> > 36s. When the problem shows up:
> >
> > # echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
> > # sleep 100
> > # cat /sys/kernel/debug/tracing/trace > trace.out
> > # echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
> >
> > And post the trace.out file for us to look at.
>
> attached.
>
> you can disregard all the lvm partitions ('dev 254:.*') since they are on a different drive, probably only 8:17 is of interest.
Ok, I can see the problem. The original patch I tested:
http://oss.sgi.com/archives/xfs/2010-08/msg00026.html
Made the log covering dummy transaction a synchronous transaction so
that the log was written and the superblock unpinned immediately to
allow the xfsbufd to write back the superblock and empty the AIL
before the next log covering check.
On review, the log covering dummy transaction got changed to an
async transaction, so the superblock buffer is not unpinned
immediately. This was the patch committed:
http://oss.sgi.com/archives/xfs/2010-08/msg00197.html
As a result, the success of log covering and idling is then
dependent on whether the log gets written to disk to unpin the
superblock buffer before the next xfssyncd run. It seems that there
is a large chance that this log write does not happen, so the
filesystem never idles correctly. I've reproduced it here, and only
in one test out of ten did the filesystem enter an idle state
correctly. I guess I was unlucky enough to hit that 1-in-10 case
when I tested the modified patch.
I'll cook up a patch to make the log covering behave like the
original patch I sent...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2010-12-31 0:13 ` Dave Chinner
@ 2011-01-20 10:01 ` Petre Rodan
2011-01-20 11:06 ` Dave Chinner
2011-01-20 23:43 ` Dave Chinner
0 siblings, 2 replies; 29+ messages in thread
From: Petre Rodan @ 2011-01-20 10:01 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1.1.1: Type: text/plain, Size: 5808 bytes --]
hello Dave,
On Fri, Dec 31, 2010 at 11:13:23AM +1100, Dave Chinner wrote:
> On Mon, Dec 27, 2010 at 07:19:39PM +0200, Petre Rodan wrote:
> >
> > Hello Dave,
> >
> > On Tue, Dec 28, 2010 at 01:07:50AM +1100, Dave Chinner wrote:
> > > Turn on the XFS tracing so we can see what is being written every
> > > 36s. When the problem shows up:
> > >
> > > # echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
> > > # sleep 100
> > > # cat /sys/kernel/debug/tracing/trace > trace.out
> > > # echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
> > >
> > > And post the trace.out file for us to look at.
> >
> > attached.
> >
> > you can disregard all the lvm partitions ('dev 254:.*') since they are on a different drive, probably only 8:17 is of interest.
>
> Ok, I can see the problem. The original patch I tested:
>
> http://oss.sgi.com/archives/xfs/2010-08/msg00026.html
>
> Made the log covering dummy transaction a synchronous transaction so
> that the log was written and the superblock unpinned immediately to
> allow the xfsbufd to write back the superblock and empty the AIL
> before the next log covering check.
>
> On review, the log covering dummy transaction got changed to an
> async transaction, so the superblock buffer is not unpinned
> immediately. This was the patch committed:
>
> http://oss.sgi.com/archives/xfs/2010-08/msg00197.html
>
> As a result, the success of log covering and idling is then
> dependent on whether the log gets written to disk to unpin the
> superblock buffer before the next xfssyncd run. It seems that there
> is a large chance that this log write does not happen, so the
> filesystem never idles correctly. I've reproduced it here, and only
> in one test out of ten did the filesystem enter an idle state
> correctly. I guess I was unlucky enough to hit that 1-in-10 case
> when I tested the modified patch.
>
> I'll cook up a patch to make the log covering behave like the
> original patch I sent...
I presume that the new fix should be provided by "xfs: ensure log covering transactions are synchronous", so I tested 2.6.37 patched with it and then 2.6.38-rc1, which has it included.
instead of having xfssyncd write to the drive every 36s, we now have this:
8,32 0 162 828.393976164 1413 A WS 977599761 + 2 <- (8,33) 977597713
8,32 0 163 828.393977820 1413 Q WS 977599761 + 2 [xfssyncd/sdc1]
8,32 0 167 846.401286530 1407 A W 2048 + 1 <- (8,33) 0
8,32 0 168 846.401288007 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 174 864.412155132 1413 A WS 977599763 + 2 <- (8,33) 977597715
8,32 0 175 864.412156637 1413 Q WS 977599763 + 2 [xfssyncd/sdc1]
8,32 1 240 882.467053566 1407 A W 2048 + 1 <- (8,33) 0
8,32 1 241 882.467055152 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 179 900.426084634 1413 A WS 977599765 + 2 <- (8,33) 977597717
8,32 0 180 900.426086076 1413 Q WS 977599765 + 2 [xfssyncd/sdc1]
8,32 0 185 918.457066312 1407 A W 2048 + 1 <- (8,33) 0
8,32 0 186 918.457068003 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 1 252 936.448120095 1413 A WS 977599767 + 2 <- (8,33) 977597719
8,32 1 253 936.448121740 1413 Q WS 977599767 + 2 [xfssyncd/sdc1]
8,32 0 192 954.447207652 1407 A W 2048 + 1 <- (8,33) 0
8,32 0 193 954.447209114 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 199 972.458423873 1413 A WS 977599769 + 2 <- (8,33) 977597721
8,32 0 200 972.458425189 1413 Q WS 977599769 + 2 [xfssyncd/sdc1]
8,32 1 266 990.457562449 1407 A W 2048 + 1 <- (8,33) 0
8,32 1 267 990.457563909 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 204 1008.476781955 1413 A WS 977599771 + 2 <- (8,33) 977597723
8,32 0 205 1008.476783410 1413 Q WS 977599771 + 2 [xfssyncd/sdc1]
8,32 1 278 1026.531988963 1407 A W 2048 + 1 <- (8,33) 0
8,32 1 279 1026.531990718 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 209 1044.491342417 1413 A WS 977599773 + 2 <- (8,33) 977597725
8,32 0 210 1044.491344042 1413 Q WS 977599773 + 2 [xfssyncd/sdc1]
8,32 0 214 1062.502603664 1407 A W 2048 + 1 <- (8,33) 0
8,32 0 215 1062.502605117 1407 Q W 2048 + 1 [xfsbufd/sdc1]
8,32 0 216 1062.502611005 1407 G W 2048 + 1 [xfsbufd/sdc1]
in other words, xfssyncd and xfsbufd now alternate at 18s intervals, keeping the drive busy with nothing constructive for hours after the last external write to the drive.
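The alternating pattern can be verified mechanically with a small filter over blkparse output. A sketch (`interval_report` is our name, not a standard tool): it prints the gap in seconds between consecutive queued (Q) requests:

```shell
#!/bin/sh
# Sketch: print the time in seconds between consecutive queued (Q)
# requests in blkparse output read from stdin. In the default output
# format, field 4 is the timestamp and field 6 the event type.
interval_report() {
    awk '$6 == "Q" { if (prev != "") printf "%.0f\n", $4 - prev; prev = $4 }'
}
```

Fed the trace above, this prints a steady run of 18-second gaps.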
to add to the misfortune, since 2.6.37 'mount -o remount' is no longer able to bring the drive to a quiet state, so now the only way to achieve an idle drive is to fully unmount and then remount the partition.
just for the record, this is a different drive than the one at the beginning of the thread, and it has these parameters:
meta-data=/dev/sdc1              isize=256    agcount=4, agsize=61047552 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=244190208, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=119233, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
attached you'll find the trace (with accesses to other drives filtered out).
cheers,
peter
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
--
petre rodan
<petre.rodan@simplex.ro>
Technical Manager
Simplex SRL, Bucharest
[-- Attachment #1.1.2: trace --]
[-- Type: text/plain, Size: 32402 bytes --]
# tracer: nop
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
xfsbufd/sdc1-1407 [000] 3328.516937: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3328.516942: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3329.715360: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3329.715364: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3330.913851: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3330.913856: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3332.112309: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3332.112314: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3333.310767: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3333.310771: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3334.509225: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3334.509229: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3335.707682: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3335.707686: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3336.906140: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3336.906145: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3338.104598: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3338.104602: xfs_buf_delwri_split: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfsbufd
xfsbufd/sdc1-1407 [000] 3338.104605: xfs_buf_iorequest: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_bdstrat_cb
xfsbufd/sdc1-1407 [000] 3338.104606: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
xfsbufd/sdc1-1407 [000] 3338.104628: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
<idle>-0 [001] 3338.105096: xfs_buf_ioerror: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_bio_end_io
<idle>-0 [001] 3338.105100: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller _xfs_buf_ioend
kworker/1:0-2942 [001] 3338.105137: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone
kworker/1:0-2942 [001] 3338.105139: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_callbacks
kworker/1:0-2942 [001] 3338.105141: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
kworker/1:0-2942 [001] 3338.105141: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
xfssyncd/sdc1-1413 [000] 3356.093456: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093462: xfs_log_grant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093463: xfs_log_grant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428525748 grant_write_cycle 2 grant_write_bytes 428525748 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093465: xfs_buf_lock: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3356.093467: xfs_buf_lock_done: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3356.093468: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3356.093470: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_item_init
xfssyncd/sdc1-1413 [000] 3356.093472: xfs_trans_getsb: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|PAGES recur 0 refcount 1 bliflags lidesc 0xffff8800dd23f8f0 liflags
xfssyncd/sdc1-1413 [000] 3356.093474: xfs_trans_log_buf: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags lidesc 0xffff8800dd23f8f0 liflags
xfssyncd/sdc1-1413 [000] 3356.093477: xfs_buf_item_size: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dd23f8f0 liflags
xfssyncd/sdc1-1413 [000] 3356.093479: xfs_buf_item_format: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dd23f8f0 liflags
xfssyncd/sdc1-1413 [000] 3356.093480: xfs_buf_item_pin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dd23f8f0 liflags
xfssyncd/sdc1-1413 [000] 3356.093487: xfs_log_done_nonperm: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428525748 grant_write_cycle 2 grant_write_bytes 428525748 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093488: xfs_log_ungrant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428525748 grant_write_cycle 2 grant_write_bytes 428525748 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093489: xfs_log_ungrant_sub: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428525748 grant_write_cycle 2 grant_write_bytes 428525748 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093489: xfs_log_ungrant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523748 grant_write_cycle 2 grant_write_bytes 428523748 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3356.093492: xfs_trans_commit_lsn: dev 8:33 trans 0xffff8800dd2b7ea0 commit_lsn 0x2000cc55f
xfssyncd/sdc1-1413 [000] 3356.093495: xfs_buf_item_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 2 bliflags |DIRTY lidesc 0x (null) liflags
xfssyncd/sdc1-1413 [000] 3356.093496: xfs_buf_delwri_queue: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES caller xfs_buf_unlock
xfssyncd/sdc1-1413 [000] 3356.093501: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1413 [000] 3356.093502: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1413 [000] 3356.093506: xfs_buf_iorequest: dev 8:33 bno 0x3a44f57f len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xlog_bdstrat
xfssyncd/sdc1-1413 [000] 3356.093507: xfs_buf_hold: dev 8:33 bno 0x3a44f57f len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
xfssyncd/sdc1-1413 [000] 3356.093541: xfs_buf_rele: dev 8:33 bno 0x3a44f57f len 0x8000 hold 2 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
<idle>-0 [001] 3356.113311: xfs_buf_ioerror: dev 8:33 bno 0x3a44f57f len 0x8000 hold 1 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_bio_end_io
<idle>-0 [001] 3356.113316: xfs_buf_iodone: dev 8:33 bno 0x3a44f57f len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller _xfs_buf_ioend
kworker/1:0-2942 [001] 3356.113359: xfs_buf_item_committed: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags
kworker/1:0-2942 [001] 3356.113362: xfs_buf_item_unpin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags IN_AIL
xfsbufd/sdc1-1407 [001] 3357.291895: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3357.291899: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3358.490410: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3358.490415: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3359.688802: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3359.688806: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3360.887269: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3360.887273: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3362.085688: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3362.085692: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3363.284175: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3363.284180: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3364.482639: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3364.482644: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3365.681096: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3365.681100: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3366.879566: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3366.879571: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3368.078025: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3368.078030: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3369.276482: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3369.276486: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3370.474939: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3370.474944: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3371.673391: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3371.673395: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3372.871848: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3372.871853: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3374.070306: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3374.070310: xfs_buf_delwri_split: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfsbufd
xfsbufd/sdc1-1407 [000] 3374.070312: xfs_buf_iorequest: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_bdstrat_cb
xfsbufd/sdc1-1407 [000] 3374.070314: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
xfsbufd/sdc1-1407 [000] 3374.070331: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
<idle>-0 [001] 3374.070797: xfs_buf_ioerror: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_bio_end_io
<idle>-0 [001] 3374.070801: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller _xfs_buf_ioend
kworker/1:0-2942 [001] 3374.070839: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone
kworker/1:0-2942 [001] 3374.070841: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_callbacks
kworker/1:0-2942 [001] 3374.070843: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
kworker/1:0-2942 [001] 3374.070843: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
xfssyncd/sdc1-1413 [000] 3392.067122: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067127: xfs_log_grant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067129: xfs_log_grant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428526772 grant_write_cycle 2 grant_write_bytes 428526772 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067131: xfs_buf_lock: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3392.067133: xfs_buf_lock_done: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3392.067134: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1413 [000] 3392.067138: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_item_init
xfssyncd/sdc1-1413 [000] 3392.067140: xfs_trans_getsb: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|PAGES recur 0 refcount 1 bliflags lidesc 0xffff8800dca401f0 liflags
xfssyncd/sdc1-1413 [000] 3392.067142: xfs_trans_log_buf: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags lidesc 0xffff8800dca401f0 liflags
xfssyncd/sdc1-1413 [000] 3392.067145: xfs_buf_item_size: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dca401f0 liflags
xfssyncd/sdc1-1413 [000] 3392.067147: xfs_buf_item_format: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dca401f0 liflags
xfssyncd/sdc1-1413 [000] 3392.067148: xfs_buf_item_pin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff8800dca401f0 liflags
xfssyncd/sdc1-1413 [000] 3392.067154: xfs_log_done_nonperm: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428526772 grant_write_cycle 2 grant_write_bytes 428526772 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067155: xfs_log_ungrant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428526772 grant_write_cycle 2 grant_write_bytes 428526772 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067156: xfs_log_ungrant_sub: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428526772 grant_write_cycle 2 grant_write_bytes 428526772 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067156: xfs_log_ungrant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524772 grant_write_cycle 2 grant_write_bytes 428524772 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
xfssyncd/sdc1-1413 [000] 3392.067159: xfs_trans_commit_lsn: dev 8:33 trans 0xffff8800dca3a4c8 commit_lsn 0x2000cc561
xfssyncd/sdc1-1413 [000] 3392.067161: xfs_buf_item_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 2 bliflags |DIRTY lidesc 0x (null) liflags
xfssyncd/sdc1-1413 [000] 3392.067163: xfs_buf_delwri_queue: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES caller xfs_buf_unlock
xfssyncd/sdc1-1413 [000] 3392.067168: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1413 [000] 3392.067169: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1413 [000] 3392.067173: xfs_buf_iorequest: dev 8:33 bno 0x3a44f581 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xlog_bdstrat
xfssyncd/sdc1-1413 [000] 3392.067174: xfs_buf_hold: dev 8:33 bno 0x3a44f581 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
xfssyncd/sdc1-1413 [000] 3392.067211: xfs_buf_rele: dev 8:33 bno 0x3a44f581 len 0x8000 hold 2 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
<idle>-0 [001] 3392.084946: xfs_buf_ioerror: dev 8:33 bno 0x3a44f581 len 0x8000 hold 1 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_bio_end_io
<idle>-0 [001] 3392.084951: xfs_buf_iodone: dev 8:33 bno 0x3a44f581 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller _xfs_buf_ioend
kworker/1:0-2942 [001] 3392.084986: xfs_buf_item_committed: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags
kworker/1:0-2942 [001] 3392.084989: xfs_buf_item_unpin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags IN_AIL
xfsbufd/sdc1-1407 [001] 3393.269554: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3393.269558: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3394.472001: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3394.472006: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3395.674456: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3395.674460: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3396.872917: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 3396.872921: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3398.071397: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3398.071401: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3399.269841: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3399.269845: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3400.468299: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3400.468304: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3401.666718: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3401.666722: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3402.865235: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3402.865239: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3404.063706: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3404.063710: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3405.262163: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3405.262167: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3406.460622: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3406.460626: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3407.659078: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3407.659083: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3408.857536: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3408.857540: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3410.055988: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 3410.055992: xfs_buf_delwri_split: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfsbufd
xfsbufd/sdc1-1407 [000] 3410.055995: xfs_buf_iorequest: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_bdstrat_cb
xfsbufd/sdc1-1407 [000] 3410.055996: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
xfsbufd/sdc1-1407 [000] 3410.056014: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
<idle>-0 [001] 3410.056482: xfs_buf_ioerror: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_bio_end_io
<idle>-0 [001] 3410.056486: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller _xfs_buf_ioend
kworker/1:0-2942 [001] 3410.056524: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone
kworker/1:0-2942 [001] 3410.056526: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_callbacks
kworker/1:0-2942 [001] 3410.056527: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
kworker/1:0-2942 [001] 3410.056528: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
[-- Attachment #2: Type: text/plain, Size: 121 bytes --]
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: xfssyncd and disk spin down
2011-01-20 10:01 ` Petre Rodan
@ 2011-01-20 11:06 ` Dave Chinner
2011-01-20 12:07 ` Petre Rodan
2011-01-20 23:43 ` Dave Chinner
1 sibling, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2011-01-20 11:06 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Thu, Jan 20, 2011 at 12:01:43PM +0200, Petre Rodan wrote:
> On Fri, Dec 31, 2010 at 11:13:23AM +1100, Dave Chinner wrote:
> > On Mon, Dec 27, 2010 at 07:19:39PM +0200, Petre Rodan wrote:
> > >
> > > Hello Dave,
> > >
> > > On Tue, Dec 28, 2010 at 01:07:50AM +1100, Dave Chinner wrote:
> > > > Turn on the XFS tracing so we can see what is being written every
> > > > 36s. When the problem shows up:
> > > >
> > > > # echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
> > > > # sleep 100
> > > > # cat /sys/kernel/debug/tracing/trace > trace.out
> > > > # echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
> > > >
> > > > And post the trace.out file for us to look at.
> > >
> > > attached.
> > >
> > > you can disregard all the lvm partitions ('dev 254:.*') since they are on a different drive, probably only 8:17 is of interest.
> >
> > Ok, I can see the problem. The original patch I tested:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00026.html
> >
> > Made the log covering dummy transaction a synchronous transaction so
> > that the log was written and the superblock unpinned immediately to
> > allow the xfsbufd to write back the superblock and empty the AIL
> > before the next log covering check.
> >
> > On review, the log covering dummy transaction got changed to an
> > async transaction, so the superblock buffer is not unpinned
> > immediately. This was the patch committed:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00197.html
> >
> > As a result, the success of log covering and idling is then
> > dependent on whether the log gets written to disk to unpin the
> > superblock buffer before the next xfssyncd run. It seems that there
> > is a large chance that this log write does not happen, so the
> > filesystem never idles correctly. I've reproduced it here, and only
> > in one test out of ten did the filesystem enter an idle state
> > correctly. I guess I was unlucky enough to hit that 1-in-10 case
> > when I tested the modified patch.
> >
> > I'll cook up a patch to make the log covering behave like the
> > original patch I sent...
>
> I presume that the new fix is the patch "xfs: ensure log
> covering transactions are synchronous", so I tested 2.6.37 patched
> with it and then 2.6.38-rc1, which has it included.
>
> instead of having xfssyncd write to the drive every 36s, we now have this:
....
> in other words xfssyncd and xfsbufd now alternate at 18s intervals
> keeping the drive busy with nothing constructive hours after the
> last write to the drive.
>
> to add to the misfortune, 'mount -o remount' is no longer able to
> bring the drive to a quiet state since 2.6.37, so now the only way
> to achieve an idle drive is to fully umount and then remount the
> partition.
>
> just for the record, this is a different drive than the one at the
> beginning of the thread, and it has these parameters:
>
> meta-data=/dev/sdc1 isize=256 agcount=4, agsize=61047552 blks
> = sectsz=512 attr=2
> data = bsize=4096 blocks=244190208, imaxpct=25
> = sunit=0 swidth=0 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal bsize=4096 blocks=119233, version=2
> = sectsz=512 sunit=0 blks, lazy-count=0
^^^^^^^^^^^^
> realtime =none extsz=4096 blocks=0, rtextents=0
>
> attached you'll find the trace (with accesses to other drives filtered out).
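The 18s alternation described in the quote above can be read straight out of the ftrace timestamps (the third field of each trace line). A minimal sketch; the sample lines are abbreviated copies of entries from the attached trace, and only their timestamps matter:

```shell
# Print the gap (in seconds, rounded) between successive log-covering
# events: xfssyncd's xfs_log_reserve and xfsbufd's xfs_buf_delwri_split.
cat > /tmp/trace.sample <<'EOF'
xfssyncd/sdc1-1409 [000] 2198.419444: xfs_log_reserve: dev 8:33 type DUMMY1
xfsbufd/sdc1-1407 [001] 2216.440201: xfs_buf_delwri_split: dev 8:33 bno 0x0
xfssyncd/sdc1-1409 [001] 2234.393111: xfs_log_reserve: dev 8:33 type DUMMY1
EOF
awk '/xfs_log_reserve:|xfs_buf_delwri_split:/ {
        t = $3 + 0                        # "2198.419444:" -> 2198.419444
        if (prev != "") printf "%.0f\n", t - prev
        prev = t
}' /tmp/trace.sample
```

Against the three sample lines this prints 18 twice; running the same filter over the full attachment shows the cadence throughout.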
It's something to do with lazy-count=0. I'll look into it when I get
the chance - I almost never test with lazy-count=0 because =1 is
the default value.
I'd recommend that you convert the fs to lazy-count=1 when you get a
chance, anyway, because of the fact it reduces the latency of
transactions significantly...
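For reference, the lazy-count setting can be inspected and flipped with the standard xfsprogs tools; a sketch, assuming the filesystem can be taken offline briefly (xfs_admin's -c option toggles lazy-counters on an unmounted filesystem, and the device/mount-point names are placeholders):

```shell
# Check the current setting (xfs_info also works on a mounted fs):
xfs_info /mnt/space | grep lazy-count

# Convert offline -- the filesystem must be cleanly unmounted first:
umount /mnt/space
xfs_admin -c 1 /dev/sdc1      # 1 = enable lazy-counters, 0 = disable
mount /mnt/space
```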
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2011-01-20 11:06 ` Dave Chinner
@ 2011-01-20 12:07 ` Petre Rodan
2011-01-20 13:24 ` Christoph Hellwig
0 siblings, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2011-01-20 12:07 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
[-- Attachment #1.1.1: Type: text/plain, Size: 2706 bytes --]
Hello Dave,
On Thu, Jan 20, 2011 at 10:06:05PM +1100, Dave Chinner wrote:
> ....
> > in other words xfssyncd and xfsbufd now alternate at 18s intervals
> > keeping the drive busy with nothing constructive hours after the
> > last write to the drive.
[..]
> > meta-data=/dev/sdc1 isize=256 agcount=4, agsize=61047552 blks
> > = sectsz=512 attr=2
> > data = bsize=4096 blocks=244190208, imaxpct=25
> > = sunit=0 swidth=0 blks
> > naming =version 2 bsize=4096 ascii-ci=0
> > log =internal bsize=4096 blocks=119233, version=2
> > = sectsz=512 sunit=0 blks, lazy-count=0
> ^^^^^^^^^^^^
> > realtime =none extsz=4096 blocks=0, rtextents=0
>
> It's something to do with lazy-count=0. I'm look into it when I get
> the chance - I almost never test w/ lazy-count=0 because =1 is
> the default value.
thanks for the tip. looks like I used xfsprogs 3.0.3 (the latest version in the gentoo stable tree) to mkfs these filesystems. could this antique version somehow be the root of the problem?
anyhow, changed it to:
meta-data=/dev/sdc1 isize=256 agcount=4, agsize=61047552 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=244190208, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=119233, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
used:
xfsprogs-3.0.3
Linux starshine 2.6.38-rc1-s001 #1 SMP Thu Jan 20 10:49:20 EET 2011 x86_64 AMD Athlon(tm) II Neo N36L Dual-Core Processor AuthenticAMD GNU/Linux
Seagate Barracuda ES.2 ST31000340NS
Hitachi HUA722020ALA330
default (for 3.0.3 xfsprogs) mkfs options, 'noatime,nodev,nosuid,noexec' as mount options.
and the behaviour did not change at all, trace attached.
it's very easy to reproduce this. all it took was
reboot
cp -a /usr/src/linux* /mnt/foo
sleep 1800
grab trace
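Spelled out as one script, using the tracing commands Dave gave earlier in the thread (the copy destination /mnt/foo is the one from the steps above; run as root after a fresh boot):

```shell
#!/bin/sh
# Reproduce the idle-writeback pattern and capture an XFS event trace.
cp -a /usr/src/linux* /mnt/foo          # generate one burst of writes
sleep 1800                              # wait; writeback should have idled
echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
sleep 100                               # long enough for several 18s cycles
cat /sys/kernel/debug/tracing/trace > trace.out
echo 0 > /sys/kernel/debug/tracing/events/xfs/enable
```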
cheers,
peter
>
> I'd recommend that you convert the fs to lazy-count=1 when you get a
> chance, anyway, because of the fact it reduces the latency of
> transactions significantly...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
--
petre rodan
<petre.rodan@simplex.ro>
Technical Manager
Simplex SRL, Bucharest
[-- Attachment #1.1.2: trace --]
[-- Type: text/plain, Size: 38464 bytes --]
# tracer: nop
#
# TASK-PID CPU# TIMESTAMP FUNCTION
# | | | | |
xfssyncd/sdc1-1409 [000] 2198.419444: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45622272 grant_write_cycle 1 grant_write_bytes 45622272 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419449: xfs_log_grant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45622272 grant_write_cycle 1 grant_write_bytes 45622272 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419451: xfs_log_grant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45625012 grant_write_cycle 1 grant_write_bytes 45625012 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419453: xfs_buf_lock: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2198.419455: xfs_buf_lock_done: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2198.419456: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2198.419458: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_item_init
xfssyncd/sdc1-1409 [000] 2198.419460: xfs_trans_getsb: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880106b4b570 liflags
xfssyncd/sdc1-1409 [000] 2198.419462: xfs_trans_log_buf: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880106b4b570 liflags
xfssyncd/sdc1-1409 [000] 2198.419465: xfs_buf_item_size: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106b4b570 liflags
xfssyncd/sdc1-1409 [000] 2198.419467: xfs_buf_item_format: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106b4b570 liflags
xfssyncd/sdc1-1409 [000] 2198.419468: xfs_buf_item_pin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106b4b570 liflags
xfssyncd/sdc1-1409 [000] 2198.419474: xfs_log_done_nonperm: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45625012 grant_write_cycle 1 grant_write_bytes 45625012 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419475: xfs_log_ungrant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45625012 grant_write_cycle 1 grant_write_bytes 45625012 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419476: xfs_log_ungrant_sub: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45625012 grant_write_cycle 1 grant_write_bytes 45625012 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419476: xfs_log_ungrant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45623012 grant_write_cycle 1 grant_write_bytes 45623012 curr_cycle 1 curr_block 89106 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2198.419479: xfs_trans_commit_lsn: dev 8:33 trans 0xffff880101c58b58 commit_lsn 0x100015c12
xfssyncd/sdc1-1409 [000] 2198.419482: xfs_buf_item_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 2 bliflags |DIRTY lidesc 0x (null) liflags
xfssyncd/sdc1-1409 [000] 2198.419483: xfs_buf_delwri_queue: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES caller xfs_buf_unlock
xfssyncd/sdc1-1409 [000] 2198.419488: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [000] 2198.419489: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [000] 2198.419493: xfs_buf_iorequest: dev 8:33 bno 0x3a398c32 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xlog_bdstrat
xfssyncd/sdc1-1409 [000] 2198.419494: xfs_buf_hold: dev 8:33 bno 0x3a398c32 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
xfssyncd/sdc1-1409 [000] 2198.419530: xfs_buf_rele: dev 8:33 bno 0x3a398c32 len 0x8000 hold 2 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
<idle>-0 [000] 2198.439397: xfs_buf_ioerror: dev 8:33 bno 0x3a398c32 len 0x8000 hold 1 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_bio_end_io
<idle>-0 [000] 2198.439402: xfs_buf_iodone: dev 8:33 bno 0x3a398c32 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller _xfs_buf_ioend
kworker/0:1-10 [000] 2198.439444: xfs_buf_item_committed: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags
kworker/0:1-10 [000] 2198.439447: xfs_buf_item_unpin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags IN_AIL
xfsbufd/sdc1-1407 [001] 2199.621878: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2199.621882: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2200.824324: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2200.824329: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2202.026783: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2202.026787: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2203.229229: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2203.229233: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2204.431687: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2204.431692: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2205.634134: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2205.634139: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2206.836595: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2206.836600: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2208.039039: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2208.039044: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2209.241491: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2209.241495: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2210.443954: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2210.443959: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2211.646365: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2211.646370: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2212.844825: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2212.844830: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2214.043282: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2214.043287: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2215.241740: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2215.241745: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2216.440198: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [001] 2216.440201: xfs_buf_delwri_split: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfsbufd
xfsbufd/sdc1-1407 [001] 2216.440204: xfs_buf_iorequest: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_bdstrat_cb
xfsbufd/sdc1-1407 [001] 2216.440205: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
xfsbufd/sdc1-1407 [001] 2216.440224: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
<idle>-0 [001] 2216.440695: xfs_buf_ioerror: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_bio_end_io
<idle>-0 [001] 2216.440700: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller _xfs_buf_ioend
kworker/1:2-1448 [001] 2216.440737: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone
kworker/1:2-1448 [001] 2216.440739: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_callbacks
kworker/1:2-1448 [001] 2216.440740: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
kworker/1:2-1448 [001] 2216.440741: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
xfssyncd/sdc1-1409 [001] 2234.393111: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45623296 grant_write_cycle 1 grant_write_bytes 45623296 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393113: xfs_log_grant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45623296 grant_write_cycle 1 grant_write_bytes 45623296 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393115: xfs_log_grant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45626036 grant_write_cycle 1 grant_write_bytes 45626036 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393117: xfs_buf_lock: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [001] 2234.393118: xfs_buf_lock_done: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [001] 2234.393119: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [001] 2234.393122: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_item_init
xfssyncd/sdc1-1409 [001] 2234.393123: xfs_trans_getsb: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880101f1f370 liflags
xfssyncd/sdc1-1409 [001] 2234.393125: xfs_trans_log_buf: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880101f1f370 liflags
xfssyncd/sdc1-1409 [001] 2234.393128: xfs_buf_item_size: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880101f1f370 liflags
xfssyncd/sdc1-1409 [001] 2234.393130: xfs_buf_item_format: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880101f1f370 liflags
xfssyncd/sdc1-1409 [001] 2234.393131: xfs_buf_item_pin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880101f1f370 liflags
xfssyncd/sdc1-1409 [001] 2234.393137: xfs_log_done_nonperm: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45626036 grant_write_cycle 1 grant_write_bytes 45626036 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393138: xfs_log_ungrant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45626036 grant_write_cycle 1 grant_write_bytes 45626036 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393140: xfs_log_ungrant_sub: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45626036 grant_write_cycle 1 grant_write_bytes 45626036 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393141: xfs_log_ungrant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45624036 grant_write_cycle 1 grant_write_bytes 45624036 curr_cycle 1 curr_block 89108 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [001] 2234.393143: xfs_trans_commit_lsn: dev 8:33 trans 0xffff88011da67b58 commit_lsn 0x100015c14
xfssyncd/sdc1-1409 [001] 2234.393146: xfs_buf_item_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 2 bliflags |DIRTY lidesc 0x (null) liflags
xfssyncd/sdc1-1409 [001] 2234.393147: xfs_buf_delwri_queue: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES caller xfs_buf_unlock
xfssyncd/sdc1-1409 [001] 2234.393152: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [001] 2234.393153: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [001] 2234.393157: xfs_buf_iorequest: dev 8:33 bno 0x3a398c34 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xlog_bdstrat
xfssyncd/sdc1-1409 [001] 2234.393158: xfs_buf_hold: dev 8:33 bno 0x3a398c34 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
xfssyncd/sdc1-1409 [001] 2234.393194: xfs_buf_rele: dev 8:33 bno 0x3a398c34 len 0x8000 hold 2 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
<idle>-0 [001] 2234.411005: xfs_buf_ioerror: dev 8:33 bno 0x3a398c34 len 0x8000 hold 1 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_bio_end_io
<idle>-0 [001] 2234.411009: xfs_buf_iodone: dev 8:33 bno 0x3a398c34 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller _xfs_buf_ioend
kworker/1:2-1448 [001] 2234.411044: xfs_buf_item_committed: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags
kworker/1:2-1448 [001] 2234.411047: xfs_buf_item_unpin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags IN_AIL
xfsbufd/sdc1-1407 [000] 2235.591555: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2235.591560: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2236.790013: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2236.790018: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2237.988479: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2237.988484: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2239.186952: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2239.186956: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2240.385424: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2240.385428: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2241.583884: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2241.583888: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2242.782341: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2242.782345: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2243.980799: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2243.980804: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2245.179256: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2245.179260: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2246.377715: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2246.377719: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2247.576172: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2247.576177: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2248.774630: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2248.774635: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2249.973087: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2249.973092: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2251.171547: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2251.171552: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2252.370002: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2252.370006: xfs_buf_delwri_split: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfsbufd
xfsbufd/sdc1-1407 [000] 2252.370008: xfs_buf_iorequest: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_bdstrat_cb
xfsbufd/sdc1-1407 [000] 2252.370009: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
xfsbufd/sdc1-1407 [000] 2252.370029: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iorequest
<idle>-0 [001] 2252.370505: xfs_buf_ioerror: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller xfs_buf_bio_end_io
<idle>-0 [001] 2252.370510: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|PAGES caller _xfs_buf_ioend
kworker/1:2-1448 [001] 2252.370547: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone
kworker/1:2-1448 [001] 2252.370549: xfs_buf_iodone: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_callbacks
kworker/1:2-1448 [001] 2252.370551: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
kworker/1:2-1448 [001] 2252.370551: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_iodone_work
xfssyncd/sdc1-1409 [000] 2270.362854: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45624320 grant_write_cycle 1 grant_write_bytes 45624320 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362858: xfs_log_grant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45624320 grant_write_cycle 1 grant_write_bytes 45624320 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362860: xfs_log_grant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45627060 grant_write_cycle 1 grant_write_bytes 45627060 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362862: xfs_buf_lock: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2270.362863: xfs_buf_lock_done: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2270.362864: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 1 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_getsb
xfssyncd/sdc1-1409 [000] 2270.362866: xfs_buf_hold: dev 8:33 bno 0x0 len 0x200 hold 2 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|PAGES caller xfs_buf_item_init
xfssyncd/sdc1-1409 [000] 2270.362868: xfs_trans_getsb: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880106aa3350 liflags
xfssyncd/sdc1-1409 [000] 2270.362870: xfs_trans_log_buf: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags lidesc 0xffff880106aa3350 liflags
xfssyncd/sdc1-1409 [000] 2270.362873: xfs_buf_item_size: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106aa3350 liflags
xfssyncd/sdc1-1409 [000] 2270.362875: xfs_buf_item_format: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106aa3350 liflags
xfssyncd/sdc1-1409 [000] 2270.362876: xfs_buf_item_pin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 1 bliflags |DIRTY|LOGGED lidesc 0xffff880106aa3350 liflags
xfssyncd/sdc1-1409 [000] 2270.362882: xfs_log_done_nonperm: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45627060 grant_write_cycle 1 grant_write_bytes 45627060 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362883: xfs_log_ungrant_enter: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45627060 grant_write_cycle 1 grant_write_bytes 45627060 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362883: xfs_log_ungrant_sub: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45627060 grant_write_cycle 1 grant_write_bytes 45627060 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362884: xfs_log_ungrant_exit: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 0 t_curr_res 2000 t_unit_res 2740 t_flags reserveq empty writeq empty grant_reserve_cycle 1 grant_reserve_bytes 45625060 grant_write_cycle 1 grant_write_bytes 45625060 curr_cycle 1 curr_block 89110 tail_cycle 1 tail_block 35956
xfssyncd/sdc1-1409 [000] 2270.362886: xfs_trans_commit_lsn: dev 8:33 trans 0xffff880101c46a40 commit_lsn 0x100015c16
xfssyncd/sdc1-1409 [000] 2270.362889: xfs_buf_item_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 0 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES recur 0 refcount 2 bliflags |DIRTY lidesc 0x (null) liflags
xfssyncd/sdc1-1409 [000] 2270.362890: xfs_buf_delwri_queue: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES caller xfs_buf_unlock
xfssyncd/sdc1-1409 [000] 2270.362897: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [000] 2270.362898: xfs_buf_rele: dev 8:33 bno 0x0 len 0x200 hold 4 pincount 1 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_item_unlock
xfssyncd/sdc1-1409 [000] 2270.362902: xfs_buf_iorequest: dev 8:33 bno 0x3a398c36 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xlog_bdstrat
xfssyncd/sdc1-1409 [000] 2270.362903: xfs_buf_hold: dev 8:33 bno 0x3a398c36 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
xfssyncd/sdc1-1409 [000] 2270.362939: xfs_buf_rele: dev 8:33 bno 0x3a398c36 len 0x8000 hold 2 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_iorequest
<idle>-0 [001] 2270.382694: xfs_buf_ioerror: dev 8:33 bno 0x3a398c36 len 0x8000 hold 1 pincount 0 lock 0 error 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller xfs_buf_bio_end_io
<idle>-0 [001] 2270.382699: xfs_buf_iodone: dev 8:33 bno 0x3a398c36 len 0x8000 hold 1 pincount 0 lock 0 flags WRITE|MAPPED|ASYNC|DONE|ORDERED|PAGES|RUN_QUEUES|0x2000 caller _xfs_buf_ioend
kworker/1:2-1448 [001] 2270.382741: xfs_buf_item_committed: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags
kworker/1:2-1448 [001] 2270.382745: xfs_buf_item_unpin: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 1 lock 1 flags |MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q recur 0 refcount 1 bliflags |DIRTY lidesc 0x (null) liflags IN_AIL
xfsbufd/sdc1-1407 [000] 2271.565309: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2271.565314: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2272.763759: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2272.763763: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2273.962216: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2273.962220: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2275.160678: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2275.160683: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2276.359131: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2276.359135: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2277.557589: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2277.557594: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2278.756046: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2278.756051: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2279.954503: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2279.954508: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2281.152961: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2281.152965: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2282.351419: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2282.351423: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2283.549876: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2283.549880: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2284.748334: xfs_buf_cond_lock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 0 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
xfsbufd/sdc1-1407 [000] 2284.748338: xfs_buf_unlock: dev 8:33 bno 0x0 len 0x200 hold 3 pincount 0 lock 1 flags MAPPED|ASYNC|DONE|DELWRI|PAGES|DELWRI_Q caller xfs_buf_delwri_split
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: xfssyncd and disk spin down
2011-01-20 12:07 ` Petre Rodan
@ 2011-01-20 13:24 ` Christoph Hellwig
0 siblings, 0 replies; 29+ messages in thread
From: Christoph Hellwig @ 2011-01-20 13:24 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Thu, Jan 20, 2011 at 02:07:00PM +0200, Petre Rodan wrote:
> thanks for the tip. looks like I'm using xfsprogs 3.0.3 (latest version in the gentoo stable tree) to mkfs these filesystems. can the antique version be somehow the root of the problem?
Yes, lazy-count has been the default since xfsprogs 3.1.0.
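As a hedged aside for readers following along (the mount point and device name are illustrative, borrowed from earlier in the thread, and the commands assume xfsprogs is installed): `xfs_info` shows whether lazy superblock counters are enabled, and `xfs_admin` can enable them on an existing filesystem without recreating it, provided it is cleanly unmounted first:

```shell
# Check the current setting on a mounted filesystem (path is illustrative).
xfs_info /mnt/space | grep -o 'lazy-count=[01]'

# Enable lazy superblock counters on an existing filesystem.
# The filesystem must be cleanly unmounted; the device name is illustrative.
umount /mnt/space
xfs_admin -c 1 /dev/sdb1
mount /mnt/space
```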
* Re: xfssyncd and disk spin down
2011-01-20 10:01 ` Petre Rodan
2011-01-20 11:06 ` Dave Chinner
@ 2011-01-20 23:43 ` Dave Chinner
2011-02-10 20:42 ` Petre Rodan
1 sibling, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2011-01-20 23:43 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Thu, Jan 20, 2011 at 12:01:43PM +0200, Petre Rodan wrote:
> On Fri, Dec 31, 2010 at 11:13:23AM +1100, Dave Chinner wrote:
> > Ok, I can see the problem. The original patch I tested:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00026.html
> >
> > Made the log covering dummy transaction a synchronous transaction so
> > that the log was written and the superblock unpinned immediately to
> > allow the xfsbufd to write back the superblock and empty the AIL
> > before the next log covering check.
> >
> > On review, the log covering dummy transaction got changed to an
> > async transaction, so the superblock buffer is not unpinned
> > immediately. This was the patch committed:
> >
> > http://oss.sgi.com/archives/xfs/2010-08/msg00197.html
> >
> > As a result, the success of log covering and idling is then
> > dependent on whether the log gets written to disk to unpin the
> > superblock buffer before the next xfssyncd run. It seems that there
> > is a large chance that this log write does not happen, so the
> > filesystem never idles correctly. I've reproduced it here, and only
> > in one test out of ten did the filesystem enter an idle state
> > correctly. I guess I was unlucky enough to hit that 1-in-10 case
> > when I tested the modified patch.
> >
> > I'll cook up a patch to make the log covering behave like the
> > original patch I sent...
>
> I presume that the new fix should be provided by "xfs: ensure log
> covering transactions are synchronous", so I tested 2.6.37 patched
> with it and then 2.6.38_rc1 that has it included..
.....
> in other words xfssyncd and xfsbufd now alternate at 18s intervals
> keeping the drive busy with nothing constructive hours after the
> last write to the drive.
This is a different problem, and not one I've seen before. Looking
at the traces, it appears that we have not emptied the AIL. At
least, that's what I'm assuming at this point because log IO
completion is not updating log tail. When we start a log IO, we set
the log header lsn to the current head:
> xfssyncd/sdc1-1413 [000] 3356.093456: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
Which in this case is: curr_cycle 2 curr_block 836959
When the log IO completes, that value gets written to the
l_last_sync_lsn. When the AIL tail is removed, the tail lsn is
updated to the new tail item. If the AIL is empty, then the
l_last_sync_lsn is used. That means the next dummy transaction
made to cover the log should have the cycle/block of the above
current cycle.
Instead, what I see is that the next dummy transaction shows:
> xfssyncd/sdc1-1413 [000] 3392.067122: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
The current head has moved: curr_cycle 2 curr_block 836961
But the tail hasn't: tail_cycle 2 tail_block 810683
So effectively we've got some item on the AIL that we haven't
flushed and isn't being flushed by xfssyncd. That's the problem I
need to get to the bottom of and it also explains why it's an
intermittent problem...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2011-01-20 23:43 ` Dave Chinner
@ 2011-02-10 20:42 ` Petre Rodan
2011-02-10 22:18 ` Dave Chinner
0 siblings, 1 reply; 29+ messages in thread
From: Petre Rodan @ 2011-02-10 20:42 UTC (permalink / raw)
To: Dave Chinner; +Cc: xfs
Hello Dave,
On Fri, Jan 21, 2011 at 10:43:10AM +1100, Dave Chinner wrote:
> .....
> > in other words xfssyncd and xfsbufd now alternate at 18s intervals
> > keeping the drive busy with nothing constructive hours after the
> > last write to the drive.
>
> This is a different problem, and not one I've seen before. Looking
> at the traces, it appears that we have not emptied the AIL. At
> least, that's what I'm assuming at this point because log IO
> completion is not updating log tail. When we start a log IO, we set
> the log header lsn to the current head:
>
> > xfssyncd/sdc1-1413 [000] 3356.093456: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
>
> Which in this case is: curr_cycle 2 curr_block 836959
>
> When the log IO completes, that value gets written to the
> l_last_sync_lsn. When the AIL tail is removed, the tail lsn is
> updated to the new tail item. If the AIL is empty, then the
> l_last_sync_lsn is used. That means the next dummy transaction
> made to cover the log should have the cycle/block of the above
> current cycle.
>
> Instead, what I see is that the next dummy transaction shows:
>
> > xfssyncd/sdc1-1413 [000] 3392.067122: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
>
> The current head has moved: curr_cycle 2 curr_block 836961
>
> But the tail hasn't: tail_cycle 2 tail_block 810683
>
> So effectively we've got some item on the AIL that we haven't
> flushed and isn't being flushed by xfssyncd. That's the problem I
> need to get to the bottom of and it also explains why it's an
> intermittent problem...
I have been watching the stream of patches going into 2.6.38, but I probably missed the one that might be the answer to the problem above. Can you please tell me which one to try, or can I help with anything?
much appreciated,
peter
* Re: xfssyncd and disk spin down
2011-02-10 20:42 ` Petre Rodan
@ 2011-02-10 22:18 ` Dave Chinner
2011-02-14 18:04 ` Brian Duncan
0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2011-02-10 22:18 UTC (permalink / raw)
To: Petre Rodan; +Cc: xfs
On Thu, Feb 10, 2011 at 10:42:54PM +0200, Petre Rodan wrote:
>
> Hello Dave,
>
> On Fri, Jan 21, 2011 at 10:43:10AM +1100, Dave Chinner wrote:
> > .....
> > > in other words xfssyncd and xfsbufd now alternate at 18s intervals
> > > keeping the drive busy with nothing constructive hours after the
> > > last write to the drive.
> >
> > This is a different problem, and not one I've seen before. Looking
> > at the traces, it appears that we have not emptied the AIL. At
> > least, that's what I'm assuming at this point because log IO
> > completion is not updating log tail. When we start a log IO, we set
> > the log header lsn to the current head:
> >
> > > xfssyncd/sdc1-1413 [000] 3356.093456: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
> >
> > Which in this case is: curr_cycle 2 curr_block 836959
> >
> > When the log IO completes, that value gets written to the
> > l_last_sync_lsn. When the AIL tail is removed, the tail lsn is
> > updated to the new tail item. If the AIL is empty, then the
> > l_last_sync_lsn is used. That means the next dummy transaction
> > made to cover the log should have the cycle/block of the above
> > current cycle.
> >
> > Instead, what I see is that the next dummy transaction shows:
> >
> > > xfssyncd/sdc1-1413 [000] 3392.067122: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
> >
> > The current head has moved: curr_cycle 2 curr_block 836961
> >
> > But the tail hasn't: tail_cycle 2 tail_block 810683
> >
> > So effectively we've got some item on the AIL that we haven't
> > flushed and isn't being flushed by xfssyncd. That's the problem I
> > need to get to the bottom of and it also explains why it's an
> > intermittent problem...
>
> I have been watching the stream of patches that go into 2.6.38,
> but I probably missed the one that might be the answer to the
> problem above. can you please tell me which one to try, or can I
> help with anything?
I know what the problem is, but I haven't had time to work out or
code a fix. I've been spending my time trying to work out the cause of
bugs that are triggering hangs, crashes or corruptions here...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: xfssyncd and disk spin down
2011-02-10 22:18 ` Dave Chinner
@ 2011-02-14 18:04 ` Brian Duncan
2011-05-31 14:40 ` Brian Duncan
0 siblings, 1 reply; 29+ messages in thread
From: Brian Duncan @ 2011-02-14 18:04 UTC (permalink / raw)
To: linux-xfs
Dave Chinner <david <at> fromorbit.com> writes:
>
> On Thu, Feb 10, 2011 at 10:42:54PM +0200, Petre Rodan wrote:
> >
> > Hello Dave,
> >
> > On Fri, Jan 21, 2011 at 10:43:10AM +1100, Dave Chinner wrote:
> > > .....
> > > > in other words xfssyncd and xfsbufd now alternate at 18s intervals
> > > > keeping the drive busy with nothing constructive hours after the
> > > > last write to the drive.
> > >
> > > This is a different problem, and not one I've seen before. Looking
> > > at the traces, it appears that we have not emptied the AIL. At
> > > least, that's what I'm assuming at this point because log IO
> > > completion is not updating log tail. When we start a log IO, we set
> > > the log header lsn to the current head:
> > >
> > > > xfssyncd/sdc1-1413 [000] 3356.093456: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428523008 grant_write_cycle 2 grant_write_bytes 428523008 curr_cycle 2 curr_block 836959 tail_cycle 2 tail_block 810683
> > >
> > > Which in this case is: curr_cycle 2 curr_block 836959
> > >
> > > When the log IO completes, that value gets written to the
> > > l_last_sync_lsn. When the AIL tail is removed, the tail lsn is
> > > updated to the new tail item. If the AIL is empty, then the
> > > l_last_sync_lsn is used. That means the next dummy transaction
> > > made to cover the log should have the cycle/block of the above
> > > current cycle.
> > >
> > > Instead, what I see is that the next dummy transaction shows:
> > >
> > > > xfssyncd/sdc1-1413 [000] 3392.067122: xfs_log_reserve: dev 8:33 type DUMMY1 t_ocnt 1 t_cnt 1 t_curr_res 2740 t_unit_res 2740 t_flags XLOG_TIC_INITED reserveq empty writeq empty grant_reserve_cycle 2 grant_reserve_bytes 428524032 grant_write_cycle 2 grant_write_bytes 428524032 curr_cycle 2 curr_block 836961 tail_cycle 2 tail_block 810683
> > >
> > > The current head has moved: curr_cycle 2 curr_block 836961
> > >
> > > But the tail hasn't: tail_cycle 2 tail_block 810683
> > >
> > > So effectively we've got some item on the AIL that we haven't
> > > flushed and isn't being flushed by xfssyncd. That's the problem I
> > > need to get to the bottom of and it also explains why it's an
> > > intermittent problem...
> >
> > I have been watching the stream of patches that go into 2.6.38,
> > but I probably missed the one that might be the answer to the
> > problem above. can you please tell me which one to try, or can I
> > help with anything?
>
> I know what the problem is, but I haven't had time to work out or
> code a fix. I've been spending my time trying to work out the cause of
> bugs that are triggering hangs, crashes or corruptions here...
>
> Cheers,
>
> Dave.
FYI,
I found this thread after searching. I upgraded one of my servers and
encountered this issue with XFS keeping drives from going to sleep with its
constant writing (after a copy to a volume).
I just wanted to note that I have an XFS volume that does have lazy-count set to
1 on which this is occurring, with 2.6.327.
meta-data=/dev/sdf1              isize=256    agcount=4, agsize=122094500 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=488378000, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=238465, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
After writing any data to this drive, the following would occur indefinitely
(until I unmounted and remounted the drive):
[ 2839.120064] xfssyncd/sdf1(756): WRITE block 1954432155 on sdf1 (2 sectors)
[ 2869.120065] xfssyncd/sdf1(756): WRITE block 1954432157 on sdf1 (2 sectors)
[ 2899.120066] xfssyncd/sdf1(756): WRITE block 1954432159 on sdf1 (2 sectors)
[ 2929.120065] xfssyncd/sdf1(756): WRITE block 1954432161 on sdf1 (2 sectors)
[ 2959.120060] xfssyncd/sdf1(756): WRITE block 1954432163 on sdf1 (2 sectors)
[ 2989.120061] xfssyncd/sdf1(756): WRITE block 1954432165 on sdf1 (2 sectors)
[ 3019.120058] xfssyncd/sdf1(756): WRITE block 1954432167 on sdf1 (2 sectors)
[ 3049.120056] xfssyncd/sdf1(756): WRITE block 1954432169 on sdf1 (2 sectors)
[ 3079.120058] xfssyncd/sdf1(756): WRITE block 1954432171 on sdf1 (2 sectors)
[ 3109.120059] xfssyncd/sdf1(756): WRITE block 1954432173 on sdf1 (2 sectors)
[ 3139.120065] xfssyncd/sdf1(756): WRITE block 1954432175 on sdf1 (2 sectors)
[ 3169.120061] xfssyncd/sdf1(756): WRITE block 1954432177 on sdf1 (2 sectors)
[ 3199.120062] xfssyncd/sdf1(756): WRITE block 1954432179 on sdf1 (2 sectors)
[ 3229.120060] xfssyncd/sdf1(756): WRITE block 1954432181 on sdf1 (2 sectors)
[ 3259.120067] xfssyncd/sdf1(756): WRITE block 1954432183 on sdf1 (2 sectors)
* Re: xfssyncd and disk spin down
2011-02-14 18:04 ` Brian Duncan
@ 2011-05-31 14:40 ` Brian Duncan
2011-05-31 15:16 ` Michael Weissenbacher
0 siblings, 1 reply; 29+ messages in thread
From: Brian Duncan @ 2011-05-31 14:40 UTC (permalink / raw)
To: linux-xfs
> Dave Chinner <david <at> fromorbit.com> writes:
> >
> > I know what the problem is, but I haven't had time to work out or
> > code a fix. I've been spending my time trying to work out the cause of
> > bugs that are triggering hangs, crashes or corruptions here...
> >
> > Cheers,
> >
> > Dave.
>
>
Can anyone please comment if this bug was ever fixed?
I have searched and see no further talk about this.
echo 720000 > /proc/sys/fs/xfs/xfssyncd_centisecs allows a drive to sleep more
often, but still is impacted by this.
Thanks!
* Re: xfssyncd and disk spin down
2011-05-31 14:40 ` Brian Duncan
@ 2011-05-31 15:16 ` Michael Weissenbacher
2011-06-01 23:37 ` Dave Chinner
0 siblings, 1 reply; 29+ messages in thread
From: Michael Weissenbacher @ 2011-05-31 15:16 UTC (permalink / raw)
To: xfs
Brian wrote:
>
> Can anyone please comment if this bug was ever fixed?
>
> I have searched and see no further talk about this.
>
> echo 720000 > /proc/sys/fs/xfs/xfssyncd_centisecs allows a drive to sleep more
> often, but still is impacted by this.
>
> Thanks!
>
AFAICT nothing has changed in this regard. I still see the same behavior
with kernel 2.6.39. Thanks to this thread I found out about the problem of
accumulating head unloads, which I was able to mitigate by
setting xfssyncd_centisecs to a higher value.
cheers,
Michael
* Re: xfssyncd and disk spin down
2011-05-31 15:16 ` Michael Weissenbacher
@ 2011-06-01 23:37 ` Dave Chinner
2011-07-11 4:02 ` Brian Duncan
0 siblings, 1 reply; 29+ messages in thread
From: Dave Chinner @ 2011-06-01 23:37 UTC (permalink / raw)
To: Michael Weissenbacher; +Cc: xfs
On Tue, May 31, 2011 at 05:16:29PM +0200, Michael Weissenbacher wrote:
> Brian wrote:
> >
> > Can anyone please comment if this bug was ever fixed?
> >
> > I have searched and see no further talk about this.
> >
> > echo 720000 > /proc/sys/fs/xfs/xfssyncd_centisecs allows a drive to sleep more
> > often, but still is impacted by this.
> >
> > Thanks!
> >
> AFAICT nothing has changed in this regard. I still see the same behavior
> with kernel 2.6.39. Thanks to this thread I found out about the problem of
> accumulating head unloads, which I was able to mitigate by
> setting xfssyncd_centisecs to a higher value.
commit fd074841cfe01b006465fb9388091012585e8dfb
Author: Dave Chinner <dchinner@redhat.com>
Date: Fri Apr 8 12:45:07 2011 +1000
xfs: push the AIL from memory reclaim and periodic sync
When we are short on memory, we want to expedite the cleaning of
dirty objects. Hence when we run short on memory, we need to kick
the AIL flushing into action to clean as many dirty objects as
quickly as possible. To implement this, sample the lsn of the log
item at the head of the AIL and use that as the push target for the
AIL flush.
Further, we keep items in the AIL that are dirty that are not
tracked any other way, so we can get objects sitting in the AIL that
don't get written back until the AIL is pushed. Hence to get the
filesystem to the idle state, we might need to push the AIL to flush
out any remaining dirty objects sitting in the AIL. This requires
the same push mechanism as the reclaim push.
This patch also renames xfs_trans_ail_tail() to xfs_ail_min_lsn() to
match the new xfs_ail_max_lsn() function introduced in this patch.
Similarly for xfs_trans_ail_push -> xfs_ail_push.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Alex Elder <aelder@sgi.com>
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: xfssyncd and disk spin down
2011-06-01 23:37 ` Dave Chinner
@ 2011-07-11 4:02 ` Brian Duncan
2011-07-11 14:34 ` Michael Weissenbacher
0 siblings, 1 reply; 29+ messages in thread
From: Brian Duncan @ 2011-07-11 4:02 UTC (permalink / raw)
To: linux-xfs
Thanks Dave for the confirmation that this was patched.
Can anyone confirm that this patch is working within 2.6.39?
The post above Dave's last says the bug is still present in 2.6.39 from their
own testing, but I see in the changelog for 2.6.39 that Dave's patch/commit ID
is listed as part of that kernel update.
http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.39
Thanks
* Re: xfssyncd and disk spin down
2011-07-11 4:02 ` Brian Duncan
@ 2011-07-11 14:34 ` Michael Weissenbacher
0 siblings, 0 replies; 29+ messages in thread
From: Michael Weissenbacher @ 2011-07-11 14:34 UTC (permalink / raw)
To: xfs
Hi Brian!
> Thanks Dave for the confirmation that this was patched.
>
> Can anyone confirm that this patch is working within 2.6.39?
>
> The post above Dave's last says the bug is still present in 2.6.39 from their
> own testing.. but I see in the changelog for 2.6.39 that Dave's patch/commit ID
> is listed as being part of that Kernel update.
>
> http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.39
>
I can confirm that it works without problems on 2.6.39 - you can dismiss
my previous post.
hth,
Michael
end of thread, other threads:[~2011-07-11 14:34 UTC | newest]
Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-12-23 16:55 xfssyncd and disk spin down Petre Rodan
2010-12-23 19:29 ` Stan Hoeppner
2010-12-23 21:16 ` Petre Rodan
2010-12-24 0:54 ` Stan Hoeppner
2010-12-24 5:15 ` Stan Hoeppner
2010-12-24 6:02 ` Petre Rodan
2010-12-24 23:00 ` Stan Hoeppner
2010-12-25 3:36 ` Eric Sandeen
2010-12-25 4:32 ` Stan Hoeppner
2010-12-24 18:17 ` Eric Sandeen
2010-12-25 12:09 ` Petre Rodan
2010-12-27 2:19 ` Dave Chinner
2010-12-27 6:16 ` Petre Rodan
2010-12-27 14:07 ` Dave Chinner
2010-12-27 17:19 ` Petre Rodan
2010-12-31 0:13 ` Dave Chinner
2011-01-20 10:01 ` Petre Rodan
2011-01-20 11:06 ` Dave Chinner
2011-01-20 12:07 ` Petre Rodan
2011-01-20 13:24 ` Christoph Hellwig
2011-01-20 23:43 ` Dave Chinner
2011-02-10 20:42 ` Petre Rodan
2011-02-10 22:18 ` Dave Chinner
2011-02-14 18:04 ` Brian Duncan
2011-05-31 14:40 ` Brian Duncan
2011-05-31 15:16 ` Michael Weissenbacher
2011-06-01 23:37 ` Dave Chinner
2011-07-11 4:02 ` Brian Duncan
2011-07-11 14:34 ` Michael Weissenbacher