All of lore.kernel.org
* 2.6.35 Regression: Ages spent discarding blocks that weren't used!
@ 2010-08-04  1:40 Nigel Cunningham
  2010-08-04  8:59 ` Stefan Richter
                   ` (3 more replies)
  0 siblings, 4 replies; 30+ messages in thread
From: Nigel Cunningham @ 2010-08-04  1:40 UTC (permalink / raw)
  To: LKML, pm list

Hi all.

I've just given hibernation a go under 2.6.35, and at first I thought 
there was some sort of hang in freezing processes. The computer sat 
there for ages, apparently doing nothing. I switched from TuxOnIce to 
swsusp to see whether the problem was specific to my code, but no - it 
was there too. I used the nifty new kdb support to get a backtrace, which was:

get_swap_page_of_type
discard_swap_cluster
blk_dev_issue_discard
wait_for_completion

Adding a printk in discard_swap_cluster gives the following:

[   46.758330] Discarding 256 pages from bdev 800003 beginning at page 640377.
[   47.003363] Discarding 256 pages from bdev 800003 beginning at page 640633.
[   47.246514] Discarding 256 pages from bdev 800003 beginning at page 640889.

...

[  221.877465] Discarding 256 pages from bdev 800003 beginning at page 826745.
[  222.121284] Discarding 256 pages from bdev 800003 beginning at page 827001.
[  222.365908] Discarding 256 pages from bdev 800003 beginning at page 827257.
[  222.610311] Discarding 256 pages from bdev 800003 beginning at page 827513.

So allocating 4GB of swap on my SSD now takes 176 seconds instead of 
virtually no time at all. (This code is completely unchanged from 2.6.34).

I have a couple of questions:

1) As far as I can see, there haven't been any changes in mm/swapfile.c 
that would cause this slowdown, so something in the block layer has 
(from my point of view) regressed. Is this a known issue?

2) Why are we calling discard_swap_cluster anyway? The swap was unused 
and we're allocating it. I could understand calling it when freeing 
swap, but when allocating?

Regards,

Nigel


end of thread, other threads:[~2010-08-14 11:44 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-08-04  1:40 2.6.35 Regression: Ages spent discarding blocks that weren't used! Nigel Cunningham
2010-08-04  8:59 ` Stefan Richter
2010-08-04  9:16   ` Nigel Cunningham
2010-08-04 12:44 ` Mark Lord
2010-08-04 18:02   ` Martin K. Petersen
2010-08-04 21:22   ` Nigel Cunningham
2010-08-05  3:58     ` Hugh Dickins
2010-08-05  6:28       ` Nigel Cunningham
2010-08-06  1:15         ` Hugh Dickins
2010-08-06  4:40           ` Nigel Cunningham
2010-08-06 22:07             ` Hugh Dickins
2010-08-07 22:47               ` Nigel Cunningham
2010-08-13 11:54               ` Christoph Hellwig
2010-08-13 18:15                 ` Hugh Dickins
2010-08-14 11:43                   ` Christoph Hellwig
