At 09.18 04/11/01 -0800, Linus Torvalds wrote:
>
>On Sun, 4 Nov 2001, Lorenzo Allegrucci wrote:
>>
>> I begin with the last Linus' kernel, three runs and kswapd CPU
>> time appended.
>
>It's interesting how your numbers decrease with more swap-space. That,
>together with the fact that the "more swap space" case also degrades the
>second time around seems to imply that we leave swap-cache pages around
>after they aren't used.
>
>Does "free" after a run has completed imply that there's still lots of
>swap used? We _should_ have gotten rid of it at "free_swap_and_cache()"
>time, but if we missed it..

lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984      16760     239224          0       1092       8008
-/+ buffers/cache:        7660     248324
Swap:       195512          0     195512
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.590u 7.640s 2:31.06 51.7%  0+0k 0+0io 19036pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984       6008     249976          0        100       1096
-/+ buffers/cache:        4812     251172
Swap:       195512       5080     190432

and with more swap..

lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984      13488     242496          0        532       5360
-/+ buffers/cache:        7596     248388
Swap:       390592          0     390592
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.180u 7.650s 2:43.22 47.6%  0+0k 0+0io 21019pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984       6596     249388          0        108       1116
-/+ buffers/cache:        5372     250612
Swap:       390592       5576     385016
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
71.030u 7.040s 2:49.45 46.0%  0+0k 0+0io 22734pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        255984       8808     247176          0        108       1152
-/+ buffers/cache:        7548     248436
Swap:       390592       7948     382644

>What happens if you make the "vm_swap_full()" define in <linux/swap.h> be
>unconditionally defined to "1"?
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        256000      16772     239228          0       1104       8008
-/+ buffers/cache:        7660     248340
Swap:       195512          0     195512
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.530u 7.290s 2:33.26 50.7%  0+0k 0+0io 19689pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        256000       5132     250868          0        116       1144
-/+ buffers/cache:        3872     252128
Swap:       195512       3748     191764

..and now with 400M of swap:

lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        256000      13096     242904          0        504       4904
-/+ buffers/cache:        7688     248312
Swap:       390592          0     390592
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.830u 7.100s 2:29.52 52.1%  0+0k 0+0io 18488pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        256000       4980     251020          0        108       1132
-/+ buffers/cache:        3740     252260
Swap:       390592       3840     386752
lenstra:~/src/qsort> time ./qsbench -n 90000000 -p 1 -s 140175100
70.560u 6.840s 2:28.66 52.0%  0+0k 0+0io 18203pf+0w
lenstra:~/src/qsort> free
             total       used       free     shared    buffers     cached
Mem:        256000       5044     250956          0        108       1112
-/+ buffers/cache:        3824     252176
Swap:       390592       3896     386696

Performance improved and the numbers stabilized.

>That should make us be more aggressive
>about freeing those swap-cache pages, and it would be interesting to see
>if it also stabilizes your numbers.
>
>		Linus

I attach qsbench.c