* 2.5.33-mm5
@ 2002-09-08 6:47 Andrew Morton
2002-09-08 15:11 ` 2.5.33-mm5 Axel Siebenwirth
2002-09-09 14:25 ` 2.5.33-mm5 Steven Cole
0 siblings, 2 replies; 6+ messages in thread
From: Andrew Morton @ 2002-09-08 6:47 UTC (permalink / raw)
To: lkml, linux-mm
URL: http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.33/2.5.33-mm5/
+refill-rate-fix.patch
Fix a problem in refill_inactive_zone() which could soak a lot of CPU.
+sleeping-release_page.patch
Allow a_ops->releasepage() to sleep again, by passing in a non-zero
gfp_mask.
+filemap-integration-fixes.patch
Some fixes to the readv/writev rework.
Plus a lot of stabilisation, tuning and testing of the new VM latency
control code. Including fixing one rarely-occurring infinite loop
which might explain Steve Cole's reported failure.
Some testing with no swap has been performed as well. Works OK,
and some speedups were made in this area (if there's no swap online,
don't bring anon pages onto the inactive list).
It's looking pretty good now - the system is quite responsive under
all heavy writeout workloads. It's still very latent under heavy
swapout load; that is deliberate. It is latent when overloaded by
dirty MAP_SHARED data. We can fix that.
A side-effect of the VM rework is an improvement in many-spindle
pagecache writeout. This is the first kernel which can keep four
queues saturated. I tested six disks - the LEDs never went out.
I'd appreciate it if people could grab this one, be nasty to it
and send a report.
You will probably see increased CPU utilisation by kswapd. I believe
that this is not an efficiency problem - it's due to kswapd doing more
work than it used to, rather than sleeping on request queues all the time.
Also, pdflush appears to be taking more CPU, but profiling shows that it
is not - this may be due to synchronisation with the CPU load accounting.
linus.patch
cset-1.575-to-1.600.txt.gz
scsi_hack.patch
Fix block-highmem for scsi
ext3-htree.patch
Indexed directories for ext3
zone-pages-reporting.patch
Fix the boot-time reporting of each zone's available pages
enospc-recovery-fix.patch
Fix the __block_write_full_page() error path.
fix-faults.patch
Back out the initial work for atomic copy_*_user()
spin-lock-check.patch
spinlock/rwlock checking infrastructure
refill-rate.patch
refill the inactive list more quickly
refill-rate-fix.patch
Don't call shrink_zone with a negative nr_pages
copy_user_atomic.patch
kmap_atomic_reads.patch
Use kmap_atomic() for generic_file_read()
kmap_atomic_writes.patch
Use kmap_atomic() for generic_file_write()
throttling-fix.patch
Fix throttling of heavy write()rs.
sleeping-release_page.patch
Allow a_ops->releasepage() to sleep again
dirty-state-accounting.patch
Make the global dirty memory accounting more accurate
rd-cleanup.patch
Cleanup and fix the ramdisk driver (doesn't work right yet)
discontig-cleanup-1.patch
i386 discontigmem coding cleanups
discontig-cleanup-2.patch
i386 discontigmem cleanups
writeback-thresholds.patch
Downward adjustments to the default dirty memory thresholds
buffer-strip.patch
Limit the consumption of ZONE_NORMAL by buffer_heads
rmap-speedup.patch
rmap pte_chain space and CPU reductions
wli-highpte.patch
Resurrect CONFIG_HIGHPTE - ia32 pagetables in highmem
readv-writev.patch
O_DIRECT support for readv/writev
filemap-integration.patch
Clean up readv/writev
filemap-integration-fixes.patch
More readv/writev fixes
slablru.patch
age slab pages on the LRU
slablru-speedup.patch
slablru optimisations
llzpr.patch
Reduce scheduling latency across zap_page_range
buffermem.patch
Resurrect buffermem accounting
lpp.patch
ia32 huge tlb pages
lpp2.patch
hugetlbpage fixes
ext3-sb.patch
u.ext3_sb -> generic_sbp
oom-fix.patch
Fix an OOM condition on big highmem machines
tlb-cleanup.patch
Clean up the tlb gather code
dump-stack.patch
arch-neutral dump_stack() function
wli-cleanup.patch
random cleanups
madvise-move.patch
move madvise implementation into mm/madvise.c
split-vma.patch
VMA splitting patch
mmap-fixes.patch
mmap.c cleanup and lock ranking fixes
buffer-ops-move.patch
Move submit_bh() and ll_rw_block() into fs/buffer.c
writeback-control.patch
Cleanup and extension of the writeback paths
queue-congestion.patch
Infrastructure for communicating request queue congestion to the VM
nonblocking-ext2-preread.patch
avoid ext2 inode prereads if the queue is congested
nonblocking-pdflush.patch
non-blocking writeback infrastructure, use it for pdflush
nonblocking-vm.patch
Non-blocking page reclaim
* Re: 2.5.33-mm5
2002-09-08 6:47 2.5.33-mm5 Andrew Morton
@ 2002-09-08 15:11 ` Axel Siebenwirth
2002-09-08 16:45 ` 2.5.33-mm5 Andrew Morton
2002-09-09 14:25 ` 2.5.33-mm5 Steven Cole
1 sibling, 1 reply; 6+ messages in thread
From: Axel Siebenwirth @ 2002-09-08 15:11 UTC (permalink / raw)
To: Andrew Morton; +Cc: lkml, linux-mm
Hi Andrew!
On Sat, 07 Sep 2002, Andrew Morton wrote:
> I'd appreciate it if people could grab this one, be nasty to it
> and send a report.
What are your favorite tests to run? I'd like to send you some useful test
results. But which do you like to see?
Best regards,
Axel Siebenwirth
* Re: 2.5.33-mm5
2002-09-08 15:11 ` 2.5.33-mm5 Axel Siebenwirth
@ 2002-09-08 16:45 ` Andrew Morton
2002-09-09 5:39 ` 2.5.33-mm5 Daniel Phillips
0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2002-09-08 16:45 UTC (permalink / raw)
To: Axel Siebenwirth; +Cc: lkml, linux-mm
Axel Siebenwirth wrote:
>
> Hi Andrew!
>
> On Sat, 07 Sep 2002, Andrew Morton wrote:
>
> > I'd appreciate it if people could grab this one, be nasty to it
> > and send a report.
>
> What are your favorite tests to run? I'd like to send you some useful test
> results. But which do you like to see?
I've already run my favourite tests ;) The value of external testing is
in the extra coverage which it gives - different hardware, different
tests. And also different requirements: there may be things which I
think are cool, but which you think suck.
So... The real test is of course "daily use". If it works OK in daily
use for you, and for everyone else then we ship 2.6. By definition.
Of course, on top of daily use it is best to run additional stress
tests to find problems more quickly. Large desktop applications, web
and file servers, databases, etc would be interesting. CD burning,
funny old PIO-mode IDE drives, stress testing with gigabit NICs,
you name it. Coverage.
* Re: 2.5.33-mm5
2002-09-08 16:45 ` 2.5.33-mm5 Andrew Morton
@ 2002-09-09 5:39 ` Daniel Phillips
0 siblings, 0 replies; 6+ messages in thread
From: Daniel Phillips @ 2002-09-09 5:39 UTC (permalink / raw)
To: Andrew Morton, Axel Siebenwirth; +Cc: lkml, linux-mm
On Sunday 08 September 2002 18:45, Andrew Morton wrote:
> Axel Siebenwirth wrote:
> >
> > Hi Andrew!
> >
> > On Sat, 07 Sep 2002, Andrew Morton wrote:
> >
> > > I'd appreciate it if people could grab this one, be nasty to it
> > > and send a report.
> >
> > What are your favorite tests to run? I'd like to send you some useful test
> > results. But which do you like to see?
>
> I've already run my favourite tests ;)
How about some swap-intensive comparisons to 2.4.19?
--
Daniel
* Re: 2.5.33-mm5
2002-09-08 6:47 2.5.33-mm5 Andrew Morton
2002-09-08 15:11 ` 2.5.33-mm5 Axel Siebenwirth
@ 2002-09-09 14:25 ` Steven Cole
1 sibling, 0 replies; 6+ messages in thread
From: Steven Cole @ 2002-09-09 14:25 UTC (permalink / raw)
To: Andrew Morton; +Cc: lkml, linux-mm
On Sun, 2002-09-08 at 00:47, Andrew Morton wrote:
>
> URL: http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.33/2.5.33-mm5/
>
> +refill-rate-fix.patch
>
> Fix a problem in refill_inactive_zone() which could soak a lot of CPU.
>
> +sleeping-release_page.patch
>
> Allow a_ops->releasepage() to sleep again, by passing in a non-zero
> gfp_mask.
>
> +filemap-integration-fixes.patch
>
> Some fixes to the readv/writev rework.
>
> Plus a lot of stabilisation, tuning and testing of the new VM latency
> control code. Including fixing one rarely-occurring infinite loop
> which might explain Steve Cole's reported failure.
This looks pretty good so far. The test box has run up to 112 dbench
clients successfully with 2.5.33-mm5, ext3 data=ordered, which is much
better than before. Thanks.
...and there was much rejoicing.
Steven
* 2.5.33-mm5
@ 2002-09-08 14:09 Paolo Ciarrocchi
0 siblings, 0 replies; 6+ messages in thread
From: Paolo Ciarrocchi @ 2002-09-08 14:09 UTC (permalink / raw)
To: linux-kernel; +Cc: akpm
Hi All/Andrew,
I've just compiled 2.5.33-mm5 (in the test report it is 2.5.33M) and ran LMbench on it.
2.5.33 is preemption ON
2.5.33x is preemption OFF
2.5.33M is -mm5 preemption OFF
cd results && make summary percent 2>/dev/null | more
make[1]: Entering directory `/usr/src/LMbench/results'
L M B E N C H 2 . 0 S U M M A R Y
------------------------------------
Basic system parameters
----------------------------------------------------
Host OS Description Mhz
--------- ------------- ----------------------- ----
frodo Linux 2.4.18 i686-pc-linux-gnu 797
frodo Linux 2.4.19 i686-pc-linux-gnu 797
frodo Linux 2.5.33 i686-pc-linux-gnu 797
frodo Linux 2.5.33x i686-pc-linux-gnu 797
frodo Linux 2.5.33M i686-pc-linux-gnu 797
Processor, Processes - times in microseconds - smaller is better
----------------------------------------------------------------
Host OS Mhz null null open selct sig sig fork exec sh
call I/O stat clos TCP inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ----- ---- ---- ---- ---- ----
frodo Linux 2.4.18 797 0.40 0.56 3.18 3.97 1.00 3.18 115. 1231 13.K
frodo Linux 2.4.19 797 0.40 0.56 3.07 3.88 1.00 3.19 129. 1113 13.K
frodo Linux 2.5.33 797 0.40 0.61 3.78 4.76 1.02 3.37 201. 1458 13.K
frodo Linux 2.5.33x 797 0.40 0.60 3.51 4.38 1.02 3.27 159. 1430 13.K
frodo Linux 2.5.33M 797 0.40 0.59 3.48 4.37 1.01 3.35 170. 1455 14.K
Context switching - times in microseconds - smaller is better
-------------------------------------------------------------
Host OS 2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
--------- ------------- ----- ------ ------ ------ ------ ------- -------
frodo Linux 2.4.18 0.990 4.4200 13.8 6.2700 309.8 58.6 310.5
frodo Linux 2.4.19 0.900 4.2900 15.3 5.9100 309.6 57.7 309.9
frodo Linux 2.5.33 1.620 5.2800 15.3 9.3500 312.7 54.9 312.7
frodo Linux 2.5.33x 1.040 4.3200 17.8 7.6200 312.5 49.9 312.5
frodo Linux 2.5.33M 0.700 4.2700 14.0 8.7200 312.2 42.3 311.9
*Local* Communication latencies in microseconds - smaller is better
-------------------------------------------------------------------
Host OS 2p/0K Pipe AF UDP RPC/ TCP RPC/ TCP
ctxsw UNIX UDP TCP conn
--------- ------------- ----- ----- ---- ----- ----- ----- ----- ----
frodo Linux 2.4.18 0.990 4.437 8.66
frodo Linux 2.4.19 0.900 4.561 7.76
frodo Linux 2.5.33 1.620 6.497 9.11
frodo Linux 2.5.33x 1.040 4.888 8.70
frodo Linux 2.5.33M 0.700 4.564 8.25
File & VM system latencies in microseconds - smaller is better
--------------------------------------------------------------
Host OS 0K File 10K File Mmap Prot Page
Create Delete Create Delete Latency Fault Fault
--------- ------------- ------ ------ ------ ------ ------- ----- -----
frodo Linux 2.4.18 68.9 16.0 185.8 31.6 425.0 0.789 2.00000
frodo Linux 2.4.19 68.9 14.9 186.5 29.8 416.0 0.798 2.00000
frodo Linux 2.5.33 77.8 19.1 211.6 38.3 774.0 0.832 3.00000
frodo Linux 2.5.33x 77.2 18.8 206.7 37.0 769.0 0.823 3.00000
frodo Linux 2.5.33M 73.0 16.8 200.4 35.6 734.0 0.777 3.00000
*Local* Communication bandwidths in MB/s - bigger is better
-----------------------------------------------------------
Host OS Pipe AF TCP File Mmap Bcopy Bcopy Mem Mem
UNIX reread reread (libc) (hand) read write
--------- ------------- ---- ---- ---- ------ ------ ------ ------ ---- -----
frodo Linux 2.4.18 810. 650. 181.7 203.7 101.5 101.4 203. 195.3
frodo Linux 2.4.19 808. 680. 187.2 203.8 101.5 101.4 203. 190.1
frodo Linux 2.5.33 571. 636. 185.6 202.5 100.5 100.4 202. 190.3
frodo Linux 2.5.33x 768. 710. 185.4 202.5 100.5 100.4 202. 189.5
frodo Linux 2.5.33M 764. 707. 185.4 202.4 100.5 100.4 202. 185.8
Memory latencies in nanoseconds - smaller is better
(WARNING - may not be correct, check graphs)
---------------------------------------------------
Host OS Mhz L1 $ L2 $ Main mem Guesses
--------- ------------- ---- ----- ------ -------- -------
frodo Linux 2.4.18 797 3.767 8.7890 158.9
frodo Linux 2.4.19 797 3.767 8.7980 158.9
frodo Linux 2.5.33 797 3.798 8.8660 160.1
frodo Linux 2.5.33x 797 3.796 45.5 160.2
frodo Linux 2.5.33M 797 3.797 8.8660 160.2
make[1]: Leaving directory `/usr/src/LMbench/results'
Ciao,
Paolo