From: Mel Gorman <mgorman@suse.de>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Subject: [MMTests] Stress high-order allocations on ext3
Date: Mon, 23 Jul 2012 22:20:03 +0100
Message-ID: <20120723212003.GF9222@suse.de>
In-Reply-To: <20120629111932.GA14154@suse.de>

Configuration: global-dhp__stress-highalloc-performance-ext3
Result:        http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__stress-highalloc-performance-ext3
Benchmarks:    kernbench vmr-stream sysbench stress-highalloc

Summary
=======

Allocation success rates for huge pages were looking great until 3.4,
when they dropped through the floor.

Benchmark notes
===============

All machines were booted with mem=4096M due to limitations of the test.

This is an old series of benchmarks that stresses anti-fragmentation
and the allocation of huge pages. It is being replaced with other,
more representative series of tests, but it still produces some
interesting results. I tend to use these results as an early warning
system before doing a more detailed series of tests.

Only the results from the stress-highalloc benchmark are actually of
interest; the other benchmarks are there just to age the machine in
terms of fragmentation.

===========================================================
Machine: arnold
Result:  http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__stress-highalloc-performance-ext3/arnold/comparison.html
Arch:    x86
CPUs:    1 socket, 2 threads
Model:   Pentium 4
Disk:    Single Rotary Disk
===========================================================

stress-highalloc
----------------

Generally this is going in the right direction. High-order allocations
are reasonably successful, and where success rates do drop, the drop is
matched by a large reduction in the time taken to complete the test.
Success rates in 3.4 did drop sharply though.

==========================================================
Machine: hydra
Result:  http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__stress-highalloc-performance-ext3/hydra/comparison.html
Arch:    x86-64
CPUs:    1 socket, 4 threads
Model:   AMD Phenom II X4 940
Disk:    Single Rotary Disk
==========================================================

stress-highalloc
----------------

Until 3.4, this was looking good. Unfortunately, in 3.4 there was a
massive drop in success rates. This correlates with the removal of
lumpy reclaim, which compaction indirectly depended upon. It strongly
indicates that either not enough memory is being reclaimed for
compaction to make forward progress, or that compaction is routinely
being disabled due to failed attempts. The success rates at the end of
the test, when the machine is idle, are still high, implying that
anti-fragmentation itself is still working as expected.

==========================================================
Machine: sandy
Result:  http://www.csn.ul.ie/~mel/postings/mmtests-20120424/global-dhp__stress-highalloc-performance-ext3/sandy/comparison.html
Arch:    x86-64
CPUs:    1 socket, 8 threads
Model:   Intel Core i7-2600
Disk:    Single Rotary Disk
==========================================================

stress-highalloc
----------------

Same as hydra: this was looking good until 3.4, when success rates
dropped through the floor.

-- 
Mel Gorman
SUSE Labs
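
P.S. For readers unfamiliar with the driver: the heart of
stress-highalloc is a kernel module that repeatedly attempts high-order
allocations while the system is busy and reports the percentage that
succeed. Below is a minimal sketch of that style of loop. It is
illustrative only, not the actual mmtests module; the module name,
attempt count, allocation order (9, i.e. a 2M huge page on x86-64) and
GFP flags are assumptions made for the example.

	/*
	 * Illustrative sketch only: a loop of the kind of high-order
	 * allocation attempts stress-highalloc makes. Not the real
	 * mmtests driver; all names here are hypothetical.
	 */
	#include <linux/module.h>
	#include <linux/gfp.h>
	#include <linux/mm.h>

	static unsigned int order = 9;		/* assumed: 2M on x86-64 */
	static unsigned int attempts = 1000;	/* assumed attempt count */

	static int __init highorder_stress_init(void)
	{
		unsigned int i, success = 0;

		for (i = 0; i < attempts; i++) {
			/*
			 * GFP_HIGHUSER_MOVABLE approximates a userspace
			 * huge-page request; __GFP_COMP is required for
			 * compound (huge) pages.
			 */
			struct page *page = alloc_pages(GFP_HIGHUSER_MOVABLE |
							__GFP_COMP, order);

			if (page) {
				success++;
				__free_pages(page, order);
			}
		}

		pr_info("high-order success rate: %u/%u\n", success, attempts);
		return 0;
	}

	static void __exit highorder_stress_exit(void)
	{
	}

	module_init(highorder_stress_init);
	module_exit(highorder_stress_exit);
	MODULE_LICENSE("GPL");

Each attempt only succeeds if a naturally-aligned block of 2^order
contiguous free pages is available, which is exactly what reclaim and
compaction must cooperate to produce while the machine is under load.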