From: Dave Hansen <dave.hansen@intel.com>
To: Michal Hocko <mhocko@kernel.org>, Tim Chen <tim.c.chen@linux.intel.com>
Cc: Aaron Lu <aaron.lu@intel.com>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Tim Chen <tim.c.chen@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Ying Huang <ying.huang@intel.com>
Subject: Re: [PATCH v2 0/5] mm: support parallel free of memory
Date: Tue, 21 Mar 2017 07:54:37 -0700
Message-ID: <ae4e3597-f664-e5c4-97fb-e07f230d5017@intel.com>
In-Reply-To: <20170316090732.GF30501@dhcp22.suse.cz>

On 03/16/2017 02:07 AM, Michal Hocko wrote:
> On Wed 15-03-17 14:38:34, Tim Chen wrote:
>> max_active:   time
>>  1            8.9s   ±0.5%
>>  2            5.65s  ±5.5%
>>  4            4.84s  ±0.16%
>>  8            4.77s  ±0.97%
>> 16            4.85s  ±0.77%
>> 32            6.21s  ±0.46%
>
> OK, but this will depend on the HW, right? Also now that I am looking at
> those numbers more closely. This was about unmapping 320GB area and
> using 4 times more CPUs you managed to half the run time. Is this really
> worth it? Sure if those CPUs were idle then this is a clear win but if
> the system is moderately busy then it doesn't look like a clear win to
> me.

This still suffers from zone lock contention.  It scales much better if
we are freeing memory from more than one zone.  We would expect any
other generic page allocator scalability improvements to really help
here, too.

Aaron, could you make sure that the memory being freed is coming from
multiple NUMA nodes?  It might also be interesting to boot with a fake
NUMA configuration with a *bunch* of nodes to see what the best case
looks like when zone lock contention isn't even in play, where each
worker would be working on its own zone.

>>> Moreover, and this is a more generic question, is this functionality
>>> useful in general purpose workloads?
>>
>> If we are running consecutive batch jobs, this optimization
>> should help start the next job sooner.
> Is this sufficient justification to add a potentially hard to tune
> optimization that can influence other workloads on the machine?

The guys for whom a reboot is faster than a single exit() certainly
think so. :)

I have the feeling that we can find a pretty sane large process size to
be the floor where this feature gets activated.  I doubt the systems
that really care about noise from other workloads are often doing
multi-gigabyte mapping teardowns.