Message-ID: <5550BA9D.3030104@redhat.com>
Date: Mon, 11 May 2015 10:20:13 -0400
From: Rik van Riel
To: dedekind1@gmail.com
CC: linux-kernel@vger.kernel.org, mgorman@suse.de, peterz@infradead.org,
    jhladky@redhat.com
Subject: Re: [PATCH] numa,sched: only consider less busy nodes as numa
    balancing destination
References: <1430908530.7444.145.camel@sauron.fi.intel.com>
    <20150506114128.0c846a37@cuia.bos.redhat.com>
    <1431090801.1418.87.camel@sauron.fi.intel.com>
    <554D1681.7040902@redhat.com>
    <1431342675.1418.148.camel@sauron.fi.intel.com>
In-Reply-To: <1431342675.1418.148.camel@sauron.fi.intel.com>

On 05/11/2015 07:11 AM, Artem Bityutskiy wrote:
> On Fri, 2015-05-08 at 16:03 -0400, Rik van Riel wrote:
>> This works well when dealing with tasks that are constantly
>> running, but it fails catastrophically when dealing with tasks
>> that go to sleep, wake back up, go back to sleep, wake back
>> up, and generally mess up the load statistics that the NUMA
>> balancing code uses in a random way.
>
> I believe sleeping happens a lot in this workload: the processes
> do a lot of network I/O, file I/O, and IPC.
>
> Would you please expand on this a bit more - why would this
> scenario "mess up the load statistics"?
>
>> If the normal scheduler load balancer is moving tasks in the
>> opposite direction from the NUMA balancer, things will
>> not converge, and tasks will have worse memory locality
>> than not doing NUMA balancing at all.
>
> Are the regular and NUMA balancers independent?
>
> Are there mechanisms to detect ping-pong situations? I'd like to
> verify your theory, and these kinds of mechanisms would be helpful.
>
>> Currently the load balancer has a preference for moving
>> tasks to their preferred nodes (NUMA_FAVOUR_HIGHER, true),
>> but there is no resistance to moving tasks away from their
>> preferred nodes (NUMA_RESIST_LOWER, false). That setting
>> was arrived at after a fair amount of experimenting, and
>> is probably correct.
>
> I guess I can try setting NUMA_RESIST_LOWER to true and see what
> happens. But first I probably need to confirm that your theory
> (the balancers playing ping-pong) is correct - any hints on how
> I would do this?

Funny thing: for your workload, the kernel only keeps statistics
on forced migrations when NUMA_RESIST_LOWER is enabled. The reason
is that the tasks on your system probably sleep too long to be
considered cache hot by the task_hot() test most of the time.
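Roughly, task_hot() boils down to the following (paraphrasing from
memory, with the cache-hot-buddy special cases elided - treat this
as a sketch, not a verbatim quote of kernel/sched/fair.c):

        /*
         * Sketch of task_hot(): a task only counts as cache hot if
         * it last ran within sysctl_sched_migration_cost ns (500
         * usec by default). A task that just slept for milliseconds
         * will fail this test essentially every time.
         */
        static int task_hot(struct task_struct *p, struct lb_env *env)
        {
                s64 delta;

                if (sysctl_sched_migration_cost == -1)
                        return 1;
                if (sysctl_sched_migration_cost == 0)
                        return 0;

                delta = rq_clock_task(env->src_rq) - p->se.exec_start;

                return delta < (s64)sysctl_sched_migration_cost;
        }

The migration decision that consumes this is: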
        /*
         * Aggressive migration if:
         * 1) destination numa is preferred
         * 2) task is cache cold, or
         * 3) too many balance attempts have failed.
         */
        tsk_cache_hot = task_hot(p, env);
        if (!tsk_cache_hot)
                tsk_cache_hot = migrate_degrades_locality(p, env);

        if (migrate_improves_locality(p, env) || !tsk_cache_hot ||
            env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
                if (tsk_cache_hot) {
                        schedstat_inc(env->sd, lb_hot_gained[env->idle]);
                        schedstat_inc(p, se.statistics.nr_forced_migrations);
                }
                return 1;
        }

        schedstat_inc(p, se.statistics.nr_failed_migrations_hot);
        return 0;

I am also not sure where the se.statistics.nr_forced_migrations
statistic is exported.
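Thinking about it some more, I believe the per-task schedstats,
including se.statistics.nr_forced_migrations, show up in
/proc/<pid>/sched on kernels built with CONFIG_SCHED_DEBUG and
CONFIG_SCHEDSTATS - worth double-checking against
kernel/sched/debug.c. A quick illustrative check from userspace
(just a sketch):

        /*
         * Print the migration-related schedstat lines from
         * /proc/<pid>/sched (defaults to the current process).
         * Assumes CONFIG_SCHED_DEBUG + CONFIG_SCHEDSTATS.
         */
        #include <stdio.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
                char path[64], line[256];
                FILE *f;

                snprintf(path, sizeof(path), "/proc/%s/sched",
                         argc > 1 ? argv[1] : "self");
                f = fopen(path, "r");
                if (!f) {
                        perror(path);
                        return 1;
                }
                while (fgets(line, sizeof(line), f)) {
                        if (strstr(line, "nr_forced_migrations") ||
                            strstr(line, "nr_failed_migrations"))
                                fputs(line, stdout);
                }
                fclose(f);
                return 0;
        }

A plain "grep migrations /proc/<pid>/sched" does the same thing
from a shell.

-- 
All rights reversed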