Subject: Re: [RFC V2 03/12] mm: Change generic FALLBACK zonelist creation process
From: Dave Hansen
To: John Hubbard, Anshuman Khandual, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Cc: mhocko@suse.com, vbabka@suse.cz, mgorman@suse.de, minchan@kernel.org,
    aneesh.kumar@linux.vnet.ibm.com, bsingharora@gmail.com,
    srikar@linux.vnet.ibm.com, haren@linux.vnet.ibm.com,
    jglisse@redhat.com, dan.j.williams@intel.com
Date: Tue, 31 Jan 2017 10:04:11 -0800
Message-ID: <217e817e-2f91-91a5-1bef-16fb0cbacb63@intel.com>
In-Reply-To: <79bfd849-8e6c-2f6d-0acf-4256a4137526@nvidia.com>

On 01/30/2017 11:25 PM, John Hubbard wrote:
> I also don't like having these policies hard-coded, and your 100x
> example above helps clarify what can go wrong with it. It would be
> nicer if, instead, we could better express the "distance" between nodes
> (bandwidth and latency relative to sysmem, perhaps) and let the NUMA
> system figure out the Right Thing To Do.
>
> I realize that this is not quite possible with NUMA just yet, but I
> wonder if that's a reasonable direction to go with this?

I don't think the kernel can make the "right" decision in any general
way here. Intel's Xeon Phis have some high-bandwidth memory (MCDRAM)
that evidently has a higher latency than DRAM. Given a plain malloc(),
how is the kernel to know whether the memory will be used for AVX-512
instructions that need lots of bandwidth or for some random data
structure that is latency-sensitive?

In the end, I think all we can do is keep the kernel's existing default
of "low latency to the CPU that allocated it" and let apps override it
when that policy doesn't fit them.
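
To make the "apps override" part concrete: with libnuma, the override
is only a few lines. A rough sketch (untested; the node number is made
up, and a real app would have to discover which node is the MCDRAM one
rather than hard-coding it):

/*
 * Sketch only: place one bandwidth-bound buffer on a known
 * high-bandwidth node and leave everything else on the kernel's
 * default "local" policy.  Link with -lnuma.
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define HBM_NODE 1	/* made-up node number standing in for MCDRAM */

int main(void)
{
	size_t len = 64UL << 20;	/* 64MB of AVX-512 input data */
	double *hot;

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support\n");
		return 1;
	}

	/* Explicit placement for the buffer the app knows is hot... */
	hot = numa_alloc_onnode(len, HBM_NODE);
	if (!hot)
		return 1;
	memset(hot, 0, len);	/* fault the pages in on HBM_NODE */

	/* ...while plain malloc()s keep the local-latency default. */

	numa_free(hot, len);
	return 0;
}

And to John's point: the only node "distance" the kernel exports today
is the single SLIT number (numa_distance() in libnuma), which can't
express the bandwidth-vs-latency tradeoff above. The app really is the
only one with enough information here.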