From: David Nellans
Date: Tue, 31 Jan 2017 13:14:59 -0600
Subject: Re: [RFC V2 03/12] mm: Change generic FALLBACK zonelist creation process
To: Dave Hansen, John Hubbard, Anshuman Khandual
In-Reply-To: <217e817e-2f91-91a5-1bef-16fb0cbacb63@intel.com>
Message-ID: <9c951c50-3d75-2356-3f21-434ddca63f1b@nvidia.com>

On 01/31/2017 12:04 PM, Dave Hansen wrote:
> On 01/30/2017 11:25 PM, John Hubbard wrote:
>> I also don't like having these policies hard-coded, and your 100x
>> example above helps clarify what can go wrong about it. It would be
>> nicer if, instead, we could better express the "distance" between nodes
>> (bandwidth, latency, relative to sysmem, perhaps), and let the NUMA
>> system figure out the Right Thing To Do.
>>
>> I realize that this is not quite possible with NUMA just yet, but I
>> wonder if that's a reasonable direction to go with this?
>
> In the end, I don't think the kernel can make the "right" decision very
> widely here.
>
> Intel's Xeon Phis have some high-bandwidth memory (MCDRAM) that
> evidently has a higher latency than DRAM. Given a plain malloc(), how
> is the kernel to know that the memory will be used for AVX-512
> instructions that need lots of bandwidth vs. some random data structure
> that's latency-sensitive?
>
> In the end, I think all we can do is keep the kernel's existing default
> of "low latency to the CPU that allocated it", and let apps override
> when that policy doesn't fit them.

I think John's point is that latency might not be the predominant factor
anymore for certain parts of the CPU and GPU world. What if a Phi has
MCDRAM physically attached, but the DDR4 connected via QPI still has lower
total latency (might be a stretch for Phi, but not a stretch for GPUs with
deep sorting memory controllers)? Lowest latency is probably the wrong
choice there. Latency has really been a numeric proxy for physical
proximity, under the assumption that the most closely coupled memory is
the right placement, but HBM/MCDRAM is causing that relationship to break
down in all sorts of interesting ways.
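
(For reference, the application-level override Dave mentions is the existing
mempolicy machinery. Below is a minimal, untested sketch using libnuma; the
HBM_NODE id and buffer size are placeholders and would have to be discovered
on the actual machine, e.g. from the firmware tables or sysfs. It only
illustrates the "default local policy, explicit placement for the
bandwidth-bound buffer" split being discussed, not any proposed kernel change.)

/* build: gcc hbm_sketch.c -o hbm_sketch -lnuma */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HBM_NODE 1          /* placeholder: node id the HBM/MCDRAM shows up as */

int main(void)
{
        if (numa_available() < 0) {
                fprintf(stderr, "NUMA not supported on this system\n");
                return 1;
        }

        size_t len = 64UL << 20;        /* 64 MB, arbitrary */

        /* Bandwidth-bound buffer: place it explicitly on the HBM node,
         * overriding the kernel's default "nearest to allocating CPU" policy. */
        void *hot = numa_alloc_onnode(len, HBM_NODE);

        /* Latency-sensitive structure: leave it to the default local policy. */
        void *meta = malloc(4096);

        if (!hot || !meta)
                return 1;

        memset(hot, 0, len);    /* touch pages so they actually fault in on HBM_NODE */

        numa_free(hot, len);
        free(meta);
        return 0;
}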