From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1946660AbeCBRer (ORCPT );
	Fri, 2 Mar 2018 12:34:47 -0500
Received: from mail-it0-f41.google.com ([209.85.214.41]:51678 "EHLO
	mail-it0-f41.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1428218AbeCBReo (ORCPT );
	Fri, 2 Mar 2018 12:34:44 -0500
X-Google-Smtp-Source: AG47ELuXqWzT/51IJieRgggwkN8/x1lPDCtrnBJ20+66IzJNjVmMszMX60p25YAf4L+QOQptACKYukDlTcwOGXWe7xQ=
MIME-Version: 1.0
In-Reply-To: 
References: <20180228234006.21093-1-logang@deltatee.com>
	<1519876489.4592.3.camel@kernel.crashing.org>
	<1519876569.4592.4.camel@au1.ibm.com>
	<1519936477.4592.23.camel@au1.ibm.com>
	<1519936815.4592.25.camel@au1.ibm.com>
	<20180301205315.GJ19007@ziepe.ca>
	<1519942012.4592.31.camel@au1.ibm.com>
	<1519943658.4592.34.camel@kernel.crashing.org>
	<1520010446.2693.19.camel@hpe.com>
From: Linus Torvalds 
Date: Fri, 2 Mar 2018 09:34:42 -0800
X-Google-Sender-Auth: ZsJbnA7LCk9M9X9G1163_7xBkRE
Message-ID: 
Subject: Re: [PATCH v2 00/10] Copy Offload in NVMe Fabrics with P2P PCI Memory
To: "Kani, Toshi" 
Cc: "benh@kernel.crashing.org" ,
	"linux-kernel@vger.kernel.org" ,
	"alex.williamson@redhat.com" ,
	"linux-block@vger.kernel.org" ,
	"linux-rdma@vger.kernel.org" ,
	"hch@lst.de" ,
	"axboe@kernel.dk" ,
	"linux-nvdimm@lists.01.org" ,
	"jglisse@redhat.com" ,
	"linux-nvme@lists.infradead.org" ,
	"maxg@mellanox.com" ,
	"linux-pci@vger.kernel.org" ,
	"keith.busch@intel.com" ,
	"oliveroh@au1.ibm.com" ,
	"jgg@ziepe.ca" ,
	"bhelgaas@google.com" 
Content-Type: text/plain; charset="UTF-8"
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 2, 2018 at 8:57 AM, Linus Torvalds wrote:
>
> Like the page table caching entries, the memory type range registers
> are really just "secondary information". They don't actually select
> between PCIe and RAM, they just affect the behavior on top of that.
Side note: historically the two may have been almost the same, since the
CPU only had one single unified bus for "memory" (whether that was
memory-mapped PCI or actual RAM). The steering was external.

But even back then you had extended bits to specify things like how the
640k-1M region got remapped - which could depend on not just the
address, but on whether you read or wrote to it. The "lost" 384kB of
RAM could either be remapped at a different address, or could be used
for shadowing the (slow) ROM contents, or whatever.

                Linus