From mboxrd@z Thu Jan 1 00:00:00 1970
From: Logan Gunthorpe <logang@deltatee.com>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Date: Thu, 13 Apr 2017 15:22:06 -0600
Message-ID: <81888a1e-eb0d-cbbc-dc66-0a09c32e4ea2@deltatee.com>
In-Reply-To: <1492034124.7236.77.camel@kernel.crashing.org>
References: <1490911959-5146-1-git-send-email-logang@deltatee.com>
 <1491974532.7236.43.camel@kernel.crashing.org>
 <5ac22496-56ec-025d-f153-140001d2a7f9@deltatee.com>
 <1492034124.7236.77.camel@kernel.crashing.org>
To: Benjamin Herrenschmidt, Christoph Hellwig, Sagi Grimberg,
 "James E.J. Bottomley", "Martin K. Petersen", Jens Axboe, Steve Wise,
 Stephen Bates, Max Gurtovoy, Dan Williams, Keith Busch, Jason Gunthorpe
Cc: linux-pci@vger.kernel.org, linux-scsi@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
 linux-nvdimm@ml01.01.org, linux-kernel@vger.kernel.org, Jerome Glisse
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
List-Id: linux-nvdimm@lists.01.org

On 12/04/17 03:55 PM, Benjamin Herrenschmidt wrote:
> Look at pcibios_resource_to_bus() and pcibios_bus_to_resource(). They
> will perform the conversion between the struct resource content (CPU
> physical address) and the actual PCI bus side address.

Ah, thanks for the tip! On my system this translation returns the same
address, so it was not necessary. And, yes, that means the translation
would have to find its way into the dma mapping routine somehow (rough
sketch below). This means we'll eventually need a way to look up the
p2pmem device from the struct page, which in turn means we will likely
need a new flag bit in struct page, or something along those lines.

The big difficulty I see is testing. Do you know on which architectures,
or in which circumstances, these translations are actually used?
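To make that a bit more concrete, below is the rough shape of the helper
I'm picturing on the mapping side. It's only an untested sketch: the
struct p2pmem_dev layout, p2pmem_find_by_page() and the helper name are
placeholders for whatever page-flag based lookup we end up with, while
pcibios_resource_to_bus(), page_to_phys() and struct pci_bus_region are
the existing interfaces you pointed at.

#include <linux/pci.h>
#include <linux/mm.h>
#include <linux/io.h>

/*
 * Hypothetical placeholders: the p2pmem provider struct and the
 * page -> provider lookup (which would likely key off a new struct
 * page flag) don't exist yet.
 */
struct p2pmem_dev {
	struct pci_dev *pdev;	/* device whose BAR backs the memory */
};
struct p2pmem_dev *p2pmem_find_by_page(struct page *page);

/*
 * Translate a p2pmem page into the PCI bus address a peer device must
 * DMA to.  On my system the bus address comes back identical to the
 * CPU physical address, but that clearly isn't true everywhere.
 */
static pci_bus_addr_t p2pmem_page_bus_addr(struct page *page)
{
	struct p2pmem_dev *p = p2pmem_find_by_page(page);
	phys_addr_t phys = page_to_phys(page);
	struct resource res = {
		.start = phys,
		.end   = phys + PAGE_SIZE - 1,
		.flags = IORESOURCE_MEM,
	};
	struct pci_bus_region region;

	/* CPU physical range -> PCI bus address range for this device. */
	pcibios_resource_to_bus(p->pdev->bus, &region, &res);

	return region.start;
}

The open question is still where something like this hooks into the arch
dma mapping routines without them having to know about p2pmem
specifically.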
Petersen" , Jens Axboe , Steve Wise , Stephen Bates , Max Gurtovoy , Dan Williams , Keith Busch , Jason Gunthorpe References: <1490911959-5146-1-git-send-email-logang@deltatee.com> <1491974532.7236.43.camel@kernel.crashing.org> <5ac22496-56ec-025d-f153-140001d2a7f9@deltatee.com> <1492034124.7236.77.camel@kernel.crashing.org> Cc: linux-pci@vger.kernel.org, linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-nvdimm@ml01.01.org, linux-kernel@vger.kernel.org, Jerome Glisse From: Logan Gunthorpe Message-ID: <81888a1e-eb0d-cbbc-dc66-0a09c32e4ea2@deltatee.com> Date: Thu, 13 Apr 2017 15:22:06 -0600 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.6.0 MIME-Version: 1.0 In-Reply-To: <1492034124.7236.77.camel@kernel.crashing.org> Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 7bit X-SA-Exim-Connect-IP: 172.16.1.111 X-SA-Exim-Rcpt-To: jglisse@redhat.com, linux-kernel@vger.kernel.org, linux-nvdimm@ml01.01.org, linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org, linux-scsi@vger.kernel.org, linux-pci@vger.kernel.org, jgunthorpe@obsidianresearch.com, keith.busch@intel.com, dan.j.williams@intel.com, maxg@mellanox.com, sbates@raithlin.com, swise@opengridcomputing.com, axboe@kernel.dk, martin.petersen@oracle.com, jejb@linux.vnet.ibm.com, sagi@grimberg.me, hch@lst.de, benh@kernel.crashing.org X-SA-Exim-Mail-From: logang@deltatee.com Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory X-SA-Exim-Version: 4.2.1 (built Mon, 26 Dec 2011 16:24:06 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-kernel-owner@vger.kernel.org List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 12/04/17 03:55 PM, Benjamin Herrenschmidt wrote: > Look at pcibios_resource_to_bus() and pcibios_bus_to_resource(). They > will perform the conversion between the struct resource content (CPU > physical address) and the actual PCI bus side address. Ah, thanks for the tip! On my system, this translation returns the same address so it was not necessary. And, yes, that means this would have to find its way into the dma mapping routine somehow. This means we'll eventually need a way to look-up the p2pmem device from the struct page. Which means we will likely need a new flag bit in the struct page or something. The big difficulty I see is testing. Do you know what architectures or in what circumstances are these translations used? > When behind the same switch you need to use PCI addresses. If one tries > later to do P2P between host bridges (via the CPU fabric) things get > more complex and one will have to use either CPU addresses or something > else alltogether (probably would have to teach the arch DMA mapping > routines to work with those struct pages you create and return the > right thing). Probably for starters we'd want to explicitly deny cases between host bridges and add that later if someone wants to do the testing. 
Thanks,

Logan