From mboxrd@z Thu Jan  1 00:00:00 1970
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Date: Wed, 19 Apr 2017 11:23:04 +1000
Message-ID: <1492564984.25766.126.camel@kernel.crashing.org>
In-Reply-To: <20170418222440.GA27113@obsidianresearch.com>
References: <1492381396.25766.43.camel@kernel.crashing.org>
 <20170418164557.GA7181@obsidianresearch.com>
 <20170418190138.GH7181@obsidianresearch.com>
 <20170418210339.GA24257@obsidianresearch.com>
 <9fc9352f-86fe-3a9e-e372-24b3346b518c@deltatee.com>
 <20170418222440.GA27113@obsidianresearch.com>
To: Jason Gunthorpe, Logan Gunthorpe
Cc: Dan Williams, Bjorn Helgaas, Christoph Hellwig, Sagi Grimberg,
 "James E.J. Bottomley", "Martin K. Petersen", Jens Axboe, Steve Wise,
 Stephen Bates, Max Gurtovoy, Keith Busch, Jerome Glisse,
 linux-pci@vger.kernel.org, linux-scsi, linux-nvme@lists.infradead.org,
 linux-rdma@vger.kernel.org, linux-nvdimm, linux-kernel@vger.kernel.org
X-Mailer: Evolution 3.22.6 (3.22.6-1.fc25)
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, 2017-04-18 at 16:24 -0600, Jason Gunthorpe wrote:
> Basically, all this list processing is a huge overhead compared to
> just putting a helper call in the existing sg iteration loop of the
> actual op.  Particularly if the actual op is a no-op like no-mmu x86
> would use.

Yes, I'm leaning toward that approach too.

The helper itself could hang off the devmap though.
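
Something along these lines, maybe. Rough sketch only: every name
below (struct p2p_pagemap, the ->p2p_map hook, p2p_map_addr,
example_map_sg) is made up for illustration, none of it exists today:

#include <linux/dma-mapping.h>
#include <linux/memremap.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Imagined wrapper the exporting driver embeds its devmap in. */
struct p2p_pagemap {
	struct dev_pagemap pgmap;
	/* Translate a CPU physical address inside the exported BAR
	 * into a bus address the initiating device can DMA to. */
	dma_addr_t (*p2p_map)(struct device *initiator, phys_addr_t phys);
};

static dma_addr_t p2p_map_addr(struct device *dev, struct page *page,
			       phys_addr_t phys)
{
	struct p2p_pagemap *p2p =
		container_of(page->pgmap, struct p2p_pagemap, pgmap);

	return p2p->p2p_map(dev, phys);
}

/* The per-page check then stays inline in the existing map_sg op,
 * with no separate pass over the list: */
static int example_map_sg(struct device *dev, struct scatterlist *sgl,
			  int nents, enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sgl, sg, nents, i) {
		if (is_zone_device_page(sg_page(sg))) {
			/* P2P memory: dispatch via the exporter's devmap. */
			sg->dma_address = p2p_map_addr(dev, sg_page(sg),
						       sg_phys(sg));
		} else {
			/* Normal memory: existing translation. On
			 * something like no-mmu x86 this is just the
			 * identity no-op you mention. */
			sg->dma_address = sg_phys(sg);
		}
		sg_dma_len(sg) = sg->length;
	}
	return nents;
}

That way the hot loop pays one branch per sg entry, and the knowledge
of how to produce a bus address lives with the device that exported
the memory rather than in another layer walking the list again.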
Petersen" , Jens Axboe , Steve Wise , Stephen Bates , Max Gurtovoy , Keith Busch , linux-pci@vger.kernel.org, linux-scsi , linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-nvdimm , "linux-kernel@vger.kernel.org" , Jerome Glisse Date: Wed, 19 Apr 2017 11:23:04 +1000 In-Reply-To: <20170418222440.GA27113@obsidianresearch.com> References: <1492381396.25766.43.camel@kernel.crashing.org> <20170418164557.GA7181@obsidianresearch.com> <20170418190138.GH7181@obsidianresearch.com> <20170418210339.GA24257@obsidianresearch.com> <9fc9352f-86fe-3a9e-e372-24b3346b518c@deltatee.com> <20170418222440.GA27113@obsidianresearch.com> Content-Type: text/plain; charset="UTF-8" X-Mailer: Evolution 3.22.6 (3.22.6-1.fc25) Mime-Version: 1.0 Content-Transfer-Encoding: 8bit Sender: linux-kernel-owner@vger.kernel.org List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, 2017-04-18 at 16:24 -0600, Jason Gunthorpe wrote: > Basically, all this list processing is a huge overhead compared to > just putting a helper call in the existing sg iteration loop of the > actual op.  Particularly if the actual op is a no-op like no-mmu x86 > would use. Yes, I'm leaning toward that approach too. The helper itself could hang off the devmap though. > Since dma mapping is a performance path we must be careful not to > create intrinsic inefficiencies with otherwise nice layering :) > > Jason From mboxrd@z Thu Jan 1 00:00:00 1970 From: benh@kernel.crashing.org (Benjamin Herrenschmidt) Date: Wed, 19 Apr 2017 11:23:04 +1000 Subject: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory In-Reply-To: <20170418222440.GA27113@obsidianresearch.com> References: <1492381396.25766.43.camel@kernel.crashing.org> <20170418164557.GA7181@obsidianresearch.com> <20170418190138.GH7181@obsidianresearch.com> <20170418210339.GA24257@obsidianresearch.com> <9fc9352f-86fe-3a9e-e372-24b3346b518c@deltatee.com> <20170418222440.GA27113@obsidianresearch.com> Message-ID: <1492564984.25766.126.camel@kernel.crashing.org> On Tue, 2017-04-18@16:24 -0600, Jason Gunthorpe wrote: > Basically, all this list processing is a huge overhead compared to > just putting a helper call in the existing sg iteration loop of the > actual op.? Particularly if the actual op is a no-op like no-mmu x86 > would use. Yes, I'm leaning toward that approach too. The helper itself could hang off the devmap though. > Since dma mapping is a performance path we must be careful not to > create intrinsic inefficiencies with otherwise nice layering :) > > Jason