Date: Wed, 19 Apr 2017 11:14:51 -0600
From: Jason Gunthorpe
To: Logan Gunthorpe
Cc: Benjamin Herrenschmidt, Dan Williams, Bjorn Helgaas, Christoph Hellwig,
    Sagi Grimberg, "James E.J. Bottomley", "Martin K. Petersen", Jens Axboe,
    Steve Wise, Stephen Bates, Max Gurtovoy, Keith Busch,
    linux-pci@vger.kernel.org, linux-scsi, linux-nvme@lists.infradead.org,
    linux-rdma@vger.kernel.org, linux-nvdimm, linux-kernel@vger.kernel.org,
    Jerome Glisse
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Message-ID: <20170419171451.GA10020@obsidianresearch.com>
In-Reply-To: <4899b011-bdfb-18d8-ef00-33a1516216a6@deltatee.com>

On Wed, Apr 19, 2017 at 10:48:51AM -0600, Logan Gunthorpe wrote:
> The pci_enable_p2p_bar function would then just need to call
> devm_memremap_pages with the dma_map callback set to a function that
> does the segment check and the offset calculation.

I don't see a use for the dma_map function pointer at this point.
It doesn't make a lot of sense for the completer of the DMA to provide
a mapping op; the mapping process is *path* specific, not specific to
a completer/initiator.

So, I would suggest more like this:

static inline struct device *get_p2p_src(struct page *page)
{
	struct device *res;
	struct dev_pagemap *pgmap;

	if (!is_zone_device_page(page))
		return NULL;

	pgmap = get_dev_pagemap(page_to_pfn(page), NULL);
	if (!pgmap || pgmap->type != MEMORY_DEVICE_P2P)
		/* For now ZONE_DEVICE memory that is not P2P is assumed
		   to be configured for DMA the same as CPU memory. */
		return ERR_PTR(-EINVAL);

	res = pgmap->dev;
	get_device(res);
	put_dev_pagemap(pgmap);
	return res;
}

dma_addr_t pci_p2p_same_segment(struct device *initiator,
				struct device *completer,
				struct page *page)
{
	if (! PCI initiator & completer)
		return ERROR;
	if (!same segment initiator & completer)
		return ERROR;

	// Translate page directly to the value programmed into the BAR
	return (Completer's PCI BAR base address) +
	       (offset of page within BAR);
}

// dma_sg_map
for (each sgl) {
	struct page *page = sg_page(s);
	struct device *p2p_src = get_p2p_src(page);

	if (IS_ERR(p2p_src))
		// fail dma_sg

	if (p2p_src) {
		bool needs_iommu = false;

		pa = pci_p2p_same_segment(dev, p2p_src, page);
		if (pa == ERROR)
			pa = arch_p2p_cross_segment(dev, p2p_src, page,
						    &needs_iommu);

		put_device(p2p_src);

		if (pa == ERROR)
			// fail

		if (!needs_iommu) {
			// Insert PA directly into the result SGL
			sg++;
			continue;
		}
	} else {
		// CPU memory
		pa = page_to_phys(page);
	}

To me it looks like the code duplication across the iommu stuff comes
from just duplicating the basic iommu algorithm in every driver. To
clean that up I think someone would need to hoist the overall sgl loop
and use more ops callbacks, eg allocate_iommu_range,
assign_page_to_range, dealloc_range, etc.

This is a problem p2p makes worse, but isn't directly causing :\

Jason