Subject: Re: [RFC PATCH v2 06/11] dma-direct: Support PCI P2PDMA pages in
 dma-direct map_sg
To: Logan Gunthorpe, linux-kernel@vger.kernel.org,
 linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
 linux-pci@vger.kernel.org, linux-mm@kvack.org,
 iommu@lists.linux-foundation.org
Cc: Minturn Dave B, John Hubbard, Dave Hansen, Ira Weiny, Matthew Wilcox,
 Christian König, Jason Gunthorpe, Jason Ekstrand, Daniel Vetter,
 Dan Williams, Stephen Bates, Jakowski Andrzej, Christoph Hellwig,
 Xiong Jianxin
References: <20210311233142.7900-1-logang@deltatee.com>
 <20210311233142.7900-7-logang@deltatee.com>
From: Robin Murphy
Message-ID: <215e1472-5294-d20a-a43a-ff6dfe8cd66e@arm.com>
Date: Fri, 12 Mar 2021 15:52:02 +0000
In-Reply-To: <20210311233142.7900-7-logang@deltatee.com>
On 2021-03-11 23:31, Logan Gunthorpe wrote:
> Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
> PCI P2PDMA pages directly without a hack in the callers. This allows
> for heterogeneous SGLs that contain both P2PDMA and regular pages.
>
> SGL segments that contain PCI bus addresses are marked with
> sg_mark_pci_p2pdma() and are ignored when unmapped.
>
> Signed-off-by: Logan Gunthorpe
> ---
>  kernel/dma/direct.c  | 35 ++++++++++++++++++++++++++++++++---
>  kernel/dma/mapping.c | 13 ++++++++++---
>  2 files changed, 42 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 002268262c9a..f326d32062dd 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -13,6 +13,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "direct.h"
>
>  /*
> @@ -387,19 +388,47 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
>  	struct scatterlist *sg;
>  	int i;
>
> -	for_each_sg(sgl, sg, nents, i)
> +	for_each_sg(sgl, sg, nents, i) {
> +		if (sg_is_pci_p2pdma(sg))
> +			continue;
> +
>  		dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
>  				attrs);
> +	}
>  }
>  #endif
>
>  int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
>  		enum dma_data_direction dir, unsigned long attrs)
>  {
> -	int i;
> +	struct dev_pagemap *pgmap = NULL;
> +	int i, map = -1, ret = 0;
>  	struct scatterlist *sg;
> +	u64 bus_off;
>
>  	for_each_sg(sgl, sg, nents, i) {
> +		if (is_pci_p2pdma_page(sg_page(sg))) {
> +			if (sg_page(sg)->pgmap != pgmap) {
> +				pgmap = sg_page(sg)->pgmap;
> +				map = pci_p2pdma_dma_map_type(dev, pgmap);
> +				bus_off = pci_p2pdma_bus_offset(sg_page(sg));
> +			}
> +
> +			if (map < 0) {
> +				sg->dma_address = DMA_MAPPING_ERROR;
> +				ret = -EREMOTEIO;
> +				goto out_unmap;
> +			}
> +
> +			if (map) {
> +				sg->dma_address = sg_phys(sg) + sg->offset -
> +					bus_off;
> +				sg_dma_len(sg) = sg->length;
> +				sg_mark_pci_p2pdma(sg);
> +				continue;
> +			}
> +		}
> +
>  		sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
>  				sg->offset, sg->length, dir, attrs);
>  		if (sg->dma_address == DMA_MAPPING_ERROR)
> @@ -411,7 +440,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
>
>  out_unmap:
>  	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
> -	return 0;
> +	return ret;
>  }
>
>  dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
> diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
> index b6a633679933..adc1a83950be 100644
> --- a/kernel/dma/mapping.c
> +++ b/kernel/dma/mapping.c
> @@ -178,8 +178,15 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
>  EXPORT_SYMBOL(dma_unmap_page_attrs);
>
>  /*
> - * dma_maps_sg_attrs returns 0 on error and > 0 on success.
> - * It should never return a value < 0.
> + * dma_maps_sg_attrs returns 0 on any resource error and > 0 on success.
> + *
> + * If 0 is returned, the mapping can be retried and will succeed once
> + * sufficient resources are available.

That's not a guarantee we can uphold. Retrying forever in the vain hope
that a device might evolve some extra address bits, or a bounce buffer
might magically grow big enough for a gigantic mapping, isn't
necessarily the best idea.
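
To spell out the sort of caller behaviour that comment invites - a
purely hypothetical sketch, not lifted from any real driver:

#include <linux/dma-mapping.h>
#include <linux/sched.h>

/*
 * If the underlying failure is the device's DMA mask, or a bounce
 * buffer that is simply too small for the request, rather than a
 * transient resource shortage, this loop never terminates.
 */
static int submit_with_retry(struct device *dev, struct scatterlist *sgl,
			     int nents, enum dma_data_direction dir)
{
	int ents;

	do {
		ents = dma_map_sg(dev, sgl, nents, dir);
		if (!ents)
			cond_resched();	/* wait for resources that may never appear */
	} while (!ents);

	return ents;
}
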
> + *
> + * If there are P2PDMA pages in the scatterlist then this function may
> + * return -EREMOTEIO to indicate that the pages are not mappable by the
> + * device. In this case, an error should be returned for the IO as it
> + * will never be successfully retried.
>   */
>  int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>  		enum dma_data_direction dir, unsigned long attrs)
> @@ -197,7 +204,7 @@ int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents,
>  		ents = dma_direct_map_sg(dev, sg, nents, dir, attrs);
>  	else
>  		ents = ops->map_sg(dev, sg, nents, dir, attrs);
> -	BUG_ON(ents < 0);
> +

This scares me - I hesitate to imagine the amount of driver/subsystem
code out there that will see nonzero and merrily set off iterating a
negative number of segments, if we open the floodgates of allowing
implementations to return error codes here.

Robin.

>  	debug_dma_map_sg(dev, sg, nents, ents, dir);
>
>  	return ents;
>
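
P.S. To put the BUG_ON worry above in concrete terms, this is roughly
the shape of caller I mean - names invented purely for illustration,
but the pattern of only ever checking for 0 is everywhere:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/errno.h>

struct foo_hw_ring {
	u32 seg_count;		/* unsigned field in the hardware descriptor */
};

struct foo_dev {
	struct device *dev;
	struct foo_hw_ring *ring;
};

static void foo_add_segment(struct foo_dev *fdev, struct scatterlist *sg)
{
	/* program one segment's dma_address/length into the descriptor */
}

static int foo_queue_request(struct foo_dev *fdev, struct scatterlist *sgl,
			     int count)
{
	struct scatterlist *sg;
	int nents, i;

	nents = dma_map_sg(fdev->dev, sgl, count, DMA_TO_DEVICE);
	if (!nents)
		return -ENOMEM;	/* the only failure anyone checks for today */

	/*
	 * A negative return sails straight past that check and is then
	 * treated as success - here it ends up in an unsigned descriptor
	 * field, where it becomes an enormous segment count.
	 */
	fdev->ring->seg_count = nents;
	for_each_sg(sgl, sg, nents, i)
		foo_add_segment(fdev, sg);

	return 0;
}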