From: Logan Gunthorpe <logang@deltatee.com>
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-pci@vger.kernel.org, linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: Stephen Bates <sbates@raithlin.com>, Christoph Hellwig <hch@lst.de>, Dan Williams <dan.j.williams@intel.com>, Jason Gunthorpe <jgg@ziepe.ca>, Christian König <christian.koenig@amd.com>, Ira Weiny <iweiny@intel.com>, John Hubbard <jhubbard@nvidia.com>, Don Dutile <ddutile@redhat.com>, Matthew Wilcox <willy@infradead.org>, Daniel Vetter <daniel.vetter@ffwll.ch>, Jakowski Andrzej <andrzej.jakowski@intel.com>, Minturn Dave B <dave.b.minturn@intel.com>, Jason Ekstrand <jason@jlekstrand.net>, Dave Hansen <dave.hansen@linux.intel.com>, Xiong Jianxin <jianxin.xiong@intel.com>, Logan Gunthorpe <logang@deltatee.com>
Date: Thu, 11 Mar 2021 16:31:38 -0700
Message-Id: <20210311233142.7900-9-logang@deltatee.com>
In-Reply-To: <20210311233142.7900-1-logang@deltatee.com>
References: <20210311233142.7900-1-logang@deltatee.com>
Subject: [RFC PATCH v2 08/11] iommu/dma: Support PCI P2PDMA pages in dma-iommu map_sg

When a PCI P2PDMA page is seen, set the IOVA length of the segment to
zero so that it is not mapped into the IOVA. Then, in __finalise_sg(),
apply the appropriate bus address to the segment. The IOVA is not
created if the scatterlist only consists of P2PDMA pages.

Similar to dma-direct, the sg_mark_pci_p2pdma() flag is used to indicate
bus address segments. On unmap, P2PDMA segments are skipped over when
determining the start and end IOVA addresses.

With this change, the flags variable in the dma_map_ops is set to
DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
---
 drivers/iommu/dma-iommu.c | 63 ++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index af765c813cc8..c0821e9051a9 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -20,6 +20,7 @@
 #include <linux/mm.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pci-p2pdma.h>
 #include <linux/swiotlb.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
@@ -846,7 +847,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
  * segment's start address to avoid concatenating across one.
  */
 static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
-		dma_addr_t dma_addr)
+		dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct scatterlist *s, *cur = sg;
 	unsigned long seg_mask = dma_get_seg_boundary(dev);
@@ -864,6 +865,20 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
 		sg_dma_address(s) = DMA_MAPPING_ERROR;
 		sg_dma_len(s) = 0;
 
+		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {
+			if (i > 0)
+				cur = sg_next(cur);
+
+			sg_dma_address(cur) = sg_phys(s) + s->offset -
+				pci_p2pdma_bus_offset(sg_page(s));
+			sg_dma_len(cur) = s->length;
+			sg_mark_pci_p2pdma(cur);
+
+			count++;
+			cur_len = 0;
+			continue;
+		}
+
 		/*
 		 * Now fill in the real DMA data. If...
 		 * - there is a valid output segment to append to
@@ -960,11 +975,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	struct scatterlist *s, *prev = NULL;
+	struct dev_pagemap *pgmap = NULL;
 	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
 	dma_addr_t iova;
 	size_t iova_len = 0;
 	unsigned long mask = dma_get_seg_boundary(dev);
-	int i;
+	int i, map = -1, ret = 0;
 
 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
 	    iommu_deferred_attach(dev, domain))
@@ -993,6 +1009,23 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		s_length = iova_align(iovad, s_length + s_iova_off);
 		s->length = s_length;
 
+		if (is_pci_p2pdma_page(sg_page(s))) {
+			if (sg_page(s)->pgmap != pgmap) {
+				pgmap = sg_page(s)->pgmap;
+				map = pci_p2pdma_dma_map_type(dev, pgmap);
+			}
+
+			if (map < 0) {
+				ret = -EREMOTEIO;
+				goto out_restore_sg;
+			}
+
+			if (map) {
+				s->length = 0;
+				continue;
+			}
+		}
+
 		/*
 		 * Due to the alignment of our single IOVA allocation, we can
 		 * depend on these assumptions about the segment boundary mask:
@@ -1015,6 +1048,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		prev = s;
 	}
 
+	if (!iova_len)
+		return __finalise_sg(dev, sg, nents, 0, attrs);
+
 	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
 	if (!iova)
 		goto out_restore_sg;
@@ -1026,19 +1062,19 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	if (iommu_map_sg_atomic(domain, iova, sg, nents, prot) < iova_len)
 		goto out_free_iova;
 
-	return __finalise_sg(dev, sg, nents, iova);
+	return __finalise_sg(dev, sg, nents, iova, attrs);
 
 out_free_iova:
 	iommu_dma_free_iova(cookie, iova, iova_len, NULL);
 out_restore_sg:
 	__invalidate_sg(sg, nents);
-	return 0;
+	return ret;
 }
 
 static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
-	dma_addr_t start, end;
+	dma_addr_t end, start = DMA_MAPPING_ERROR;
 	struct scatterlist *tmp;
 	int i;
 
@@ -1054,14 +1090,20 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 	 * The scatterlist segments are mapped into a single
 	 * contiguous IOVA allocation, so this is incredibly easy.
 	 */
-	start = sg_dma_address(sg);
-	for_each_sg(sg_next(sg), tmp, nents - 1, i) {
+	for_each_sg(sg, tmp, nents, i) {
+		if (sg_is_pci_p2pdma(tmp))
+			continue;
+
 		if (sg_dma_len(tmp) == 0)
 			break;
-		sg = tmp;
+
+		if (start == DMA_MAPPING_ERROR)
+			start = sg_dma_address(tmp);
+
+		end = sg_dma_address(tmp) + sg_dma_len(tmp);
 	}
-	end = sg_dma_address(sg) + sg_dma_len(sg);
-	__iommu_dma_unmap(dev, start, end - start);
+
+	if (start != DMA_MAPPING_ERROR)
+		__iommu_dma_unmap(dev, start, end - start);
 }
 
 static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
@@ -1254,6 +1296,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
 }
 
 static const struct dma_map_ops iommu_dma_ops = {
+	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
 	.alloc			= iommu_dma_alloc,
 	.free			= iommu_dma_free,
 	.alloc_pages		= dma_common_alloc_pages,
-- 
2.20.1