From: Robin Murphy <robin.murphy@arm.com>
To: Logan Gunthorpe <logang@deltatee.com>,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-mm@kvack.org, iommu@lists.linux-foundation.org
Cc: "Minturn Dave B" <dave.b.minturn@intel.com>,
	"Martin Oliveira" <martin.oliveira@eideticom.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Jason Gunthorpe" <jgg@nvidia.com>,
	"John Hubbard" <jhubbard@nvidia.com>,
	"Dave Hansen" <dave.hansen@linux.intel.com>,
	"Matthew Wilcox" <willy@infradead.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Chaitanya Kulkarni" <ckulkarnilinux@gmail.com>,
	"Jason Ekstrand" <jason@jlekstrand.net>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Bjorn Helgaas" <helgaas@kernel.org>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Stephen Bates" <sbates@raithlin.com>,
	"Ira Weiny" <ira.weiny@intel.com>,
	"Christoph Hellwig" <hch@lst.de>,
	"Xiong Jianxin" <jianxin.xiong@intel.com>
Subject: Re: [PATCH v7 08/21] iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
Date: Wed, 29 Jun 2022 13:07:42 +0100
Message-ID: <feecc6fe-a16e-11f2-33c8-3de7c96b9ad5@arm.com>
In-Reply-To: <20220615161233.17527-9-logang@deltatee.com>

On 2022-06-15 17:12, Logan Gunthorpe wrote:
> When a PCI P2PDMA page is seen, set the IOVA length of the segment
> to zero so that it is not mapped into the IOVA. Then, in finalise_sg(),
> apply the appropriate bus address to the segment. The IOVA is not
> created if the scatterlist only consists of P2PDMA pages.
> 
> A P2PDMA page may have three possible outcomes when being mapped:
>    1) If the data path between the two devices doesn't go through
>       the root port, then it should be mapped with a PCI bus address.
>    2) If the data path goes through the host bridge, it should be mapped
>       normally with an IOMMU IOVA.
>    3) It is not possible for the two devices to communicate and thus
>       the mapping operation should fail (and it will return -EREMOTEIO).
> 
> Similar to dma-direct, sg_dma_mark_bus_address() is used to
> indicate bus address segments. On unmap, P2PDMA segments are skipped
> over when determining the start and end IOVA addresses.
> 
> With this change, the flags variable in the dma_map_ops is set to
> DMA_F_PCI_P2PDMA_SUPPORTED to indicate support for P2PDMA pages.
> 
> Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/iommu/dma-iommu.c | 68 +++++++++++++++++++++++++++++++++++----
>   1 file changed, 61 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f90251572a5d..b01ca0c6a7ab 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -21,6 +21,7 @@
>   #include <linux/iova.h>
>   #include <linux/irq.h>
>   #include <linux/list_sort.h>
> +#include <linux/memremap.h>
>   #include <linux/mm.h>
>   #include <linux/mutex.h>
>   #include <linux/pci.h>
> @@ -1062,6 +1063,16 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
>   		sg_dma_address(s) = DMA_MAPPING_ERROR;
>   		sg_dma_len(s) = 0;
>   
> +		if (is_pci_p2pdma_page(sg_page(s)) && !s_iova_len) {

Logically, should we not be able to use sg_is_dma_bus_address() here? I 
think it should be feasible, and simpler, to prepare the p2p segments 
up-front, such that at this point all we need to do is restore the 
original length (if even that, see below).
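
I.e. if the map pass fills in the bus address and marks the segment
up-front, something like the below ought to suffice here (rough
sketch, untested; "s_dma_addr" is a new local saving sg_dma_address(s)
before the lines above overwrite it, s_length already exists):

	if (sg_is_dma_bus_address(s)) {
		if (i > 0)
			cur = sg_next(cur);
		/*
		 * The bus address was already set when the segment
		 * was marked; just move the mark to the output
		 * segment, carry the address over and restore the
		 * original length.
		 */
		sg_dma_unmark_bus_address(s);
		sg_dma_address(cur) = s_dma_addr;
		sg_dma_len(cur) = s_length;
		sg_dma_mark_bus_address(cur);
		count++;
		cur_len = 0;
		continue;
	}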

> +			if (i > 0)
> +				cur = sg_next(cur);
> +
> +			pci_p2pdma_map_bus_segment(s, cur);
> +			count++;
> +			cur_len = 0;
> +			continue;
> +		}
> +
>   		/*
>   		 * Now fill in the real DMA data. If...
>   		 * - there is a valid output segment to append to
> @@ -1158,6 +1169,8 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   	struct iova_domain *iovad = &cookie->iovad;
>   	struct scatterlist *s, *prev = NULL;
>   	int prot = dma_info_to_prot(dir, dev_is_dma_coherent(dev), attrs);
> +	struct dev_pagemap *pgmap = NULL;
> +	enum pci_p2pdma_map_type map_type;
>   	dma_addr_t iova;
>   	size_t iova_len = 0;
>   	unsigned long mask = dma_get_seg_boundary(dev);
> @@ -1193,6 +1206,35 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		s_length = iova_align(iovad, s_length + s_iova_off);
>   		s->length = s_length;
>   
> +		if (is_pci_p2pdma_page(sg_page(s))) {
> +			if (sg_page(s)->pgmap != pgmap) {
> +				pgmap = sg_page(s)->pgmap;
> +				map_type = pci_p2pdma_map_type(pgmap, dev);
> +			}

There's a definite code smell here, but per above and below I think we 
*should* actually call the new helper instead of copy-pasting half of it.
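
I.e. roughly the below (untested sketch, assuming the
pci_p2pdma_map_segment() helper from patch 04, which caches the
pgmap lookup in its state argument the same way dma-direct uses it):

	struct pci_p2pdma_map_state p2pdma_state = {};
	...
	if (is_pci_p2pdma_page(sg_page(s))) {
		switch (pci_p2pdma_map_segment(&p2pdma_state, dev, s)) {
		case PCI_P2PDMA_MAP_BUS_ADDR:
			/*
			 * The helper has filled in the bus address and
			 * marked the segment; nothing to add to the
			 * IOVA length, and iommu_map_sg() would skip
			 * it per below.
			 */
			continue;
		case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
			/* Mapped with a regular IOVA below */
			break;
		default:
			ret = -EREMOTEIO;
			goto out_restore_sg;
		}
	}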

> +
> +			switch (map_type) {
> +			case PCI_P2PDMA_MAP_BUS_ADDR:
> +				/*
> +				 * A zero length will be ignored by
> +				 * iommu_map_sg() and then can be detected

If that is required behaviour then it needs an explicit check in 
iommu_map_sg() to guarantee (and document) it. It's only by chance that 
__iommu_map() happens to return success for size == 0 *if* all the other 
arguments still line up, which is a far cry from a safe no-op.

However, rather than play yet more silly tricks, I think it would make 
even more sense to make iommu_map_sg() properly aware and able to skip 
direct p2p segments on its own. Once it becomes normal to pass mixed 
scatterlists around, it's only a matter of time until one ends up being 
handed to a driver which manages its own IOMMU domain, and then what?
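
E.g. something like this in __iommu_map_sg() (untested sketch based
on its current accumulation loop):

	while (i <= nents) {
		phys_addr_t s_phys = sg_phys(sg);

		if (len && s_phys != start + len) {
			/* Flush out the run accumulated so far */
			ret = __iommu_map(domain, iova + mapped, start,
					  len, prot, gfp);
			if (ret)
				goto out_err;

			mapped += len;
			len = 0;
		}

		/*
		 * A marked segment already carries a PCI bus address,
		 * so there is nothing to map into the domain for it.
		 */
		if (sg_is_dma_bus_address(sg))
			goto next;

		if (len) {
			len += sg->length;
		} else {
			len = sg->length;
			start = s_phys;
		}

next:
		if (++i < nents)
			sg = sg_next(sg);
	}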

> +				 * in __finalise_sg() to actually map the
> +				 * bus address.
> +				 */
> +				s->length = 0;
> +				continue;
> +			case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> +				/*
> +				 * Mapping through host bridge should be
> +				 * mapped with regular IOVAs, thus we
> +				 * do nothing here and continue below.
> +				 */
> +				break;
> +			default:
> +				ret = -EREMOTEIO;
> +				goto out_restore_sg;
> +			}
> +		}
> +
>   		/*
>   		 * Due to the alignment of our single IOVA allocation, we can
>   		 * depend on these assumptions about the segment boundary mask:
> @@ -1215,6 +1257,9 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   		prev = s;
>   	}
>   
> +	if (!iova_len)
> +		return __finalise_sg(dev, sg, nents, 0);
> +
>   	iova = iommu_dma_alloc_iova(domain, iova_len, dma_get_mask(dev), dev);
>   	if (!iova) {
>   		ret = -ENOMEM;
> @@ -1236,7 +1281,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   out_restore_sg:
>   	__invalidate_sg(sg, nents);
>   out:
> -	if (ret != -ENOMEM)
> +	if (ret != -ENOMEM && ret != -EREMOTEIO)
>   		return -EINVAL;
>   	return ret;
>   }
> @@ -1244,7 +1289,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>   static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>   		int nents, enum dma_data_direction dir, unsigned long attrs)
>   {
> -	dma_addr_t start, end;
> +	dma_addr_t end, start = DMA_MAPPING_ERROR;

There are several things I don't like about this logic; I'd rather have 
"end = 0" here...

>   	struct scatterlist *tmp;
>   	int i;
>   
> @@ -1260,14 +1305,22 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
>   	 * The scatterlist segments are mapped into a single
>   	 * contiguous IOVA allocation, so this is incredibly easy.
>   	 */

[ This comment rather stops being true :( ]

> -	start = sg_dma_address(sg);
> -	for_each_sg(sg_next(sg), tmp, nents - 1, i) {

...then generalise the first-element special case here into a dedicated 
"walk to the first non-p2p element" loop...

> +	for_each_sg(sg, tmp, nents, i) {
> +		if (sg_is_dma_bus_address(tmp)) {
> +			sg_dma_unmark_bus_address(tmp);

[ Again I question what this actually achieves ]

> +			continue;
> +		}
>   		if (sg_dma_len(tmp) == 0)
>   			break;
> -		sg = tmp;
> +
> +		if (start == DMA_MAPPING_ERROR)
> +			start = sg_dma_address(tmp);
> +
> +		end = sg_dma_address(tmp) + sg_dma_len(tmp);
>   	}
> -	end = sg_dma_address(sg) + sg_dma_len(sg);
> -	__iommu_dma_unmap(dev, start, end - start);
> +
> +	if (start != DMA_MAPPING_ERROR)

...then "if (end)" here.

Thanks,
Robin.

> +		__iommu_dma_unmap(dev, start, end - start);
>   }
>   
>   static dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
> @@ -1460,6 +1513,7 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
>   }
>   
>   static const struct dma_map_ops iommu_dma_ops = {
> +	.flags			= DMA_F_PCI_P2PDMA_SUPPORTED,
>   	.alloc			= iommu_dma_alloc,
>   	.free			= iommu_dma_free,
>   	.alloc_pages		= dma_common_alloc_pages,
