From: "Sven Peter" <sven@svenpeter.dev>
To: "Alyssa Rosenzweig" <alyssa@rosenzweig.io>
Cc: iommu@lists.linux-foundation.org,
	"Joerg Roedel" <joro@8bytes.org>, "Will Deacon" <will@kernel.org>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Arnd Bergmann" <arnd@kernel.org>,
	"Mohamed Mediouni" <mohamed.mediouni@caramail.com>,
	"Alexander Graf" <graf@amazon.com>,
	"Hector Martin" <marcan@marcan.st>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/8] iommu/dma: Disable get_sgtable for granule > PAGE_SIZE
Date: Wed, 01 Sep 2021 19:06:27 +0200
Message-ID: <c8bc7f77-3b46-4675-a642-76871fcec963@www.fastmail.com>
In-Reply-To: <YS6fasuqPURbmC6X@sunset>



On Tue, Aug 31, 2021, at 23:30, Alyssa Rosenzweig wrote:
> I use this function for cross-device sharing on the M1 display driver.
> Arguably this is unsafe but it works on 16k kernels and if you want to
> test the function on 4k, you know where my code is.
> 

My biggest issue is that I do not understand how this function is supposed
to be used correctly. It would work fine as-is if it only ever gets passed buffers
allocated by the coherent API but there's no way to check or guarantee that.
There may also be callers making assumptions that no longer hold when
iovad->granule > PAGE_SIZE.


Regarding your case: I'm not convinced the function is meant to be used there.
If I understand it correctly, your code first allocates memory with dma_alloc_coherent
(which possibly creates an sgt internally and then maps it with iommu_map_sg),
then coerces that back into an sgt with dma_get_sgtable, and then maps that sgt to
another iommu domain with dma_map_sg while assuming that the result will be contiguous
in IOVA space. It'll work out because dma_alloc_coherent is the very thing
meant to allocate pages that can be mapped into kernel and device VA space
as a single contiguous block, and because both of your IOMMUs are different
instances of the same HW block. Anything allocated by dma_alloc_coherent for the
first IOMMU will therefore have the right shape to be mapped as a single
contiguous block for the second IOMMU as well.
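
Roughly, I picture the current flow like this (untested sketch just to make sure
we're talking about the same thing; disp_dev/dcp_dev are placeholder device
pointers for the two sides, not whatever your driver actually calls them):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int current_flow(struct device *disp_dev, struct device *dcp_dev,
			size_t size)
{
	dma_addr_t iova_a;
	struct sg_table sgt;
	void *cpu;
	int nents, ret;

	/* buffer for the first IOMMU, contiguous in kernel VA and its IOVA space */
	cpu = dma_alloc_coherent(disp_dev, size, &iova_a, GFP_KERNEL);
	if (!cpu)
		return -ENOMEM;

	/* recover a page-level description of that buffer */
	ret = dma_get_sgtable(disp_dev, &sgt, cpu, iova_a, size);
	if (ret) {
		dma_free_coherent(disp_dev, size, cpu, iova_a);
		return ret;
	}

	/* map the same pages into the second device's IOVA space */
	nents = dma_map_sg(dcp_dev, sgt.sgl, sgt.orig_nents, DMA_BIDIRECTIONAL);
	if (!nents) {
		sg_free_table(&sgt);
		dma_free_coherent(disp_dev, size, cpu, iova_a);
		return -ENOMEM;
	}

	/*
	 * The driver then assumes sg_dma_address(sgt.sgl) covers the whole
	 * buffer as one contiguous range, which dma_map_sg does not
	 * guarantee in general.
	 */
	return 0;
}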

What could be done in your case instead is to use the IOMMU API directly:
allocate the pages yourself (while ensuring the sgt you create is made up
of blocks with size and physaddr aligned to max(domain_a->granule, domain_b->granule))
and then just use iommu_map_sg for both domains, which actually comes with the
guarantee that the result will be a single contiguous block in IOVA space and
doesn't require the sgt roundtrip.
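
Something along these lines is what I have in mind (again only an untested
sketch: the function and parameter names, the shared fixed IOVA and the
simplified error path are made up for illustration, and iommu_map_sg's exact
signature/return convention depends on the kernel version):

#include <linux/gfp.h>
#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

static int share_buffer(struct iommu_domain *dom_a, struct iommu_domain *dom_b,
			unsigned long iova, size_t size, size_t granule)
{
	unsigned int i, nents = size / granule;
	struct scatterlist *sg;
	struct sg_table sgt;
	int ret;

	/* granule is assumed to be max(domain_a->granule, domain_b->granule) */
	ret = sg_alloc_table(&sgt, nents, GFP_KERNEL);
	if (ret)
		return ret;

	/* each block is granule-sized and granule-aligned by construction */
	for_each_sg(sgt.sgl, sg, nents, i) {
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
						get_order(granule));

		if (!page)
			goto err; /* real code would also free the pages already allocated */
		sg_set_page(sg, page, granule, 0);
	}

	/* iommu_map_sg maps the list as one contiguous IOVA range per domain */
	if (iommu_map_sg(dom_a, iova, sgt.sgl, nents,
			 IOMMU_READ | IOMMU_WRITE) != size)
		goto err;
	if (iommu_map_sg(dom_b, iova, sgt.sgl, nents,
			 IOMMU_READ | IOMMU_WRITE) != size)
		goto err;

	sg_free_table(&sgt); /* the pages themselves stay mapped */
	return 0;

err:
	sg_free_table(&sgt);
	return -ENOMEM;
}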



Sven


> On Sat, Aug 28, 2021 at 05:36:37PM +0200, Sven Peter wrote:
> > Pretend that iommu_dma_get_sgtable is not implemented when
> > granule > PAGE_SIZE since I can neither test this function right now
> > nor do I fully understand how it is used.
> > 
> > Signed-off-by: Sven Peter <sven@svenpeter.dev>
> > ---
> >  drivers/iommu/dma-iommu.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d6e273ec3de6..64fbd9236820 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -1315,9 +1315,15 @@ static int iommu_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
> >  		void *cpu_addr, dma_addr_t dma_addr, size_t size,
> >  		unsigned long attrs)
> >  {
> > +	struct iommu_domain *domain = iommu_get_dma_domain(dev);
> > +	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > +	struct iova_domain *iovad = &cookie->iovad;
> >  	struct page *page;
> >  	int ret;
> >  
> > +	if (iovad->granule > PAGE_SIZE)
> > +		return -ENXIO;
> > +
> >  	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr)) {
> >  		struct page **pages = dma_common_find_pages(cpu_addr);
> >  
> > -- 
> > 2.25.1
> > 
> 


-- 
Sven Peter
