* [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
@ 2018-04-09 16:59 ` Jacopo Mondi
  0 siblings, 0 replies; 12+ messages in thread
From: Jacopo Mondi @ 2018-04-09 16:59 UTC (permalink / raw)
  To: laurent.pinchart, robin.murphy
  Cc: Jacopo Mondi, ysato, dalias, iommu, linux-sh, linux-renesas-soc,
	linux-kernel

Postpone the virt_to_page() translation of memory addresses that are not
guaranteed to be backed by a struct page.

This fixes a specific issue on the SH architecture configured with the
SPARSEMEM memory model, when mapping buffers allocated with the memblock
APIs at system initialization time, which are therefore not backed by the
page infrastructure.

The change applies to the general case as well: the early translation is
incorrect regardless, and should be performed only after first trying to
map the memory from the device coherent memory pool.

Suggested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo+renesas@jmondi.org>

---
Compared to the RFC version, I have tried to generalize the commit
message; please suggest any improvements.

I'm still a bit puzzled about what happens if dma_mmap_from_dev_coherent() fails.
Does a dma_mmap_from_dev_coherent() failure somehow guarantee that the
subsequent virt_to_page() is unproblematic, as the code assumes today?
Or is it the
 	if (off < count && user_count <= (count - off))
check that makes the translation safe?

Thanks
   j

---
 drivers/base/dma-mapping.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/base/dma-mapping.c b/drivers/base/dma-mapping.c
index 3b11835..8b4ec34 100644
--- a/drivers/base/dma-mapping.c
+++ b/drivers/base/dma-mapping.c
@@ -226,8 +226,8 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
 	unsigned long off = vma->vm_pgoff;
+	unsigned long pfn;

 	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

@@ -235,6 +235,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return ret;

 	if (off < count && user_count <= (count - off)) {
+		pfn = page_to_pfn(virt_to_page(cpu_addr));
 		ret = remap_pfn_range(vma, vma->vm_start,
 				      pfn + off,
 				      user_count << PAGE_SHIFT,
--
2.7.4


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
  2018-04-09 16:59 ` Jacopo Mondi
@ 2018-04-09 17:52     ` Christoph Hellwig
  -1 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2018-04-09 17:52 UTC (permalink / raw)
  To: Jacopo Mondi
  Cc: dalias, ysato, linux-sh, linux-kernel, linux-renesas-soc, iommu,
	laurent.pinchart

On Mon, Apr 09, 2018 at 06:59:08PM +0200, Jacopo Mondi wrote:
> I'm still a bit puzzled on what happens if dma_mmap_from_dev_coherent() fails.
> Does a dma_mmap_from_dev_coherent() failure guarantee anyhow that the
> successive virt_to_page() isn't problematic as it is today?
> Or is it the
>  	if (off < count && user_count <= (count - off))
> check that makes the translation safe?

It doesn't.  I think one major issue is that we should not simply fall
back to dma_common_mmap if no mmap method is provided, but need every
instance of dma_map_ops to explicitly opt into an mmap method that is
known to work.

>  #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
>  	unsigned long user_count = vma_pages(vma);
>  	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> -	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
>  	unsigned long off = vma->vm_pgoff;
> +	unsigned long pfn;
> 
>  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> 
> @@ -235,6 +235,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
>  		return ret;
> 
>  	if (off < count && user_count <= (count - off)) {
> +		pfn = page_to_pfn(virt_to_page(cpu_addr));
>  		ret = remap_pfn_range(vma, vma->vm_start,
>  				      pfn + off,
>  				      user_count << PAGE_SHIFT,

Why not:

		ret = remap_pfn_range(vma, vma->vm_start,
				page_to_pfn(virt_to_page(cpu_addr)) + off,

and save the temp variable?


* Re: [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
  2018-04-09 17:52     ` Christoph Hellwig
@ 2018-04-10  7:57       ` jacopo mondi
  -1 siblings, 0 replies; 12+ messages in thread
From: jacopo mondi @ 2018-04-10  7:57 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jacopo Mondi, laurent.pinchart, robin.murphy, dalias, ysato,
	linux-sh, linux-kernel, linux-renesas-soc, iommu


Hi Christoph,

On Mon, Apr 09, 2018 at 10:52:51AM -0700, Christoph Hellwig wrote:
> On Mon, Apr 09, 2018 at 06:59:08PM +0200, Jacopo Mondi wrote:
> > I'm still a bit puzzled on what happens if dma_mmap_from_dev_coherent() fails.
> > Does a dma_mmap_from_dev_coherent() failure guarantee anyhow that the
> > successive virt_to_page() isn't problematic as it is today?
> > Or is it the
> >  	if (off < count && user_count <= (count - off))
> > check that makes the translation safe?
>
> It doesn't.  I think one major issue is that we should not simply fall
> back to dma_common_mmap if no mmap method is provided, but need every
> instance of dma_map_ops to explicitly opt into an mmap method that is
> known to work.

I see... this patch thus just postpones the problem.

>
> >  #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
> >  	unsigned long user_count = vma_pages(vma);
> >  	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > -	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
> >  	unsigned long off = vma->vm_pgoff;
> > +	unsigned long pfn;
> >
> >  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> >
> > @@ -235,6 +235,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> >  		return ret;
> >
> >  	if (off < count && user_count <= (count - off)) {
> > +		pfn = page_to_pfn(virt_to_page(cpu_addr));
> >  		ret = remap_pfn_range(vma, vma->vm_start,
> >  				      pfn + off,
> >  				      user_count << PAGE_SHIFT,
>
> Why not:
>
> 		ret = remap_pfn_range(vma, vma->vm_start,
> 				page_to_pfn(virt_to_page(cpu_addr)) + off,
>
> and save the temp variable?

Sure, that's better... Should I send a v2, or, considering your comment
above, is this patch just a mitigation that should be ditched in favour
of a proper solution (which would, however, require considerably more
work)?

Thanks
   j



* Re: [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
  2018-04-10  7:57       ` jacopo mondi
@ 2018-04-13 16:30         ` jacopo mondi
  -1 siblings, 0 replies; 12+ messages in thread
From: jacopo mondi @ 2018-04-13 16:30 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Jacopo Mondi, laurent.pinchart, robin.murphy, dalias, ysato,
	linux-sh, linux-kernel, linux-renesas-soc, iommu


Hello again,

On Tue, Apr 10, 2018 at 09:57:52AM +0200, jacopo mondi wrote:
> Hi Christoph,
>
> On Mon, Apr 09, 2018 at 10:52:51AM -0700, Christoph Hellwig wrote:
> > On Mon, Apr 09, 2018 at 06:59:08PM +0200, Jacopo Mondi wrote:
> > > I'm still a bit puzzled on what happens if dma_mmap_from_dev_coherent() fails.
> > > Does a dma_mmap_from_dev_coherent() failure guarantee anyhow that the
> > > successive virt_to_page() isn't problematic as it is today?
> > > Or is it the
> > >  	if (off < count && user_count <= (count - off))
> > > check that makes the translation safe?
> >
> > It doesn't.  I think one major issue is that we should not simply fall
> > back to dma_common_mmap if no mmap method is provided, but need every
> > instance of dma_map_ops to explicitly opt into an mmap method that is
> > known to work.
>
> I see.. this patch thus just postpones the problem...
>
> >
> > >  #ifndef CONFIG_ARCH_NO_COHERENT_DMA_MMAP
> > >  	unsigned long user_count = vma_pages(vma);
> > >  	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > -	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
> > >  	unsigned long off = vma->vm_pgoff;
> > > +	unsigned long pfn;
> > >
> > >  	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
> > >
> > > @@ -235,6 +235,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
> > >  		return ret;
> > >
> > >  	if (off < count && user_count <= (count - off)) {
> > > +		pfn = page_to_pfn(virt_to_page(cpu_addr));
> > >  		ret = remap_pfn_range(vma, vma->vm_start,
> > >  				      pfn + off,
> > >  				      user_count << PAGE_SHIFT,
> >
> > Why not:
> >
> > 		ret = remap_pfn_range(vma, vma->vm_start,
> > 				page_to_pfn(virt_to_page(cpu_addr)) + off,
> >
> > and save the temp variable?
>
> Sure, that's better... Should I send a v2, or, considering your comment
> above, is this patch just a mitigation that should be ditched in favour
> of a proper solution (which would, however, require considerably more
> work)?

I don't want to be insistent, but I couldn't tell from your reply whether
a v2 is welcome or not :)

Thanks
  j



* Re: [PATCH] base: dma-mapping: Postpone cpu addr translation on mmap()
  2018-04-13 16:30         ` jacopo mondi
@ 2018-04-13 16:43           ` Christoph Hellwig
  -1 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2018-04-13 16:43 UTC (permalink / raw)
  To: jacopo mondi
  Cc: linux-renesas-soc, dalias, ysato, linux-sh, linux-kernel, iommu,
	Jacopo Mondi, laurent.pinchart

Please send a v2.


