* [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-01-28  8:38 UTC
  To: sumit.semwal
  Cc: benjamin.gaignard, lmark, labbott, Brian.Starkey, john.stultz,
	christian.koenig, cgoldswo, orjan.eide, robin.murphy, jajones,
	minchan, hridya, sspatil, linux-media, dri-devel, linaro-mm-sig,
	linux-kernel, kernel-team, surenb

Currently the system heap maps its buffers with the VM_PFNMAP flag using
remap_pfn_range. This results in such buffers not being accounted
for in PSS calculations because the VM treats this memory as having no
page structs. Without page structs there are no counters representing
how many processes are mapping a page and therefore PSS calculation
is impossible.
Historically, the ION driver used to map its buffers as VM_PFNMAP areas
due to memory carveouts that did not have page structs [1]. That
is not the case anymore, and there seems to have been a desire to move
away from remap_pfn_range [2].
The dmabuf system heap design inherits this ION behavior and maps its
pages using remap_pfn_range even though the allocated pages are backed
by page structs.
Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the
system heap and replace remap_pfn_range with vm_insert_page, following
Laura's suggestion in [1]. This would allow correct PSS calculation
for dmabufs.

[1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
[2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
(sorry, could not find lore links for these discussions)

Suggested-by: Laura Abbott <labbott@kernel.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 drivers/dma-buf/heaps/system_heap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 17e0e9a68baf..0e92e42b2251 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
 	struct sg_page_iter piter;
 	int ret;
 
+	/* All pages are backed by a "struct page" */
+	vma->vm_flags &= ~VM_PFNMAP;
+
 	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
 		struct page *page = sg_page_iter_page(&piter);
 
-		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
-				      vma->vm_page_prot);
+		ret = vm_insert_page(vma, addr, page);
 		if (ret)
 			return ret;
 		addr += PAGE_SIZE;
-- 
2.30.0.280.ga3ce27912f-goog
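
A minimal userspace sketch of how to observe the effect (illustration
only, not part of the patch): allocate from the system heap through the
dma-heap UAPI, map and touch the buffer, then dump the Rss/Pss counters
from /proc/self/smaps. With remap_pfn_range the dmabuf VMA reports zero
there; with vm_insert_page it is accounted. Error handling is trimmed.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-heap.h>

int main(void)
{
	struct dma_heap_allocation_data alloc = {
		.len = 16 * 4096,		/* 16 pages */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	int heap = open("/dev/dma_heap/system", O_RDONLY | O_CLOEXEC);
	char line[256];
	FILE *smaps;
	void *buf;

	if (heap < 0 || ioctl(heap, DMA_HEAP_IOCTL_ALLOC, &alloc) < 0)
		return 1;

	/* Map the dmabuf fd; system_heap_mmap() populates the VMA. */
	buf = mmap(NULL, alloc.len, PROT_READ | PROT_WRITE, MAP_SHARED,
		   alloc.fd, 0);
	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, alloc.len);	/* touch every page */

	/* With vm_insert_page the dmabuf VMA now shows up in Rss/Pss. */
	smaps = fopen("/proc/self/smaps", "r");
	while (smaps && fgets(line, sizeof(line), smaps))
		if (strstr(line, "Rss:") || strstr(line, "Pss:"))
			fputs(line, stdout);
	return 0;
}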



* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-01-28  8:41 UTC
  To: Sumit Semwal
  Cc: benjamin.gaignard, Liam Mark, labbott, Brian Starkey,
	John Stultz, christian.koenig, Chris Goldsworthy,
	Ørjan Eide, Robin Murphy, James Jones, Minchan Kim,
	Hridya Valsaraju, Sandeep Patil, linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Thu, Jan 28, 2021 at 12:38 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> Currently the system heap maps its buffers with the VM_PFNMAP flag using
> remap_pfn_range. This results in such buffers not being accounted
> for in PSS calculations because the VM treats this memory as having no
> page structs. Without page structs there are no counters representing
> how many processes are mapping a page and therefore PSS calculation
> is impossible.
> Historically, the ION driver used to map its buffers as VM_PFNMAP areas
> due to memory carveouts that did not have page structs [1]. That
> is not the case anymore, and there seems to have been a desire to move
> away from remap_pfn_range [2].
> The dmabuf system heap design inherits this ION behavior and maps its
> pages using remap_pfn_range even though the allocated pages are backed
> by page structs.
> Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the

Argh, please ignore VM_IO in the description. The patch does not touch
that flag. I'll fix that in the next revision.

> system heap and replace remap_pfn_range with vm_insert_page, following
> Laura's suggestion in [1]. This would allow correct PSS calculation
> for dmabufs.
>
> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> (sorry, could not find lore links for these discussions)
>
> Suggested-by: Laura Abbott <labbott@kernel.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  drivers/dma-buf/heaps/system_heap.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 17e0e9a68baf..0e92e42b2251 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
>         struct sg_page_iter piter;
>         int ret;
>
> +       /* All pages are backed by a "struct page" */
> +       vma->vm_flags &= ~VM_PFNMAP;
> +
>         for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
>                 struct page *page = sg_page_iter_page(&piter);
>
> -               ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
> -                                     vma->vm_page_prot);
> +               ret = vm_insert_page(vma, addr, page);
>                 if (ret)
>                         return ret;
>                 addr += PAGE_SIZE;
> --
> 2.30.0.280.ga3ce27912f-goog
>


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Christoph Hellwig @ 2021-01-28  9:13 UTC
  To: Suren Baghdasaryan
  Cc: sumit.semwal, benjamin.gaignard, lmark, labbott, Brian.Starkey,
	john.stultz, christian.koenig, cgoldswo, orjan.eide,
	robin.murphy, jajones, minchan, hridya, sspatil, linux-media,
	dri-devel, linaro-mm-sig, linux-kernel, kernel-team

On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote:
> Currently the system heap maps its buffers with the VM_PFNMAP flag using
> remap_pfn_range. This results in such buffers not being accounted
> for in PSS calculations because the VM treats this memory as having no
> page structs. Without page structs there are no counters representing
> how many processes are mapping a page and therefore PSS calculation
> is impossible.
> Historically, the ION driver used to map its buffers as VM_PFNMAP areas
> due to memory carveouts that did not have page structs [1]. That
> is not the case anymore, and there seems to have been a desire to move
> away from remap_pfn_range [2].
> The dmabuf system heap design inherits this ION behavior and maps its
> pages using remap_pfn_range even though the allocated pages are backed
> by page structs.
> Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the
> system heap and replace remap_pfn_range with vm_insert_page, following
> Laura's suggestion in [1]. This would allow correct PSS calculation
> for dmabufs.
> 
> [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> (sorry, could not find lore links for these discussions)
> 
> Suggested-by: Laura Abbott <labbott@kernel.org>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  drivers/dma-buf/heaps/system_heap.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 17e0e9a68baf..0e92e42b2251 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
>  	struct sg_page_iter piter;
>  	int ret;
>  
> +	/* All pages are backed by a "struct page" */
> +	vma->vm_flags &= ~VM_PFNMAP;

Why do we clear this flag?  It shouldn't even be set here as far as I
can tell.


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-01-28 17:52 UTC
  To: Christoph Hellwig
  Cc: Sumit Semwal, benjamin.gaignard, Liam Mark, labbott,
	Brian Starkey, John Stultz, christian.koenig, Chris Goldsworthy,
	Ørjan Eide, Robin Murphy, James Jones, Minchan Kim,
	Hridya Valsaraju, Sandeep Patil, linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote:
> > Currently the system heap maps its buffers with the VM_PFNMAP flag using
> > remap_pfn_range. This results in such buffers not being accounted
> > for in PSS calculations because the VM treats this memory as having no
> > page structs. Without page structs there are no counters representing
> > how many processes are mapping a page and therefore PSS calculation
> > is impossible.
> > Historically, the ION driver used to map its buffers as VM_PFNMAP areas
> > due to memory carveouts that did not have page structs [1]. That
> > is not the case anymore, and there seems to have been a desire to move
> > away from remap_pfn_range [2].
> > The dmabuf system heap design inherits this ION behavior and maps its
> > pages using remap_pfn_range even though the allocated pages are backed
> > by page structs.
> > Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the
> > system heap and replace remap_pfn_range with vm_insert_page, following
> > Laura's suggestion in [1]. This would allow correct PSS calculation
> > for dmabufs.
> >
> > [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> > [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> > (sorry, could not find lore links for these discussions)
> >
> > Suggested-by: Laura Abbott <labbott@kernel.org>
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> >  drivers/dma-buf/heaps/system_heap.c | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > index 17e0e9a68baf..0e92e42b2251 100644
> > --- a/drivers/dma-buf/heaps/system_heap.c
> > +++ b/drivers/dma-buf/heaps/system_heap.c
> > @@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> >       struct sg_page_iter piter;
> >       int ret;
> >
> > +     /* All pages are backed by a "struct page" */
> > +     vma->vm_flags &= ~VM_PFNMAP;
>
> Why do we clear this flag?  It shouldn't even be set here as far as I
> can tell.

Thanks for the question, Christoph.
I tracked down that flag being set by drm_gem_mmap_obj(), which DRM
drivers use to "Set up the VMA to prepare mapping of the GEM object"
(according to the drm_gem_mmap_obj comments). I also see a pattern in
several DRM drivers of calling drm_gem_mmap_obj()/drm_gem_mmap(), then
clearing VM_PFNMAP and then mapping the VMA (for example here:
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/rockchip/rockchip_drm_gem.c#L246).
I thought that the dmabuf allocator (in this case the system heap) would
be the right place to set these flags because it controls how memory
is allocated before mapping. However, it's quite possible that I'm
missing the real reason for VM_PFNMAP being set in drm_gem_mmap_obj()
before dma_buf_mmap() is called. I could not find the answer to that,
so I hope someone here can clarify that.
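
For reference, the pattern looks roughly like this (paraphrased from
drm_gem_mmap_obj() and the rockchip code linked above, trimmed to the
relevant lines, so not verbatim):

	/* drm_gem_mmap_obj() marks every GEM mapping as a PFN map up front */
	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;

	/*
	 * ...while drivers whose buffers are backed by struct pages undo
	 * that before inserting pages, as rockchip_drm_gem_object_mmap()
	 * does:
	 */
	vma->vm_flags &= ~VM_PFNMAP;
	vma->vm_pgoff = 0;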


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Minchan Kim @ 2021-01-28 18:19 UTC
  To: Suren Baghdasaryan
  Cc: Christoph Hellwig, Sumit Semwal, benjamin.gaignard, Liam Mark,
	labbott, Brian Starkey, John Stultz, christian.koenig,
	Chris Goldsworthy, Ørjan Eide, Robin Murphy, James Jones,
	Hridya Valsaraju, Sandeep Patil, linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Thu, Jan 28, 2021 at 09:52:59AM -0800, Suren Baghdasaryan wrote:
> On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig <hch@infradead.org> wrote:
> >
> > On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote:
> > > Currently the system heap maps its buffers with the VM_PFNMAP flag using
> > > remap_pfn_range. This results in such buffers not being accounted
> > > for in PSS calculations because the VM treats this memory as having no
> > > page structs. Without page structs there are no counters representing
> > > how many processes are mapping a page and therefore PSS calculation
> > > is impossible.
> > > Historically, the ION driver used to map its buffers as VM_PFNMAP areas
> > > due to memory carveouts that did not have page structs [1]. That
> > > is not the case anymore, and there seems to have been a desire to move
> > > away from remap_pfn_range [2].
> > > The dmabuf system heap design inherits this ION behavior and maps its
> > > pages using remap_pfn_range even though the allocated pages are backed
> > > by page structs.
> > > Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the
> > > system heap and replace remap_pfn_range with vm_insert_page, following
> > > Laura's suggestion in [1]. This would allow correct PSS calculation
> > > for dmabufs.
> > >
> > > [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> > > [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> > > (sorry, could not find lore links for these discussions)
> > >
> > > Suggested-by: Laura Abbott <labbott@kernel.org>
> > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > ---
> > >  drivers/dma-buf/heaps/system_heap.c | 6 ++++--
> > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > > index 17e0e9a68baf..0e92e42b2251 100644
> > > --- a/drivers/dma-buf/heaps/system_heap.c
> > > +++ b/drivers/dma-buf/heaps/system_heap.c
> > > @@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > >       struct sg_page_iter piter;
> > >       int ret;
> > >
> > > +     /* All pages are backed by a "struct page" */
> > > +     vma->vm_flags &= ~VM_PFNMAP;
> >
> > Why do we clear this flag?  It shouldn't even be set here as far as I
> > can tell.
> 
> Thanks for the question, Christoph.
> I tracked down that flag being set by drm_gem_mmap_obj(), which DRM
> drivers use to "Set up the VMA to prepare mapping of the GEM object"
> (according to the drm_gem_mmap_obj comments). I also see a pattern in
> several DRM drivers of calling drm_gem_mmap_obj()/drm_gem_mmap(), then
> clearing VM_PFNMAP and then mapping the VMA (for example here:
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/rockchip/rockchip_drm_gem.c#L246).
> I thought that the dmabuf allocator (in this case the system heap) would
> be the right place to set these flags because it controls how memory
> is allocated before mapping. However, it's quite possible that I'm

However, you're not setting a flag here but removing one behind the
caller's back. That is different from appending more flags (removing
a condition vs adding more conditions). If the flag should be
removed, the caller didn't need to set it in the first place. Hiding
this under the API will keep enabling wrong use cases in the future.

> missing the real reason for VM_PFNMAP being set in drm_gem_mmap_obj()
> before dma_buf_mmap() is called. I could not find the answer to that,
> so I hope someone here can clarify that.

My guess is that DRM used carved-out pure-PFN memory a long time ago
and switched to dmabuf at some point.
Whatever the history, rather than removing the flag behind the
callers' backs, let's add WARN_ON(vma->vm_flags & VM_PFNMAP) so
we can catch and clean them up, and start the discussion.
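
Something like the following in system_heap_mmap(), as a sketch of
that idea (exact form up for discussion):

	/*
	 * Catch callers that still request a PFN mapping instead of
	 * silently clearing the flag behind their backs.
	 */
	if (WARN_ON(vma->vm_flags & VM_PFNMAP))
		return -EINVAL;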


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-02-02  1:08 UTC
  To: Minchan Kim
  Cc: Christoph Hellwig, Sumit Semwal, (Exiting) Benjamin Gaignard,
	Liam Mark, labbott, Brian Starkey, John Stultz,
	Christian König, Chris Goldsworthy, Ørjan Eide,
	Robin Murphy, James Jones, Hridya Valsaraju, Sandeep Patil,
	linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Thu, Jan 28, 2021 at 11:00 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Thu, Jan 28, 2021 at 10:19 AM Minchan Kim <minchan@kernel.org> wrote:
> >
> > On Thu, Jan 28, 2021 at 09:52:59AM -0800, Suren Baghdasaryan wrote:
> > > On Thu, Jan 28, 2021 at 1:13 AM Christoph Hellwig <hch@infradead.org> wrote:
> > > >
> > > > On Thu, Jan 28, 2021 at 12:38:17AM -0800, Suren Baghdasaryan wrote:
> > > > > Currently the system heap maps its buffers with the VM_PFNMAP flag using
> > > > > remap_pfn_range. This results in such buffers not being accounted
> > > > > for in PSS calculations because the VM treats this memory as having no
> > > > > page structs. Without page structs there are no counters representing
> > > > > how many processes are mapping a page and therefore PSS calculation
> > > > > is impossible.
> > > > > Historically, the ION driver used to map its buffers as VM_PFNMAP areas
> > > > > due to memory carveouts that did not have page structs [1]. That
> > > > > is not the case anymore, and there seems to have been a desire to move
> > > > > away from remap_pfn_range [2].
> > > > > The dmabuf system heap design inherits this ION behavior and maps its
> > > > > pages using remap_pfn_range even though the allocated pages are backed
> > > > > by page structs.
> > > > > Clear the VM_IO and VM_PFNMAP flags when mapping memory allocated by the
> > > > > system heap and replace remap_pfn_range with vm_insert_page, following
> > > > > Laura's suggestion in [1]. This would allow correct PSS calculation
> > > > > for dmabufs.
> > > > >
> > > > > [1] https://driverdev-devel.linuxdriverproject.narkive.com/v0fJGpaD/using-ion-memory-for-direct-io
> > > > > [2] http://driverdev.linuxdriverproject.org/pipermail/driverdev-devel/2018-October/127519.html
> > > > > (sorry, could not find lore links for these discussions)
> > > > >
> > > > > Suggested-by: Laura Abbott <labbott@kernel.org>
> > > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > > > ---
> > > > >  drivers/dma-buf/heaps/system_heap.c | 6 ++++--
> > > > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > > > > index 17e0e9a68baf..0e92e42b2251 100644
> > > > > --- a/drivers/dma-buf/heaps/system_heap.c
> > > > > +++ b/drivers/dma-buf/heaps/system_heap.c
> > > > > @@ -200,11 +200,13 @@ static int system_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> > > > >       struct sg_page_iter piter;
> > > > >       int ret;
> > > > >
> > > > > +     /* All pages are backed by a "struct page" */
> > > > > +     vma->vm_flags &= ~VM_PFNMAP;
> > > >
> > > > Why do we clear this flag?  It shouldn't even be set here as far as I
> > > > can tell.
> > >
> > > Thanks for the question, Christoph.
> > > I tracked down that flag being set by drm_gem_mmap_obj(), which DRM
> > > drivers use to "Set up the VMA to prepare mapping of the GEM object"
> > > (according to the drm_gem_mmap_obj comments). I also see a pattern in
> > > several DRM drivers of calling drm_gem_mmap_obj()/drm_gem_mmap(), then
> > > clearing VM_PFNMAP and then mapping the VMA (for example here:
> > > https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/rockchip/rockchip_drm_gem.c#L246).
> > > I thought that the dmabuf allocator (in this case the system heap) would
> > > be the right place to set these flags because it controls how memory
> > > is allocated before mapping. However, it's quite possible that I'm
> >
> > However, you're not setting a flag here but removing one behind the
> > caller's back. That is different from appending more flags (removing
> > a condition vs adding more conditions). If the flag should be
> > removed, the caller didn't need to set it in the first place. Hiding
> > this under the API will keep enabling wrong use cases in the future.
>
> Which takes us back to the question of why VM_PFNMAP is being set by
> the caller in the first place.
>
> >
> > > missing the real reason for VM_PFNMAP being set in drm_gem_mmap_obj()
> > > before dma_buf_mmap() is called. I could not find the answer to that,
> > > so I hope someone here can clarify that.
> >
> > My guess is that DRM used carved-out pure-PFN memory a long time ago
> > and switched to dmabuf at some point.
>
> It would be really good to know the reason for sure to address the
> issue properly.
>
> > Whatever the history, rather than removing the flag behind the
> > callers' backs, let's add WARN_ON(vma->vm_flags & VM_PFNMAP) so
> > we can catch and clean them up, and start the discussion.
>
> The issue with not clearing the flag here is that vm_insert_page() has
> a BUG_ON(vma->vm_flags & VM_PFNMAP). If we do not clear this flag, I
> suspect we will get many angry developers :)
> If your above guess is correct and we can mandate dmabuf heap users
> not to use VM_PFNMAP, then I think the following code might be the best
> way forward:
>
> +       bool pfn_requested = !!(vma->vm_flags & VM_PFNMAP);
> +       WARN_ON_ONCE(pfn_requested);
>
>         for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
>                 struct page *page = sg_page_iter_page(&piter);
>
> -               ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
> -                                     vma->vm_page_prot);
> +               ret = pfn_requested ?
> +                       remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
> +                                     vma->vm_page_prot) :
> +                       vm_insert_page(vma, addr, page);

Folks, any objections to the approach above?


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Christoph Hellwig @ 2021-02-02  7:03 UTC
  To: Suren Baghdasaryan
  Cc: Minchan Kim, Christoph Hellwig, Sumit Semwal,
	(Exiting) Benjamin Gaignard, Liam Mark, labbott, Brian Starkey,
	John Stultz, Christian König, Chris Goldsworthy, Ørjan Eide,
	Robin Murphy, James Jones, Hridya Valsaraju, Sandeep Patil,
	linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

IMHO the

	BUG_ON(vma->vm_flags & VM_PFNMAP);

in vm_insert_page should just become a WARN_ON_ONCE with an error
return, and then we just need to gradually fix up the callers that
trigger it instead of coming up with workarounds like this.
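
As a sketch against vm_insert_page() in mm/memory.c (assuming its
current structure; the final form may differ):

 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
 		BUG_ON(mmap_read_trylock(vma->vm_mm));
-		BUG_ON(vma->vm_flags & VM_PFNMAP);
+		if (WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP))
+			return -EINVAL;
 		vma->vm_flags |= VM_MIXEDMAP;
 	}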


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-02-02  8:44 UTC
  To: Christoph Hellwig
  Cc: Minchan Kim, Sumit Semwal, (Exiting) Benjamin Gaignard,
	Liam Mark, labbott, Brian Starkey, John Stultz, Christian König,
	Chris Goldsworthy, Ørjan Eide, Robin Murphy, James Jones,
	Hridya Valsaraju, Sandeep Patil, linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> IMHO the
>
>         BUG_ON(vma->vm_flags & VM_PFNMAP);
>
> in vm_insert_page should just become a WARN_ON_ONCE with an error
> return, and then we just need to gradually fix up the callers that
> trigger it instead of coming up with workarounds like this.

For the existing vm_insert_page users this should be fine since
BUG_ON() guarantees that none of them sets VM_PFNMAP. However, for
system_heap_mmap I have one concern. When vm_insert_page returns an
error due to the VM_PFNMAP flag, the whole mmap operation should fail
(system_heap_mmap returning an error leading to dma_buf_mmap failure).
Could there be cases where a heap user (a DRM driver for example) would
be expected to work with a heap which requires VM_PFNMAP and at the
same time with another heap which requires !VM_PFNMAP? IOW, this
introduces a dependency between the heap and its
user. The user would have to know the expectations of the heap it uses
and can't work with another heap that has the opposite expectation.
This use case is purely theoretical and maybe I should not worry about
it for now?


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Christoph Hellwig @ 2021-02-02  8:51 UTC
  To: Suren Baghdasaryan
  Cc: Christoph Hellwig, Minchan Kim, Sumit Semwal,
	(Exiting) Benjamin Gaignard, Liam Mark, labbott, Brian Starkey,
	John Stultz, Christian König, Chris Goldsworthy, Ørjan Eide,
	Robin Murphy, James Jones, Hridya Valsaraju, Sandeep Patil,
	linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Tue, Feb 02, 2021 at 12:44:44AM -0800, Suren Baghdasaryan wrote:
> On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig <hch@infradead.org> wrote:
> >
> > IMHO the
> >
> >         BUG_ON(vma->vm_flags & VM_PFNMAP);
> >
> > in vm_insert_page should just become a WARN_ON_ONCE with an error
> > return, and then we just need to gradually fix up the callers that
> > trigger it instead of coming up with workarounds like this.
> 
> For the existing vm_insert_page users this should be fine since
> BUG_ON() guarantees that none of them sets VM_PFNMAP.

Even for them, WARN_ON_ONCE plus an actual error return is a far
better assert and much more developer friendly.

> However, for
> system_heap_mmap I have one concern. When vm_insert_page returns an
> error due to the VM_PFNMAP flag, the whole mmap operation should fail
> (system_heap_mmap returning an error leading to dma_buf_mmap failure).
> Could there be cases where a heap user (a DRM driver for example) would
> be expected to work with a heap which requires VM_PFNMAP and at the
> same time with another heap which requires !VM_PFNMAP? IOW, this
> introduces a dependency between the heap and its
> user. The user would have to know the expectations of the heap it uses
> and can't work with another heap that has the opposite expectation.
> This use case is purely theoretical and maybe I should not worry about
> it for now?

If such a case ever arises we can look into it.


* Re: [PATCH 1/1] dma-buf: heaps: Map system heap pages as managed by linux vm
From: Suren Baghdasaryan @ 2021-02-02 18:23 UTC
  To: Christoph Hellwig
  Cc: Minchan Kim, Sumit Semwal, (Exiting) Benjamin Gaignard,
	Liam Mark, labbott, Brian Starkey, John Stultz, Christian König,
	Chris Goldsworthy, Ørjan Eide, Robin Murphy, James Jones,
	Hridya Valsaraju, Sandeep Patil, linux-media, DRI mailing list,
	moderated list:DMA BUFFER SHARING FRAMEWORK, LKML, kernel-team

On Tue, Feb 2, 2021 at 12:51 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Tue, Feb 02, 2021 at 12:44:44AM -0800, Suren Baghdasaryan wrote:
> > On Mon, Feb 1, 2021 at 11:03 PM Christoph Hellwig <hch@infradead.org> wrote:
> > >
> > > IMHO the
> > >
> > >         BUG_ON(vma->vm_flags & VM_PFNMAP);
> > >
> > > in vm_insert_page should just become a WARN_ON_ONCE with an error
> > > return, and then we just need to gradually fix up the callers that
> > > trigger it instead of coming up with workarounds like this.
> >
> > For the existing vm_insert_page users this should be fine since
> > BUG_ON() guarantees that none of them sets VM_PFNMAP.
>
> Even for them, WARN_ON_ONCE plus an actual error return is a far
> better assert and much more developer friendly.

Agree.

>
> > However, for
> > system_heap_mmap I have one concern. When vm_insert_page returns an
> > error due to the VM_PFNMAP flag, the whole mmap operation should fail
> > (system_heap_mmap returning an error leading to dma_buf_mmap failure).
> > Could there be cases where a heap user (a DRM driver for example) would
> > be expected to work with a heap which requires VM_PFNMAP and at the
> > same time with another heap which requires !VM_PFNMAP? IOW, this
> > introduces a dependency between the heap and its
> > user. The user would have to know the expectations of the heap it uses
> > and can't work with another heap that has the opposite expectation.
> > This use case is purely theoretical and maybe I should not worry about
> > it for now?
>
> If such a case ever arises we can look into it.

Sounds good. I'll prepare a new patch and will post it later today. Thanks!

