linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/2] drm: allow limiting the scatter list size.
       [not found] <20200818074828.9509-1-kraxel@redhat.com>
@ 2020-08-18  7:48 ` Gerd Hoffmann
  2020-08-18  7:57   ` Christian König
  2020-08-18  7:48 ` [PATCH 2/2] drm/virtio: set max_segment Gerd Hoffmann
  1 sibling, 1 reply; 6+ messages in thread
From: Gerd Hoffmann @ 2020-08-18  7:48 UTC (permalink / raw)
  To: dri-devel
  Cc: Gerd Hoffmann, Alex Deucher, Christian König, David Airlie,
	Daniel Vetter, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Lucas Stach, Russell King, Christian Gmeiner,
	Rob Clark, Sean Paul, Ben Skeggs, Sandy Huang,
	Heiko Stübner, Thierry Reding, Jonathan Hunter,
	Oleksandr Andrushchenko, open list:RADEON and AMDGPU DRM DRIVERS,
	open list, moderated list:DRM DRIVERS FOR VIVANTE GPU IP,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS,
	moderated list:ARM/Rockchip SoC support,
	open list:ARM/Rockchip SoC support,
	open list:DRM DRIVERS FOR NVIDIA TEGRA,
	moderated list:DRM DRIVERS FOR XEN

Add a max_segment argument to drm_prime_pages_to_sg().  When set, pass
it through to the __sg_alloc_table_from_pages() call; otherwise use
SCATTERLIST_MAX_SEGMENT.

Also add max_segment field to gem objects and pass it to
drm_prime_pages_to_sg() calls in drivers and helpers.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/drm/drm_gem.h                       |  8 ++++++++
 include/drm/drm_prime.h                     |  3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  3 ++-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  3 ++-
 drivers/gpu/drm/drm_prime.c                 | 10 +++++++---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |  3 ++-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  3 ++-
 drivers/gpu/drm/msm/msm_gem.c               |  3 ++-
 drivers/gpu/drm/msm/msm_gem_prime.c         |  3 ++-
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  3 ++-
 drivers/gpu/drm/radeon/radeon_prime.c       |  3 ++-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  6 ++++--
 drivers/gpu/drm/tegra/gem.c                 |  3 ++-
 drivers/gpu/drm/vgem/vgem_drv.c             |  3 ++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  3 ++-
 15 files changed, 43 insertions(+), 17 deletions(-)

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index 337a48321705..dea5e92e745b 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -241,6 +241,14 @@ struct drm_gem_object {
 	 */
 	size_t size;
 
+	/**
+	 * @max_segment:
+	 *
+	 * Max size for scatter list segments.  When unset the default
+	 * (SCATTERLIST_MAX_SEGMENT) is used.
+	 */
+	size_t max_segment;
+
 	/**
 	 * @name:
 	 *
diff --git a/include/drm/drm_prime.h b/include/drm/drm_prime.h
index 9af7422b44cf..2c3689435cb4 100644
--- a/include/drm/drm_prime.h
+++ b/include/drm/drm_prime.h
@@ -88,7 +88,8 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr);
 int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma);
 
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages);
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+				       size_t max_segment);
 struct dma_buf *drm_gem_prime_export(struct drm_gem_object *obj,
 				     int flags);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 519ce4427fce..5e8a9760b33f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -303,7 +303,8 @@ static struct sg_table *amdgpu_dma_buf_map(struct dma_buf_attachment *attach,
 	switch (bo->tbo.mem.mem_type) {
 	case TTM_PL_TT:
 		sgt = drm_prime_pages_to_sg(bo->tbo.ttm->pages,
-					    bo->tbo.num_pages);
+					    bo->tbo.num_pages,
+					    obj->max_segment);
 		if (IS_ERR(sgt))
 			return sgt;
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 4b7cfbac4daa..cfb979d808fd 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -656,7 +656,8 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_object *obj)
 
 	WARN_ON(shmem->base.import_attach);
 
-	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT);
+	return drm_prime_pages_to_sg(shmem->pages, obj->size >> PAGE_SHIFT,
+				     obj->max_segment);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 1693aa7c14b5..27c783fd6633 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -802,7 +802,8 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops =  {
  *
  * This is useful for implementing &drm_gem_object_funcs.get_sg_table.
  */
-struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages)
+struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages,
+				       size_t max_segment)
 {
 	struct sg_table *sg = NULL;
 	int ret;
@@ -813,8 +814,11 @@ struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_page
 		goto out;
 	}
 
-	ret = sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
-				nr_pages << PAGE_SHIFT, GFP_KERNEL);
+	if (max_segment == 0 || max_segment > SCATTERLIST_MAX_SEGMENT)
+		max_segment = SCATTERLIST_MAX_SEGMENT;
+	ret = __sg_alloc_table_from_pages(sg, pages, nr_pages, 0,
+					  nr_pages << PAGE_SHIFT,
+					  max_segment, GFP_KERNEL);
 	if (ret)
 		goto out;
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index f06e19e7be04..e5b6e7996f80 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -103,7 +103,8 @@ struct page **etnaviv_gem_get_pages(struct etnaviv_gem_object *etnaviv_obj)
 		int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
 		struct sg_table *sgt;
 
-		sgt = drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
+		sgt = drm_prime_pages_to_sg(etnaviv_obj->pages, npages,
+					    etnaviv_obj->base.max_segment);
 		if (IS_ERR(sgt)) {
 			dev_err(dev->dev, "failed to allocate sgt: %ld\n",
 				PTR_ERR(sgt));
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 6d9e5c3c4dd5..f327676450bd 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -19,7 +19,8 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	if (WARN_ON(!etnaviv_obj->pages))  /* should have already pinned! */
 		return ERR_PTR(-EINVAL);
 
-	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages);
+	return drm_prime_pages_to_sg(etnaviv_obj->pages, npages,
+				     obj->max_segment);
 }
 
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b2f49152b4d4..f805419bb84a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -126,7 +126,8 @@ static struct page **get_pages(struct drm_gem_object *obj)
 
 		msm_obj->pages = p;
 
-		msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
+		msm_obj->sgt = drm_prime_pages_to_sg(p, npages,
+						     obj->max_segment);
 		if (IS_ERR(msm_obj->sgt)) {
 			void *ptr = ERR_CAST(msm_obj->sgt);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index d7c8948427fe..a5a412564c7f 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -19,7 +19,8 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 		return NULL;
 
-	return drm_prime_pages_to_sg(msm_obj->pages, npages);
+	return drm_prime_pages_to_sg(msm_obj->pages, npages,
+				     obj->max_segment);
 }
 
 void *msm_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index bae6a3eccee0..56a2e916d51a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -32,7 +32,8 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
 	int npages = nvbo->bo.num_pages;
 
-	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages);
+	return drm_prime_pages_to_sg(nvbo->bo.ttm->pages, npages,
+				     obj->max_segment);
 }
 
 void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b906e8fbd5f3..503e35625045 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -36,7 +36,8 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct radeon_bo *bo = gem_to_radeon_bo(obj);
 	int npages = bo->tbo.num_pages;
 
-	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages);
+	return drm_prime_pages_to_sg(bo->tbo.ttm->pages, npages,
+				     obj->max_segment);
 }
 
 void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index b9275ba7c5a5..444657e03c16 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -85,7 +85,8 @@ static int rockchip_gem_get_pages(struct rockchip_gem_object *rk_obj)
 
 	rk_obj->num_pages = rk_obj->base.size >> PAGE_SHIFT;
 
-	rk_obj->sgt = drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages);
+	rk_obj->sgt = drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages,
+					    rk_obj->base.max_segment);
 	if (IS_ERR(rk_obj->sgt)) {
 		ret = PTR_ERR(rk_obj->sgt);
 		goto err_put_pages;
@@ -442,7 +443,8 @@ struct sg_table *rockchip_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	int ret;
 
 	if (rk_obj->pages)
-		return drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages);
+		return drm_prime_pages_to_sg(rk_obj->pages, rk_obj->num_pages,
+					     obj->max_segment);
 
 	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
 	if (!sgt)
diff --git a/drivers/gpu/drm/tegra/gem.c b/drivers/gpu/drm/tegra/gem.c
index 723df142a981..8d98b02a8d21 100644
--- a/drivers/gpu/drm/tegra/gem.c
+++ b/drivers/gpu/drm/tegra/gem.c
@@ -284,7 +284,8 @@ static int tegra_bo_get_pages(struct drm_device *drm, struct tegra_bo *bo)
 
 	bo->num_pages = bo->gem.size >> PAGE_SHIFT;
 
-	bo->sgt = drm_prime_pages_to_sg(bo->pages, bo->num_pages);
+	bo->sgt = drm_prime_pages_to_sg(bo->pages, bo->num_pages,
+					bo->gem.max_segment);
 	if (IS_ERR(bo->sgt)) {
 		err = PTR_ERR(bo->sgt);
 		goto put_pages;
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index 313339bbff90..d25c93b5a2c1 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -321,7 +321,8 @@ static struct sg_table *vgem_prime_get_sg_table(struct drm_gem_object *obj)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	return drm_prime_pages_to_sg(bo->pages, bo->base.size >> PAGE_SHIFT);
+	return drm_prime_pages_to_sg(bo->pages, bo->base.size >> PAGE_SHIFT,
+				     obj->max_segment);
 }
 
 static struct drm_gem_object* vgem_prime_import(struct drm_device *dev,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..362fe5311b1b 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -179,7 +179,8 @@ struct sg_table *xen_drm_front_gem_get_sg_table(struct drm_gem_object *gem_obj)
 	if (!xen_obj->pages)
 		return ERR_PTR(-ENOMEM);
 
-	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+	return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages,
+				     gem_obj->max_segment);
 }
 
 struct drm_gem_object *
-- 
2.18.4



* [PATCH 2/2] drm/virtio: set max_segment
       [not found] <20200818074828.9509-1-kraxel@redhat.com>
  2020-08-18  7:48 ` [PATCH 1/2] drm: allow limiting the scatter list size Gerd Hoffmann
@ 2020-08-18  7:48 ` Gerd Hoffmann
  1 sibling, 0 replies; 6+ messages in thread
From: Gerd Hoffmann @ 2020-08-18  7:48 UTC (permalink / raw)
  To: dri-devel
  Cc: Gerd Hoffmann, David Airlie, Daniel Vetter,
	open list:VIRTIO GPU DRIVER, open list

When initializing gem objects, call virtio_max_dma_size() to figure out
the scatter list limit.  This is needed to make virtio-gpu work
properly with SEV.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 1359eb8f1a02..26b608e2554e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -214,6 +214,7 @@ int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 		goto err_free_gem;
 
 	bo->dumb = params->dumb;
+	shmem_obj->base.max_segment = virtio_max_dma_size(vgdev->vdev);
 
 	if (fence) {
 		ret = -ENOMEM;
-- 
2.18.4



* Re: [PATCH 1/2] drm: allow limiting the scatter list size.
  2020-08-18  7:48 ` [PATCH 1/2] drm: allow limiting the scatter list size Gerd Hoffmann
@ 2020-08-18  7:57   ` Christian König
  2020-08-18  8:27     ` Gerd Hoffmann
  0 siblings, 1 reply; 6+ messages in thread
From: Christian König @ 2020-08-18  7:57 UTC (permalink / raw)
  To: Gerd Hoffmann, dri-devel
  Cc: Alex Deucher, David Airlie, Daniel Vetter, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Lucas Stach, Russell King,
	Christian Gmeiner, Rob Clark, Sean Paul, Ben Skeggs, Sandy Huang,
	Heiko Stübner, Thierry Reding, Jonathan Hunter,
	Oleksandr Andrushchenko, open list:RADEON and AMDGPU DRM DRIVERS,
	open list, moderated list:DRM DRIVERS FOR VIVANTE GPU IP,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS,
	moderated list:ARM/Rockchip SoC support,
	open list:ARM/Rockchip SoC support,
	open list:DRM DRIVERS FOR NVIDIA TEGRA,
	moderated list:DRM DRIVERS FOR XEN

Am 18.08.20 um 09:48 schrieb Gerd Hoffmann:
> Add max_segment argument to drm_prime_pages_to_sg().  When set pass it
> through to the __sg_alloc_table_from_pages() call, otherwise use
> SCATTERLIST_MAX_SEGMENT.
>
> Also add max_segment field to gem objects and pass it to
> drm_prime_pages_to_sg() calls in drivers and helpers.
>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>

I'm missing an explanation of why this should be useful (it certainly is).

And the maximum segment size seems misplaced in the GEM object. This is 
usually a property of the device or even completely constant.

Christian.




* Re: [PATCH 1/2] drm: allow limiting the scatter list size.
  2020-08-18  7:57   ` Christian König
@ 2020-08-18  8:27     ` Gerd Hoffmann
  2020-08-18  8:31       ` Christian König
  0 siblings, 1 reply; 6+ messages in thread
From: Gerd Hoffmann @ 2020-08-18  8:27 UTC (permalink / raw)
  To: Christian König
  Cc: dri-devel, David Airlie,
	open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS, Sandy Huang,
	Thierry Reding, Oleksandr Andrushchenko,
	open list:RADEON and AMDGPU DRM DRIVERS, Jonathan Hunter,
	open list:ARM/Rockchip SoC support, Ben Skeggs, Russell King,
	moderated list:DRM DRIVERS FOR XEN,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	moderated list:DRM DRIVERS FOR VIVANTE GPU IP,
	open list:DRM DRIVERS FOR NVIDIA TEGRA, Sean Paul,
	moderated list:ARM/Rockchip SoC support, open list,
	Thomas Zimmermann, Alex Deucher,
	open list:DRM DRIVER FOR MSM ADRENO GPU

On Tue, Aug 18, 2020 at 09:57:59AM +0200, Christian König wrote:
> Am 18.08.20 um 09:48 schrieb Gerd Hoffmann:
> > Add max_segment argument to drm_prime_pages_to_sg().  When set pass it
> > through to the __sg_alloc_table_from_pages() call, otherwise use
> > SCATTERLIST_MAX_SEGMENT.
> > 
> > Also add max_segment field to gem objects and pass it to
> > drm_prime_pages_to_sg() calls in drivers and helpers.
> > 
> > Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> 
> I'm missing an explanation why this should be useful (it certainly is).

virtio-gpu needs this to work properly with SEV (see patch 2/2 of this
series).

> And the maximum segment size seems misplaced in the GEM object. This is
> usually a property of the device or even completely constant.

Placing it in drm_device instead would indeed work for virtio-gpu, so I
guess you are suggesting that instead?

take care,
  Gerd



* Re: [PATCH 1/2] drm: allow limiting the scatter list size.
  2020-08-18  8:27     ` Gerd Hoffmann
@ 2020-08-18  8:31       ` Christian König
  2020-08-18  9:00         ` Gerd Hoffmann
  0 siblings, 1 reply; 6+ messages in thread
From: Christian König @ 2020-08-18  8:31 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: dri-devel, David Airlie,
	open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS, Sandy Huang,
	Thierry Reding, Oleksandr Andrushchenko,
	open list:RADEON and AMDGPU DRM DRIVERS, Jonathan Hunter,
	open list:ARM/Rockchip SoC support, Ben Skeggs, Russell King,
	moderated list:DRM DRIVERS FOR XEN,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	moderated list:DRM DRIVERS FOR VIVANTE GPU IP,
	open list:DRM DRIVERS FOR NVIDIA TEGRA, Sean Paul,
	moderated list:ARM/Rockchip SoC support, open list,
	Thomas Zimmermann, Alex Deucher,
	open list:DRM DRIVER FOR MSM ADRENO GPU

Am 18.08.20 um 10:27 schrieb Gerd Hoffmann:
> On Tue, Aug 18, 2020 at 09:57:59AM +0200, Christian König wrote:
>> Am 18.08.20 um 09:48 schrieb Gerd Hoffmann:
>>> Add max_segment argument to drm_prime_pages_to_sg().  When set pass it
>>> through to the __sg_alloc_table_from_pages() call, otherwise use
>>> SCATTERLIST_MAX_SEGMENT.
>>>
>>> Also add max_segment field to gem objects and pass it to
>>> drm_prime_pages_to_sg() calls in drivers and helpers.
>>>
>>> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
>> I'm missing an explanation why this should be useful (it certainly is).
> virtio-gpu needs this to work properly with SEV (see patch 2/2 of this
> series).

Yeah, that's the problem: patch 2/2 never showed up here :)

>> And the maximum segment size seems misplaced in the GEM object. This is
>> usually a property of the device or even completely constant.
> Placing it in drm_device instead would indeed work for virtio-gpu, so I
> guess you are suggesting that instead?

That is probably the best approach, yes.

For Intel and AMD it could even be global/constant, but it certainly 
doesn't need to be kept around for each buffer.

Christian.

>
> take care,
>    Gerd
>



* Re: [PATCH 1/2] drm: allow limiting the scatter list size.
  2020-08-18  8:31       ` Christian König
@ 2020-08-18  9:00         ` Gerd Hoffmann
  0 siblings, 0 replies; 6+ messages in thread
From: Gerd Hoffmann @ 2020-08-18  9:00 UTC (permalink / raw)
  To: Christian König
  Cc: dri-devel, David Airlie,
	open list:DRM DRIVER FOR NVIDIA GEFORCE/QUADRO GPUS, Sandy Huang,
	Thierry Reding, Oleksandr Andrushchenko,
	open list:RADEON and AMDGPU DRM DRIVERS, Jonathan Hunter,
	open list:ARM/Rockchip SoC support, Ben Skeggs, Russell King,
	moderated list:DRM DRIVERS FOR XEN,
	open list:DRM DRIVER FOR MSM ADRENO GPU,
	moderated list:DRM DRIVERS FOR VIVANTE GPU IP,
	open list:DRM DRIVERS FOR NVIDIA TEGRA, Sean Paul,
	moderated list:ARM/Rockchip SoC support, open list,
	Thomas Zimmermann, Alex Deucher,
	open list:DRM DRIVER FOR MSM ADRENO GPU

  Hi,

> > > I'm missing an explanation why this should be useful (it certainly is).
> > virtio-gpu needs this to work properly with SEV (see patch 2/2 of this
> > series).
> 
> Yeah, that's the problem patch 2/2 never showed up here :)

The list should have everything.

Your inbox probably has 1/2 only because 2/2 doesn't touch AMD code and
'git send-email' evaluates sendemail.cccmd (pointing to
get_maintainer.pl) for each patch individually.

I've found this behavior confusing at times before.  Is there some way
to send the whole series to everybody?  Or at least the cover letter?
The git-send-email manpage doesn't give a clue :(

> > Placing it in drm_device instead would indeed work for virtio-gpu, so I
> > guess you are suggesting that instead?
> 
> That is probably the best approach, yes.

Ok, I'll go that route then.

thanks,
  Gerd


