From: Boris Brezillon <boris.brezillon@collabora.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: "DRI Development" <dri-devel@lists.freedesktop.org>,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
	"Steven Price" <steven.price@arm.com>,
	linaro-mm-sig@lists.linaro.org,
	"Luben Tuikov" <luben.tuikov@amd.com>,
	"Alyssa Rosenzweig" <alyssa.rosenzweig@collabora.com>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	linux-media@vger.kernel.org, "Lee Jones" <lee.jones@linaro.org>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH 05/15] drm/panfrost: Use xarray and helpers for dependency tracking
Date: Wed, 23 Jun 2021 18:51:21 +0200	[thread overview]
Message-ID: <20210623185121.73437307@collabora.com> (raw)
In-Reply-To: <20210622165511.3169559-6-daniel.vetter@ffwll.ch>

On Tue, 22 Jun 2021 18:55:01 +0200
Daniel Vetter <daniel.vetter@ffwll.ch> wrote:

> More consistency and prep work for the next patch.
> 
> Aside: I wonder whether we shouldn't just move this entire xarray
> business into the scheduler so that not everyone has to reinvent the
> same wheels. Cc'ing some scheduler people for this too.
> 
> v2: Correctly handle sched_lock since Lucas pointed out it's needed.
> 
> v3: Rebase, dma_resv_get_excl_unlocked got renamed
> 
> v4: Don't leak job references on failure (Steven).

Hehe, I had pretty much the same patch here [1].

Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>

[1] https://patchwork.kernel.org/project/dri-devel/patch/20210311092539.2405596-3-boris.brezillon@collabora.com/

> 
> Cc: Lucas Stach <l.stach@pengutronix.de>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: Luben Tuikov <luben.tuikov@amd.com>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Lee Jones <lee.jones@linaro.org>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_drv.c | 41 +++++++---------
>  drivers/gpu/drm/panfrost/panfrost_job.c | 65 +++++++++++--------------
>  drivers/gpu/drm/panfrost/panfrost_job.h |  8 ++-
>  3 files changed, 49 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
> index 075ec0ef746c..3ee828f1e7a5 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_drv.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
> @@ -138,12 +138,6 @@ panfrost_lookup_bos(struct drm_device *dev,
>  	if (!job->bo_count)
>  		return 0;
>  
> -	job->implicit_fences = kvmalloc_array(job->bo_count,
> -				  sizeof(struct dma_fence *),
> -				  GFP_KERNEL | __GFP_ZERO);
> -	if (!job->implicit_fences)
> -		return -ENOMEM;
> -
>  	ret = drm_gem_objects_lookup(file_priv,
>  				     (void __user *)(uintptr_t)args->bo_handles,
>  				     job->bo_count, &job->bos);
> @@ -174,7 +168,7 @@ panfrost_lookup_bos(struct drm_device *dev,
>  }
>  
>  /**
> - * panfrost_copy_in_sync() - Sets up job->in_fences[] with the sync objects
> + * panfrost_copy_in_sync() - Sets up job->deps with the sync objects
>   * referenced by the job.
>   * @dev: DRM device
>   * @file_priv: DRM file for this fd
> @@ -194,22 +188,14 @@ panfrost_copy_in_sync(struct drm_device *dev,
>  {
>  	u32 *handles;
>  	int ret = 0;
> -	int i;
> +	int i, in_fence_count;
>  
> -	job->in_fence_count = args->in_sync_count;
> +	in_fence_count = args->in_sync_count;
>  
> -	if (!job->in_fence_count)
> +	if (!in_fence_count)
>  		return 0;
>  
> -	job->in_fences = kvmalloc_array(job->in_fence_count,
> -					sizeof(struct dma_fence *),
> -					GFP_KERNEL | __GFP_ZERO);
> -	if (!job->in_fences) {
> -		DRM_DEBUG("Failed to allocate job in fences\n");
> -		return -ENOMEM;
> -	}
> -
> -	handles = kvmalloc_array(job->in_fence_count, sizeof(u32), GFP_KERNEL);
> +	handles = kvmalloc_array(in_fence_count, sizeof(u32), GFP_KERNEL);
>  	if (!handles) {
>  		ret = -ENOMEM;
>  		DRM_DEBUG("Failed to allocate incoming syncobj handles\n");
> @@ -218,16 +204,23 @@ panfrost_copy_in_sync(struct drm_device *dev,
>  
>  	if (copy_from_user(handles,
>  			   (void __user *)(uintptr_t)args->in_syncs,
> -			   job->in_fence_count * sizeof(u32))) {
> +			   in_fence_count * sizeof(u32))) {
>  		ret = -EFAULT;
>  		DRM_DEBUG("Failed to copy in syncobj handles\n");
>  		goto fail;
>  	}
>  
> -	for (i = 0; i < job->in_fence_count; i++) {
> +	for (i = 0; i < in_fence_count; i++) {
> +		struct dma_fence *fence;
> +
>  		ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0,
> -					     &job->in_fences[i]);
> -		if (ret == -EINVAL)
> +					     &fence);
> +		if (ret)
> +			goto fail;
> +
> +		ret = drm_gem_fence_array_add(&job->deps, fence);
> +
> +		if (ret)
>  			goto fail;
>  	}
>  
> @@ -265,6 +258,8 @@ static int panfrost_ioctl_submit(struct drm_device *dev, void *data,
>  
>  	kref_init(&job->refcount);
>  
> +	xa_init_flags(&job->deps, XA_FLAGS_ALLOC);
> +
>  	job->pfdev = pfdev;
>  	job->jc = args->jc;
>  	job->requirements = args->requirements;
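
For context: drm_gem_fence_array_add(), the helper this patch switches to, stores the fence in job->deps and deduplicates per dma_fence context, so the array scales with the number of engines involved rather than the number of BOs. A simplified sketch of what the drm_gem.c helper did around this time (paraphrased and condensed, not the verbatim kernel source):

int drm_gem_fence_array_add(struct xarray *fence_array,
			    struct dma_fence *fence)
{
	struct dma_fence *entry;
	unsigned long index;
	u32 id = 0;
	int ret;

	if (!fence)
		return 0;

	/* Already depending on a fence from this context? Keep only
	 * the later of the two and drop the reference to the other. */
	xa_for_each(fence_array, index, entry) {
		if (entry->context != fence->context)
			continue;

		if (dma_fence_is_later(fence, entry)) {
			dma_fence_put(entry);
			xa_store(fence_array, index, fence, GFP_KERNEL);
		} else {
			dma_fence_put(fence);
		}
		return 0;
	}

	/* New context: store at the lowest free index. */
	ret = xa_alloc(fence_array, &id, fence, xa_limit_32b, GFP_KERNEL);
	if (ret != 0)
		dma_fence_put(fence);

	return ret;
}

Either way the fence reference is consumed by the helper, which is why the error paths in the hunk above can simply goto fail without putting the fence again.
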
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 38f8580c19f1..71cd43fa1b36 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -196,14 +196,21 @@ static void panfrost_job_hw_submit(struct panfrost_job *job, int js)
>  	job_write(pfdev, JS_COMMAND_NEXT(js), JS_COMMAND_START);
>  }
>  
> -static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
> -					   int bo_count,
> -					   struct dma_fence **implicit_fences)
> +static int panfrost_acquire_object_fences(struct drm_gem_object **bos,
> +					  int bo_count,
> +					  struct xarray *deps)
>  {
> -	int i;
> +	int i, ret;
>  
> -	for (i = 0; i < bo_count; i++)
> -		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
> +	for (i = 0; i < bo_count; i++) {
> +		struct dma_fence *fence = dma_resv_get_excl_unlocked(bos[i]->resv);
> +
> +		ret = drm_gem_fence_array_add(deps, fence);
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
>  }
>  
>  static void panfrost_attach_object_fences(struct drm_gem_object **bos,
> @@ -240,10 +247,14 @@ int panfrost_job_push(struct panfrost_job *job)
>  
>  	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
>  
> -	kref_get(&job->refcount); /* put by scheduler job completion */
> +	ret = panfrost_acquire_object_fences(job->bos, job->bo_count,
> +					     &job->deps);
> +	if (ret) {
> +		mutex_unlock(&pfdev->sched_lock);
> +		goto unlock;
> +	}
>  
> -	panfrost_acquire_object_fences(job->bos, job->bo_count,
> -				       job->implicit_fences);
> +	kref_get(&job->refcount); /* put by scheduler job completion */
>  
>  	drm_sched_entity_push_job(&job->base, entity);
>  
> @@ -262,18 +273,15 @@ static void panfrost_job_cleanup(struct kref *ref)
>  {
>  	struct panfrost_job *job = container_of(ref, struct panfrost_job,
>  						refcount);
> +	struct dma_fence *fence;
> +	unsigned long index;
>  	unsigned int i;
>  
> -	if (job->in_fences) {
> -		for (i = 0; i < job->in_fence_count; i++)
> -			dma_fence_put(job->in_fences[i]);
> -		kvfree(job->in_fences);
> -	}
> -	if (job->implicit_fences) {
> -		for (i = 0; i < job->bo_count; i++)
> -			dma_fence_put(job->implicit_fences[i]);
> -		kvfree(job->implicit_fences);
> +	xa_for_each(&job->deps, index, fence) {
> +		dma_fence_put(fence);
>  	}
> +	xa_destroy(&job->deps);
> +
>  	dma_fence_put(job->done_fence);
>  	dma_fence_put(job->render_done_fence);
>  
> @@ -316,26 +324,9 @@ static struct dma_fence *panfrost_job_dependency(struct drm_sched_job *sched_job
>  						 struct drm_sched_entity *s_entity)
>  {
>  	struct panfrost_job *job = to_panfrost_job(sched_job);
> -	struct dma_fence *fence;
> -	unsigned int i;
> -
> -	/* Explicit fences */
> -	for (i = 0; i < job->in_fence_count; i++) {
> -		if (job->in_fences[i]) {
> -			fence = job->in_fences[i];
> -			job->in_fences[i] = NULL;
> -			return fence;
> -		}
> -	}
>  
> -	/* Implicit fences, max. one per BO */
> -	for (i = 0; i < job->bo_count; i++) {
> -		if (job->implicit_fences[i]) {
> -			fence = job->implicit_fences[i];
> -			job->implicit_fences[i] = NULL;
> -			return fence;
> -		}
> -	}
> +	if (!xa_empty(&job->deps))
> +		return xa_erase(&job->deps, job->last_dep++);
>  
>  	return NULL;
>  }
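
The xa_erase(&job->deps, job->last_dep++) trick in panfrost_job_dependency() works because xa_alloc() on an XA_FLAGS_ALLOC array always hands out the lowest free index, so fences added back to back land at 0, 1, 2, ... and can be drained with a plain cursor. A standalone sketch of the pattern (illustrative only, not driver code; error handling elided):

#include <linux/dma-fence.h>
#include <linux/xarray.h>

static void deps_drain_example(struct dma_fence *fence_a,
			       struct dma_fence *fence_b)
{
	struct xarray deps;
	unsigned long next = 0;
	u32 id;

	xa_init_flags(&deps, XA_FLAGS_ALLOC);
	xa_alloc(&deps, &id, fence_a, xa_limit_32b, GFP_KERNEL); /* id == 0 */
	xa_alloc(&deps, &id, fence_b, xa_limit_32b, GFP_KERNEL); /* id == 1 */

	/* The scheduler calls the dependency hook once per fence;
	 * this loop stands in for those repeated calls. */
	while (!xa_empty(&deps)) {
		struct dma_fence *fence = xa_erase(&deps, next++);

		dma_fence_put(fence); /* fence_a first, then fence_b */
	}
	xa_destroy(&deps);
}

Note that drm_gem_fence_array_add() keeps the indices dense even when it deduplicates, because the same-context case replaces the entry in place with xa_store() rather than allocating a new slot.
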
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.h b/drivers/gpu/drm/panfrost/panfrost_job.h
> index bbd3ba97ff67..82306a03b57e 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.h
> @@ -19,9 +19,9 @@ struct panfrost_job {
>  	struct panfrost_device *pfdev;
>  	struct panfrost_file_priv *file_priv;
>  
> -	/* Optional fences userspace can pass in for the job to depend on. */
> -	struct dma_fence **in_fences;
> -	u32 in_fence_count;
> +	/* Contains both explicit and implicit fences */
> +	struct xarray deps;
> +	unsigned long last_dep;
>  
>  	/* Fence to be signaled by IRQ handler when the job is complete. */
>  	struct dma_fence *done_fence;
> @@ -30,8 +30,6 @@ struct panfrost_job {
>  	__u32 requirements;
>  	__u32 flush_id;
>  
> -	/* Exclusive fences we have taken from the BOs to wait for */
> -	struct dma_fence **implicit_fences;
>  	struct panfrost_gem_mapping **mappings;
>  	struct drm_gem_object **bos;
>  	u32 bo_count;
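
The aside in the commit message, about moving this xarray business into the scheduler so that not everyone reinvents the same wheels, would boil down to something like the sketch below. This is purely hypothetical here: drm_sched had no such helpers as of this thread, and every name in it is illustrative.

/* Hypothetical: dependency tracking owned by drm_sched_job itself,
 * so drivers stop open-coding the same xarray dance. */
struct drm_sched_job {
	/* ... existing members ... */
	struct xarray		dependencies;
	unsigned long		last_dependency;
};

static int drm_sched_job_add_dependency(struct drm_sched_job *job,
					struct dma_fence *fence)
{
	/* Same context-deduplicating store that panfrost uses. */
	return drm_gem_fence_array_add(&job->dependencies, fence);
}

/* Would replace per-driver ->dependency() callbacks like
 * panfrost_job_dependency() above. */
static struct dma_fence *
drm_sched_job_dependency(struct drm_sched_job *job)
{
	if (!xa_empty(&job->dependencies))
		return xa_erase(&job->dependencies, job->last_dependency++);

	return NULL;
}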


Thread overview: 175+ messages
2021-06-22 16:54 [PATCH 00/15] implicit fencing/dma-resv rules for shared buffers Daniel Vetter
2021-06-22 16:54 ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:54 ` [PATCH 01/15] dma-resv: Fix kerneldoc Daniel Vetter
2021-06-22 16:54   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:54   ` Daniel Vetter
2021-06-22 18:19   ` Alex Deucher
2021-06-22 18:19     ` [Intel-gfx] " Alex Deucher
2021-06-22 18:19     ` Alex Deucher
2021-06-22 18:49   ` Sam Ravnborg
2021-06-22 18:49     ` [Intel-gfx] " Sam Ravnborg
2021-06-22 19:19     ` Daniel Vetter
2021-06-22 19:19       ` [Intel-gfx] " Daniel Vetter
2021-06-22 19:19       ` Daniel Vetter
2021-06-23  8:31   ` Christian König
2021-06-23  8:31     ` [Intel-gfx] " Christian König
2021-06-23  8:31     ` Christian König
2021-06-23 15:15     ` Daniel Vetter
2021-06-23 15:15       ` [Intel-gfx] " Daniel Vetter
2021-06-23 15:15       ` Daniel Vetter
2021-06-22 16:54 ` [PATCH 02/15] dma-buf: Switch to inline kerneldoc Daniel Vetter
2021-06-22 16:54   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:54   ` Daniel Vetter
2021-06-22 18:24   ` Alex Deucher
2021-06-22 18:24     ` [Intel-gfx] " Alex Deucher
2021-06-22 18:24     ` Alex Deucher
2021-06-22 19:01   ` Sam Ravnborg
2021-06-22 19:01     ` [Intel-gfx] " Sam Ravnborg
2021-06-22 19:21     ` Daniel Vetter
2021-06-22 19:21       ` [Intel-gfx] " Daniel Vetter
2021-06-22 19:21       ` Daniel Vetter
2021-06-23  8:32   ` Christian König
2021-06-23  8:32     ` [Intel-gfx] " Christian König
2021-06-23  8:32     ` Christian König
2021-06-23 16:17   ` [PATCH] " Daniel Vetter
2021-06-23 16:17     ` [Intel-gfx] " Daniel Vetter
2021-06-23 16:17     ` Daniel Vetter
2021-06-23 17:33     ` Sam Ravnborg
2021-06-23 17:33       ` [Intel-gfx] " Sam Ravnborg
2021-06-22 16:54 ` [PATCH 03/15] dma-buf: Document dma-buf implicit fencing/resv fencing rules Daniel Vetter
2021-06-22 16:54   ` [Intel-gfx] " Daniel Vetter
2021-06-23  8:41   ` Christian König
2021-06-23  8:41     ` [Intel-gfx] " Christian König
2021-06-23 16:19   ` [PATCH] " Daniel Vetter
2021-06-23 16:19     ` [Intel-gfx] " Daniel Vetter
2021-06-24  6:59     ` Dave Airlie
2021-06-24  6:59       ` [Intel-gfx] " Dave Airlie
2021-06-24 11:08     ` [Mesa-dev] " Daniel Stone
2021-06-24 11:08       ` [Intel-gfx] " Daniel Stone
2021-06-24 11:23       ` Daniel Vetter
2021-06-24 11:23         ` [Intel-gfx] " Daniel Vetter
2021-06-24 12:48     ` Daniel Vetter
2021-06-24 12:52     ` Daniel Vetter
2021-06-22 16:55 ` [PATCH 04/15] drm/panfrost: Shrink sched_lock Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-23 16:52   ` Boris Brezillon
2021-06-23 16:52     ` [Intel-gfx] " Boris Brezillon
2021-06-22 16:55 ` [PATCH 05/15] drm/panfrost: Use xarray and helpers for dependency tracking Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-23 16:51   ` Boris Brezillon [this message]
2021-06-23 16:51     ` [Intel-gfx] " Boris Brezillon
2021-06-23 16:51     ` Boris Brezillon
2021-06-22 16:55 ` [PATCH 06/15] drm/panfrost: Fix implicit sync Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-23 16:47   ` Boris Brezillon
2021-06-23 16:47     ` [Intel-gfx] " Boris Brezillon
2021-06-23 16:47     ` Boris Brezillon
2021-06-23 19:17     ` Daniel Vetter
2021-06-23 19:17       ` [Intel-gfx] " Daniel Vetter
2021-06-23 19:17       ` Daniel Vetter
2021-06-22 16:55 ` [PATCH 07/15] drm/atomic-helper: make drm_gem_plane_helper_prepare_fb the default Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 19:10   ` Sam Ravnborg
2021-06-22 19:10     ` [Intel-gfx] " Sam Ravnborg
2021-06-22 20:20     ` Daniel Vetter
2021-06-22 20:20       ` [Intel-gfx] " Daniel Vetter
2021-06-23 15:39       ` Sam Ravnborg
2021-06-23 15:39         ` [Intel-gfx] " Sam Ravnborg
2021-06-23 16:22   ` [PATCH] " Daniel Vetter
2021-06-23 16:22     ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55 ` [PATCH 08/15] drm/<driver>: drm_gem_plane_helper_prepare_fb is now " Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-24  8:32   ` Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-24  8:32     ` [Intel-gfx] " Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-24  8:32     ` Philipp Zabel
2021-06-22 16:55 ` [PATCH 09/15] drm/armada: Remove prepare/cleanup_fb hooks Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-24 12:46   ` Maxime Ripard
2021-06-24 12:46     ` [Intel-gfx] " Maxime Ripard
2021-06-22 16:55 ` [PATCH 10/15] drm/vram-helpers: Create DRM_GEM_VRAM_PLANE_HELPER_FUNCS Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-24  7:38   ` Thomas Zimmermann
2021-06-24  7:38     ` [Intel-gfx] " Thomas Zimmermann
2021-06-24  7:46   ` Thomas Zimmermann
2021-06-24  7:46     ` [Intel-gfx] " Thomas Zimmermann
2021-06-24 13:39     ` Daniel Vetter
2021-06-24 13:39       ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55 ` [PATCH 11/15] drm/omap: Follow implicit fencing in prepare_fb Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55 ` [PATCH 12/15] drm/simple-helper: drm_gem_simple_display_pipe_prepare_fb as default Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 19:15   ` Sam Ravnborg
2021-06-22 19:15     ` [Intel-gfx] " Sam Ravnborg
2021-06-23 16:24   ` [PATCH] " Daniel Vetter
2021-06-23 16:24     ` [Intel-gfx] " Daniel Vetter
2021-06-23 17:34     ` Sam Ravnborg
2021-06-23 17:34       ` [Intel-gfx] " Sam Ravnborg
2021-06-22 16:55 ` [PATCH 13/15] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is the default Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 16:55   ` Daniel Vetter
2021-06-22 16:55 ` [PATCH 14/15] drm/gem: Tiny kernel clarification for drm_gem_fence_array_add Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-23  8:42   ` Christian König
2021-06-23  8:42     ` [Intel-gfx] " Christian König
2021-06-24 12:41     ` Daniel Vetter
2021-06-24 12:41       ` [Intel-gfx] " Daniel Vetter
2021-06-24 12:48       ` Christian König
2021-06-24 12:48         ` [Intel-gfx] " Christian König
2021-06-24 13:32         ` Daniel Vetter
2021-06-24 13:32           ` [Intel-gfx] " Daniel Vetter
2021-06-24 13:35           ` Christian König
2021-06-24 13:35             ` [Intel-gfx] " Christian König
2021-06-24 13:41             ` Daniel Vetter
2021-06-24 13:41               ` [Intel-gfx] " Daniel Vetter
2021-06-24 13:45               ` Christian König
2021-06-24 13:45                 ` [Intel-gfx] " Christian König
2021-06-22 16:55 ` [PATCH 15/15] RFC: drm/amdgpu: Implement a proper implicit fencing uapi Daniel Vetter
2021-06-22 16:55   ` [Intel-gfx] " Daniel Vetter
2021-06-22 23:56   ` kernel test robot
2021-06-23  9:45   ` Bas Nieuwenhuizen
2021-06-23  9:45     ` [Intel-gfx] " Bas Nieuwenhuizen
2021-06-23 12:18     ` Daniel Vetter
2021-06-23 12:18       ` [Intel-gfx] " Daniel Vetter
2021-06-23 12:59       ` Christian König
2021-06-23 12:59         ` [Intel-gfx] " Christian König
2021-06-23 13:38         ` Bas Nieuwenhuizen
2021-06-23 13:38           ` [Intel-gfx] " Bas Nieuwenhuizen
2021-06-23 13:44           ` Christian König
2021-06-23 13:44             ` [Intel-gfx] " Christian König
2021-06-23 13:49             ` Daniel Vetter
2021-06-23 13:49               ` [Intel-gfx] " Daniel Vetter
2021-06-23 14:02               ` Christian König
2021-06-23 14:02                 ` [Intel-gfx] " Christian König
2021-06-23 14:50                 ` Daniel Vetter
2021-06-23 14:50                   ` [Intel-gfx] " Daniel Vetter
2021-06-23 14:58                   ` Bas Nieuwenhuizen
2021-06-23 14:58                     ` [Intel-gfx] " Bas Nieuwenhuizen
2021-06-23 15:03                     ` Daniel Vetter
2021-06-23 15:03                       ` [Intel-gfx] " Daniel Vetter
2021-06-23 15:07                       ` Christian König
2021-06-23 15:07                         ` [Intel-gfx] " Christian König
2021-06-23 15:12                         ` Daniel Vetter
2021-06-23 15:12                           ` [Intel-gfx] " Daniel Vetter
2021-06-23 15:15                           ` Christian König
2021-06-23 15:15                             ` [Intel-gfx] " Christian König
2021-06-22 17:08 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for implicit fencing/dma-resv rules for shared buffers Patchwork
2021-06-22 17:11 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-06-22 17:38 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-06-22 19:12 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2021-06-23 17:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for implicit fencing/dma-resv rules for shared buffers (rev5) Patchwork
2021-06-23 17:07 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-06-23 17:35 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-06-23 21:04 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
