From: Lucas Stach <l.stach@pengutronix.de>
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
	DRI Development <dri-devel@lists.freedesktop.org>
Cc: Daniel Vetter <daniel.vetter@intel.com>,
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	Steven Price <steven.price@arm.com>
Subject: Re: [PATCH 02/11] drm/panfrost: Remove sched_lock
Date: Fri, 21 May 2021 11:32:48 +0200	[thread overview]
Message-ID: <066b1c490a1251113fbcf7f2270654be25be4f29.camel@pengutronix.de> (raw)
In-Reply-To: <20210521090959.1663703-2-daniel.vetter@ffwll.ch>

On Friday, 2021-05-21 at 11:09 +0200, Daniel Vetter wrote:
> The scheduler takes care of its own locking, don't worry. For
> everything else there's reservation locking on each bo.
> 
> So this seems to be entirely redundant and just a bit in the way.

I haven't read all the surrounding code, but this looks wrong at a
glance. You must hold a lock across drm_sched_job_init ->
drm_sched_entity_push_job, as the scheduler fences are initialized in
drm_sched_job_init. If there is no exclusive section across those two
function calls, jobs can end up queued with their fence seqnos not
monotonically increasing, which breaks all kinds of other stuff.
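
To make the hazard concrete, here's a rough sketch of one bad
interleaving (a hypothetical pseudo-trace, not actual driver code),
assuming two threads push jobs to the same entity with no shared lock:

	thread A: drm_sched_job_init(&job_a->base, entity, NULL);   /* fence seqno N */
	thread B: drm_sched_job_init(&job_b->base, entity, NULL);   /* fence seqno N + 1 */
	thread B: drm_sched_entity_push_job(&job_b->base, entity);  /* N + 1 queued first */
	thread A: drm_sched_entity_push_job(&job_a->base, entity);  /* N queued out of order */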

I don't see a reason to hold the lock across the reservation calls,
though.
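
So if sched_lock is kept but narrowed to just that critical section,
the reservation locking can stay outside of it. A minimal sketch of
panfrost_job_push along those lines (error paths abbreviated, the
fence bookkeeping between init and push elided):

	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
					&acquire_ctx);
	if (ret)
		return ret;

	/* Hold the lock only from fence init to queueing, so the
	 * scheduler fence seqnos stay ordered with queue order. */
	mutex_lock(&pfdev->sched_lock);

	ret = drm_sched_job_init(&job->base, entity, NULL);
	if (ret) {
		mutex_unlock(&pfdev->sched_lock);
		goto unlock;
	}

	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);

	drm_sched_entity_push_job(&job->base, entity);
	mutex_unlock(&pfdev->sched_lock);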

Regards,
Lucas

> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Rob Herring <robh@kernel.org>
> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
> ---
>  drivers/gpu/drm/panfrost/panfrost_device.c |  1 -
>  drivers/gpu/drm/panfrost/panfrost_device.h |  2 --
>  drivers/gpu/drm/panfrost/panfrost_job.c    | 13 ++-----------
>  3 files changed, 2 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.c b/drivers/gpu/drm/panfrost/panfrost_device.c
> index 125ed973feaa..23070c01c63f 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.c
> @@ -199,7 +199,6 @@ int panfrost_device_init(struct panfrost_device *pfdev)
>  	int err;
>  	struct resource *res;
>  
> -	mutex_init(&pfdev->sched_lock);
>  	INIT_LIST_HEAD(&pfdev->scheduled_jobs);
>  	INIT_LIST_HEAD(&pfdev->as_lru_list);
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h b/drivers/gpu/drm/panfrost/panfrost_device.h
> index 597cf1459b0a..7519903bb031 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_device.h
> +++ b/drivers/gpu/drm/panfrost/panfrost_device.h
> @@ -105,8 +105,6 @@ struct panfrost_device {
>  
>  	struct panfrost_perfcnt *perfcnt;
>  
> -	struct mutex sched_lock;
> -
>  	struct {
>  		struct work_struct work;
>  		atomic_t pending;
> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
> index 6003cfeb1322..f5d39ee14ab5 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
> @@ -218,26 +218,19 @@ static void panfrost_attach_object_fences(struct drm_gem_object **bos,
>  
>  int panfrost_job_push(struct panfrost_job *job)
>  {
> -	struct panfrost_device *pfdev = job->pfdev;
>  	int slot = panfrost_job_get_slot(job);
>  	struct drm_sched_entity *entity = &job->file_priv->sched_entity[slot];
>  	struct ww_acquire_ctx acquire_ctx;
>  	int ret = 0;
>  
> -	mutex_lock(&pfdev->sched_lock);
> -
>  	ret = drm_gem_lock_reservations(job->bos, job->bo_count,
>  					    &acquire_ctx);
> -	if (ret) {
> -		mutex_unlock(&pfdev->sched_lock);
> +	if (ret)
>  		return ret;
> -	}
>  
>  	ret = drm_sched_job_init(&job->base, entity, NULL);
> -	if (ret) {
> -		mutex_unlock(&pfdev->sched_lock);
> +	if (ret)
>  		goto unlock;
> -	}
>  
>  	job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
>  
> @@ -248,8 +241,6 @@ int panfrost_job_push(struct panfrost_job *job)
>  
>  	drm_sched_entity_push_job(&job->base, entity);
>  
> -	mutex_unlock(&pfdev->sched_lock);
> -
>  	panfrost_attach_object_fences(job->bos, job->bo_count,
>  				      job->render_done_fence);
>  



