From: "Christian König" <ckoenig.leichtzumerken@gmail.com>
To: Lucas Stach <l.stach@pengutronix.de>,
	Eric Anholt <eric@anholt.net>,
	dri-devel@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 1/3] drm/v3d: Take a lock across GPU scheduler job creation and queuing.
Date: Wed, 6 Jun 2018 10:52:07 +0200
Message-ID: <6462aaec-a5f3-3a78-2eb1-fb24faa68e42@gmail.com>
In-Reply-To: <1528274797.26063.6.camel@pengutronix.de>

On 06.06.2018 at 10:46, Lucas Stach wrote:
> On Tuesday, 05.06.2018 at 12:03 -0700, Eric Anholt wrote:
>> Between creation and queueing of a job, you need to prevent any other
>> job from being created and queued.  Otherwise the scheduler's fences
>> may be signaled out of seqno order.
>>
>> Signed-off-by: Eric Anholt <eric@anholt.net>
>> Fixes: 57692c94dcbe ("drm/v3d: Introduce a new DRM driver for Broadcom V3D V3.x+")
>> ---
>>
>> ccing amd-gfx due to interaction of this series with the scheduler.
>>
>>   drivers/gpu/drm/v3d/v3d_drv.h |  5 +++++
>>   drivers/gpu/drm/v3d/v3d_gem.c | 11 +++++++++--
>>   2 files changed, 14 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
>> index a043ac3aae98..26005abd9c5d 100644
>> --- a/drivers/gpu/drm/v3d/v3d_drv.h
>> +++ b/drivers/gpu/drm/v3d/v3d_drv.h
>> @@ -85,6 +85,11 @@ struct v3d_dev {
>>   	 */
>>   	struct mutex reset_lock;
>>  
>> +	/* Lock taken when creating and pushing the GPU scheduler
>> +	 * jobs, to keep the sched-fence seqnos in order.
>> +	 */
>> +	struct mutex sched_lock;
>> +
>>   	struct {
>>   		u32 num_allocated;
>>   		u32 pages_allocated;
>> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
>> index b513f9189caf..9ea83bdb9a30 100644
>> --- a/drivers/gpu/drm/v3d/v3d_gem.c
>> +++ b/drivers/gpu/drm/v3d/v3d_gem.c
>> @@ -550,13 +550,16 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>>   	if (ret)
>>   		goto fail;
>>  
>> +	mutex_lock(&v3d->sched_lock);
>>   	if (exec->bin.start != exec->bin.end) {
>>   		ret = drm_sched_job_init(&exec->bin.base,
>>   					 &v3d->queue[V3D_BIN].sched,
>>   					 &v3d_priv->sched_entity[V3D_BIN],
>>   					 v3d_priv);
>> -		if (ret)
>> +		if (ret) {
>> +			mutex_unlock(&v3d->sched_lock);
>>   			goto fail_unreserve;
> I don't see any path where you would go to fail_unreserve with the
> mutex not yet locked, so you could just fold the mutex_unlock into this
> error path for a bit less code duplication.
>
> Otherwise this looks fine.

Yeah, agree that could be cleaned up.
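
Untested sketch of what Lucas suggests, assuming fail_unreserve is only
ever reached with sched_lock already held (the cleanup under the label
is elided here, since it isn't shown in the patch):

	mutex_lock(&v3d->sched_lock);
	...
	if (ret)
		goto fail_unreserve;	/* no per-branch unlock needed */
	...
	drm_sched_entity_push_job(&exec->render.base,
				  &v3d_priv->sched_entity[V3D_RENDER]);
	mutex_unlock(&v3d->sched_lock);
	...

fail_unreserve:
	mutex_unlock(&v3d->sched_lock);
	/* existing unreserve/cleanup continues here unchanged */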

I can't judge the correctness of the driver, but at least the scheduler 
handling looks good to me.
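
As a side note, the ordering problem this lock prevents can be modeled
entirely in userspace. A minimal sketch (my analogue, not the actual
scheduler code; build with cc -pthread): the seqno is assigned in the
"init" step and the run order in the "push" step, and one mutex across
both keeps them consistent:

  #include <pthread.h>
  #include <stdint.h>
  #include <stdio.h>

  #define NJOBS 8

  static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
  static uint64_t next_seqno;
  static uint64_t queue_order[NJOBS];
  static unsigned int queued;

  static void *submit_job(void *arg)
  {
          (void)arg;
          pthread_mutex_lock(&sched_lock);
          /* "job init": the fence seqno is fixed here... */
          uint64_t seqno = ++next_seqno;
          /* ..."job push": the run order is fixed here.  Without one
           * lock spanning both steps, a second submitter could take a
           * later seqno yet be queued first, i.e. fences would signal
           * out of seqno order.
           */
          queue_order[queued++] = seqno;
          pthread_mutex_unlock(&sched_lock);
          return NULL;
  }

  int main(void)
  {
          pthread_t t[NJOBS];
          int i;

          for (i = 0; i < NJOBS; i++)
                  pthread_create(&t[i], NULL, submit_job, NULL);
          for (i = 0; i < NJOBS; i++)
                  pthread_join(t[i], NULL);
          /* With the lock spanning both steps, seqnos print in order. */
          for (i = 0; i < NJOBS; i++)
                  printf("queue slot %d: seqno %llu\n", i,
                         (unsigned long long)queue_order[i]);
          return 0;
  }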

Regards,
Christian.

>
> Regards,
> Lucas
>
>> +		}
>>   
>>   		exec->bin_done_fence =
>>   			dma_fence_get(&exec->bin.base.s_fence->finished);
>> @@ -570,12 +573,15 @@ v3d_submit_cl_ioctl(struct drm_device *dev, void *data,
>>   				 &v3d->queue[V3D_RENDER].sched,
>>   				 &v3d_priv->sched_entity[V3D_RENDER],
>>   				 v3d_priv);
>> -	if (ret)
>> +	if (ret) {
>> +		mutex_unlock(&v3d->sched_lock);
>>   		goto fail_unreserve;
>> +	}
>>  
>>   	kref_get(&exec->refcount); /* put by scheduler job completion */
>>   	drm_sched_entity_push_job(&exec->render.base,
>>   				  &v3d_priv->sched_entity[V3D_RENDER]);
>> +	mutex_unlock(&v3d->sched_lock);
>>  
>>   	v3d_attach_object_fences(exec);
>>  
>> @@ -615,6 +621,7 @@ v3d_gem_init(struct drm_device *dev)
>>   	spin_lock_init(&v3d->job_lock);
>>   	mutex_init(&v3d->bo_lock);
>>   	mutex_init(&v3d->reset_lock);
>> +	mutex_init(&v3d->sched_lock);
>>  
>>   	/* Note: We don't allocate address 0.  Various bits of HW
>>   	 * treat 0 as special, such as the occlusion query counters