From: "Christian König" <christian.koenig@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "Emma Anholt" <emma@anholt.net>,
	"Adam Borowski" <kilobyte@angband.pl>,
	"David Airlie" <airlied@linux.ie>,
	"Viresh Kumar" <viresh.kumar@linaro.org>,
	"DRI Development" <dri-devel@lists.freedesktop.org>,
	"Sonny Jiang" <sonny.jiang@amd.com>,
	"Nirmoy Das" <nirmoy.das@amd.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	"Lee Jones" <lee.jones@linaro.org>,
	"Jack Zhang" <Jack.Zhang1@amd.com>,
	lima@lists.freedesktop.org,
	"Mauro Carvalho Chehab" <mchehab+huawei@kernel.org>,
	"Masahiro Yamada" <masahiroy@kernel.org>,
	"Steven Price" <steven.price@arm.com>,
	"Luben Tuikov" <luben.tuikov@amd.com>,
	"Alyssa Rosenzweig" <alyssa.rosenzweig@collabora.com>,
	"Sami Tolvanen" <samitolvanen@google.com>,
	"Russell King" <linux+etnaviv@armlinux.org.uk>,
	"Dave Airlie" <airlied@redhat.com>,
	"Dennis Li" <Dennis.Li@amd.com>, "Chen Li" <chenli@uniontech.com>,
	"Paul Menzel" <pmenzel@molgen.mpg.de>,
	"Kees Cook" <keescook@chromium.org>,
	"Marek Olšák" <marek.olsak@amd.com>,
	"Kevin Wang" <kevin1.wang@amd.com>,
	"The etnaviv authors" <etnaviv@lists.freedesktop.org>,
	"moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	"Nick Terrell" <terrelln@fb.com>,
	"Deepak R Varma" <mh12gx2825@gmail.com>,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Boris Brezillon" <boris.brezillon@collabora.com>,
	"Qiang Yu" <yuq825@gmail.com>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Tian Tao" <tiantao6@hisilicon.com>,
	"open list:DMA BUFFER SHARING FRAMEWORK"
	<linux-media@vger.kernel.org>
Subject: Re: [PATCH v2 01/11] drm/sched: Split drm_sched_job_init
Date: Thu, 8 Jul 2021 13:28:13 +0200
Message-ID: <3c8a29f4-07c0-63ce-703e-9d652534642d@amd.com>
In-Reply-To: <CAKMK7uE4H2nsAYSAQGB0R7YTHUFvfNmshE2Bqy0uSdHomPxo=A@mail.gmail.com>



On 08.07.21 at 13:20, Daniel Vetter wrote:
> On Thu, Jul 8, 2021 at 12:54 PM Christian König
> <christian.koenig@amd.com> wrote:
>> [SNIP]
>>>> As far as I know that's not completely correct. The rules around atomics I
>>>> once learned are:
>>>>
>>>> 1. Everything which modifies something is a write barrier.
>>>> 2. Everything which returns something is a read barrier.
>>>>
>>>> And I know a whole bunch of use cases where this is relied upon in the core
>>>> kernel, so I'm pretty sure that's correct.
>>> That's against what the doc says, and also it would mean stuff like
>>> atomic_read_acquire or smp_mb__after/before_atomic is completely pointless.
>>>
>>> On x86 you're right; anywhere else, where there's no total store ordering,
>>> you're wrong.
>> Good to know. I always thought that atomic_read_acquire() was just for
>> documentation purposes.
> Maybe you mixed it up with C++ atomics (which I think are now also in
> C)? Those are strongly ordered by default (you can get the weakly
> ordered kernel-style one too). It's a bit unfortunate that the default
> semantics are exactly opposite between kernel and userspace :-/

Yeah, that's most likely it.

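To make the difference concrete, here is a minimal, hedged sketch of the
pairing that the relaxed-by-default kernel atomic_t accessors require; the
producer/consumer names are invented for illustration, and C11/C++
std::atomic would give roughly this acquire/release ordering by default:

    /* Illustration only: kernel atomics are relaxed unless the
     * _acquire/_release (or smp_mb__before/after_atomic) variants are used.
     */
    static atomic_t ready;
    static int payload;

    static void producer(void)
    {
            payload = 42;
            /* release: orders the payload store before the flag store */
            atomic_set_release(&ready, 1);
    }

    static int consumer(void)
    {
            /* acquire: pairs with the release above; a plain atomic_read()
             * would not order the subsequent payload load
             */
            if (atomic_read_acquire(&ready))
                    return payload;
            return -1;
    }
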
>>> If there's code that relies on this it needs to be fixed and properly
>>> documented. I did go through the squeue code a bit, and might be better to
>>> just replace this with a core data structure.
>> Well the spsc was especially crafted for this use case and performed
>> quite a bit better than a doubly linked list.
> Yeah, a doubly linked list is awful.
>
>> Or what core data structure do you have in mind?
> Hm I thought there's a ready-made queue primitive, but there's just
> llist.h. Which I think is roughly what the scheduler queue also does.
> Minus the atomic_t for counting how many there are, and aside from the
> tracepoints I don't think we're using those anywhere, we just check
> for is_empty in the code (from a quick look only).

I think we just need to replace the atomic_read() with 
atomic_read_acquire() and the atomic_dec() with atomic_dec_return_release().

Apart from that everything should be working as far as I can see. And yes,
llist.h doesn't really do much differently, it just doesn't keep a tail
pointer.
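
A minimal sketch of what that could look like in spsc_queue.h (hedged: this
follows the existing helper and field names as far as I can see, the actual
patch may end up looking different):

    static inline int spsc_queue_count(struct spsc_queue *queue)
    {
            /* acquire pairs with the release in spsc_queue_pop() */
            return atomic_read_acquire(&queue->job_count);
    }

    /* ...and in spsc_queue_pop(), the plain decrement becomes: */
            atomic_dec_return_release(&queue->job_count);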

Christian.

> -Daniel
>
>> Christian.
>>
>>> -Daniel
>>>
>>>> In this case the write barrier is the atomic_dec() in spsc_queue_pop() and
>>>> the read barrier is the atomic_read() in spsc_queue_count().
>>>>
>>>> The READ_ONCE() is actually not even necessary as far as I can see.
>>>>
>>>> Christian.
>>>>
>>>>> -Daniel
>>>>>
>>>>>
>>>>>> atomic op, then it's a full barrier. So yeah you need more here. But
>>>>>> also since you only need a read barrier on one side, and a write
>>>>>> barrier on the other, you don't actually need a cpu barriers on x86.
>>>>>> And READ_ONCE gives you the compiler barrier on one side at least, I
>>>>>> haven't found it on the writer side yet.
>>>>>>
>>>>>>> But yes a comment would be really nice here. I had to think for a while
>>>>>>> why we don't need this as well.
>>>>>> I'm typing a patch, which after a night's sleep I realized has the
>>>>>> wrong barriers. And now I'm also typing some doc improvements for
>>>>>> drm_sched_entity and related functions.
>>>>>>
>>>>>>> Christian.
>>>>>>>
>>>>>>>> -Daniel
>>>>>>>>
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> -Daniel
>>>>>>>>>>
>>>>>>>>>>> Regards
>>>>>>>>>>> Christian.
>>>>>>>>>>>
>>>>>>>>>>>> -Daniel
>>>>>>>>>>>>
>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also improve the kerneldoc for this.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Acked-by: Steven Price <steven.price@arm.com> (v2)
>>>>>>>>>>>>>> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
>>>>>>>>>>>>>> Cc: Lucas Stach <l.stach@pengutronix.de>
>>>>>>>>>>>>>> Cc: Russell King <linux+etnaviv@armlinux.org.uk>
>>>>>>>>>>>>>> Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
>>>>>>>>>>>>>> Cc: Qiang Yu <yuq825@gmail.com>
>>>>>>>>>>>>>> Cc: Rob Herring <robh@kernel.org>
>>>>>>>>>>>>>> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
>>>>>>>>>>>>>> Cc: Steven Price <steven.price@arm.com>
>>>>>>>>>>>>>> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
>>>>>>>>>>>>>> Cc: David Airlie <airlied@linux.ie>
>>>>>>>>>>>>>> Cc: Daniel Vetter <daniel@ffwll.ch>
>>>>>>>>>>>>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>>>>>>>>>>>>> Cc: "Christian König" <christian.koenig@amd.com>
>>>>>>>>>>>>>> Cc: Masahiro Yamada <masahiroy@kernel.org>
>>>>>>>>>>>>>> Cc: Kees Cook <keescook@chromium.org>
>>>>>>>>>>>>>> Cc: Adam Borowski <kilobyte@angband.pl>
>>>>>>>>>>>>>> Cc: Nick Terrell <terrelln@fb.com>
>>>>>>>>>>>>>> Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
>>>>>>>>>>>>>> Cc: Paul Menzel <pmenzel@molgen.mpg.de>
>>>>>>>>>>>>>> Cc: Sami Tolvanen <samitolvanen@google.com>
>>>>>>>>>>>>>> Cc: Viresh Kumar <viresh.kumar@linaro.org>
>>>>>>>>>>>>>> Cc: Alex Deucher <alexander.deucher@amd.com>
>>>>>>>>>>>>>> Cc: Dave Airlie <airlied@redhat.com>
>>>>>>>>>>>>>> Cc: Nirmoy Das <nirmoy.das@amd.com>
>>>>>>>>>>>>>> Cc: Deepak R Varma <mh12gx2825@gmail.com>
>>>>>>>>>>>>>> Cc: Lee Jones <lee.jones@linaro.org>
>>>>>>>>>>>>>> Cc: Kevin Wang <kevin1.wang@amd.com>
>>>>>>>>>>>>>> Cc: Chen Li <chenli@uniontech.com>
>>>>>>>>>>>>>> Cc: Luben Tuikov <luben.tuikov@amd.com>
>>>>>>>>>>>>>> Cc: "Marek Olšák" <marek.olsak@amd.com>
>>>>>>>>>>>>>> Cc: Dennis Li <Dennis.Li@amd.com>
>>>>>>>>>>>>>> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
>>>>>>>>>>>>>> Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>>>>>>> Cc: Sonny Jiang <sonny.jiang@amd.com>
>>>>>>>>>>>>>> Cc: Boris Brezillon <boris.brezillon@collabora.com>
>>>>>>>>>>>>>> Cc: Tian Tao <tiantao6@hisilicon.com>
>>>>>>>>>>>>>> Cc: Jack Zhang <Jack.Zhang1@amd.com>
>>>>>>>>>>>>>> Cc: etnaviv@lists.freedesktop.org
>>>>>>>>>>>>>> Cc: lima@lists.freedesktop.org
>>>>>>>>>>>>>> Cc: linux-media@vger.kernel.org
>>>>>>>>>>>>>> Cc: linaro-mm-sig@lists.linaro.org
>>>>>>>>>>>>>> Cc: Emma Anholt <emma@anholt.net>
>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>         drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c   |  2 ++
>>>>>>>>>>>>>>         drivers/gpu/drm/amd/amdgpu/amdgpu_job.c  |  2 ++
>>>>>>>>>>>>>>         drivers/gpu/drm/etnaviv/etnaviv_sched.c  |  2 ++
>>>>>>>>>>>>>>         drivers/gpu/drm/lima/lima_sched.c        |  2 ++
>>>>>>>>>>>>>>         drivers/gpu/drm/panfrost/panfrost_job.c  |  2 ++
>>>>>>>>>>>>>>         drivers/gpu/drm/scheduler/sched_entity.c |  6 ++--
>>>>>>>>>>>>>>         drivers/gpu/drm/scheduler/sched_fence.c  | 17 +++++----
>>>>>>>>>>>>>>         drivers/gpu/drm/scheduler/sched_main.c   | 46 +++++++++++++++++++++---
>>>>>>>>>>>>>>         drivers/gpu/drm/v3d/v3d_gem.c            |  2 ++
>>>>>>>>>>>>>>         include/drm/gpu_scheduler.h              |  7 +++-
>>>>>>>>>>>>>>         10 files changed, 74 insertions(+), 14 deletions(-)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>>>>>>>>>>> index c5386d13eb4a..a4ec092af9a7 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>>>>>>>>>>> @@ -1226,6 +1226,8 @@ static int amdgpu_cs_submit(struct amdgpu_cs_parser *p,
>>>>>>>>>>>>>>             if (r)
>>>>>>>>>>>>>>                     goto error_unlock;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&job->base);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             /* No memory allocation is allowed while holding the notifier lock.
>>>>>>>>>>>>>>              * The lock is held until amdgpu_cs_submit is finished and fence is
>>>>>>>>>>>>>>              * added to BOs.
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>>>>>>>>>> index d33e6d97cc89..5ddb955d2315 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>>>>>>>>>>>>> @@ -170,6 +170,8 @@ int amdgpu_job_submit(struct amdgpu_job *job, struct drm_sched_entity *entity,
>>>>>>>>>>>>>>             if (r)
>>>>>>>>>>>>>>                     return r;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&job->base);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             *f = dma_fence_get(&job->base.s_fence->finished);
>>>>>>>>>>>>>>             amdgpu_job_free_resources(job);
>>>>>>>>>>>>>>             drm_sched_entity_push_job(&job->base, entity);
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>>>>>>>>> index feb6da1b6ceb..05f412204118 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>>>>>>>>>>>>>> @@ -163,6 +163,8 @@ int etnaviv_sched_push_job(struct drm_sched_entity *sched_entity,
>>>>>>>>>>>>>>             if (ret)
>>>>>>>>>>>>>>                     goto out_unlock;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&submit->sched_job);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             submit->out_fence = dma_fence_get(&submit->sched_job.s_fence->finished);
>>>>>>>>>>>>>>             submit->out_fence_id = idr_alloc_cyclic(&submit->gpu->fence_idr,
>>>>>>>>>>>>>>                                                     submit->out_fence, 0,
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
>>>>>>>>>>>>>> index dba8329937a3..38f755580507 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/lima/lima_sched.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>>>>>>>>>>>>>> @@ -129,6 +129,8 @@ int lima_sched_task_init(struct lima_sched_task *task,
>>>>>>>>>>>>>>                     return err;
>>>>>>>>>>>>>>             }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&task->base);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             task->num_bos = num_bos;
>>>>>>>>>>>>>>             task->vm = lima_vm_get(vm);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>>>>>>>>> index 71a72fb50e6b..2992dc85325f 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>>>>>>>>>>>>>> @@ -288,6 +288,8 @@ int panfrost_job_push(struct panfrost_job *job)
>>>>>>>>>>>>>>                     goto unlock;
>>>>>>>>>>>>>>             }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&job->base);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             job->render_done_fence = dma_fence_get(&job->base.s_fence->finished);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>             ret = panfrost_acquire_object_fences(job->bos, job->bo_count,
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>>>>>>>>>> index 79554aa4dbb1..f7347c284886 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>>>>>>>>>>>>> @@ -485,9 +485,9 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>>>>>>>>>>>>>>          * @sched_job: job to submit
>>>>>>>>>>>>>>          * @entity: scheduler entity
>>>>>>>>>>>>>>          *
>>>>>>>>>>>>>> - * Note: To guarantee that the order of insertion to queue matches
>>>>>>>>>>>>>> - * the job's fence sequence number this function should be
>>>>>>>>>>>>>> - * called with drm_sched_job_init under common lock.
>>>>>>>>>>>>>> + * Note: To guarantee that the order of insertion to queue matches the job's
>>>>>>>>>>>>>> + * fence sequence number this function should be called with drm_sched_job_arm()
>>>>>>>>>>>>>> + * under common lock.
>>>>>>>>>>>>>>          *
>>>>>>>>>>>>>>          * Returns 0 for success, negative error code otherwise.
>>>>>>>>>>>>>>          */
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>>>>>>>>>> index 69de2c76731f..c451ee9a30d7 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
>>>>>>>>>>>>>> @@ -90,7 +90,7 @@ static const char *drm_sched_fence_get_timeline_name(struct dma_fence *f)
>>>>>>>>>>>>>>          *
>>>>>>>>>>>>>>          * Free up the fence memory after the RCU grace period.
>>>>>>>>>>>>>>          */
>>>>>>>>>>>>>> -static void drm_sched_fence_free(struct rcu_head *rcu)
>>>>>>>>>>>>>> +void drm_sched_fence_free(struct rcu_head *rcu)
>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>>             struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
>>>>>>>>>>>>>>             struct drm_sched_fence *fence = to_drm_sched_fence(f);
>>>>>>>>>>>>>> @@ -152,11 +152,10 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f)
>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>         EXPORT_SYMBOL(to_drm_sched_fence);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
>>>>>>>>>>>>>> -                                            void *owner)
>>>>>>>>>>>>>> +struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
>>>>>>>>>>>>>> +                                           void *owner)
>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>>             struct drm_sched_fence *fence = NULL;
>>>>>>>>>>>>>> -     unsigned seq;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>             fence = kmem_cache_zalloc(sched_fence_slab, GFP_KERNEL);
>>>>>>>>>>>>>>             if (fence == NULL)
>>>>>>>>>>>>>> @@ -166,13 +165,19 @@ struct drm_sched_fence *drm_sched_fence_create(struct drm_sched_entity *entity,
>>>>>>>>>>>>>>             fence->sched = entity->rq->sched;
>>>>>>>>>>>>>>             spin_lock_init(&fence->lock);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     return fence;
>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> +void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>>>>>>>>>>>> +                       struct drm_sched_entity *entity)
>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>> +     unsigned seq;
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             seq = atomic_inc_return(&entity->fence_seq);
>>>>>>>>>>>>>>             dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
>>>>>>>>>>>>>>                            &fence->lock, entity->fence_context, seq);
>>>>>>>>>>>>>>             dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
>>>>>>>>>>>>>>                            &fence->lock, entity->fence_context + 1, seq);
>>>>>>>>>>>>>> -
>>>>>>>>>>>>>> -     return fence;
>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>         module_init(drm_sched_fence_slab_init);
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>>>>>> index 33c414d55fab..5e84e1500c32 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>>>>>>>>>>>>> @@ -48,9 +48,11 @@
>>>>>>>>>>>>>>         #include <linux/wait.h>
>>>>>>>>>>>>>>         #include <linux/sched.h>
>>>>>>>>>>>>>>         #include <linux/completion.h>
>>>>>>>>>>>>>> +#include <linux/dma-resv.h>
>>>>>>>>>>>>>>         #include <uapi/linux/sched/types.h>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>         #include <drm/drm_print.h>
>>>>>>>>>>>>>> +#include <drm/drm_gem.h>
>>>>>>>>>>>>>>         #include <drm/gpu_scheduler.h>
>>>>>>>>>>>>>>         #include <drm/spsc_queue.h>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> @@ -569,7 +571,6 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>         /**
>>>>>>>>>>>>>>          * drm_sched_job_init - init a scheduler job
>>>>>>>>>>>>>> - *
>>>>>>>>>>>>>>          * @job: scheduler job to init
>>>>>>>>>>>>>>          * @entity: scheduler entity to use
>>>>>>>>>>>>>>          * @owner: job owner for debugging
>>>>>>>>>>>>>> @@ -577,6 +578,9 @@ EXPORT_SYMBOL(drm_sched_resubmit_jobs_ext);
>>>>>>>>>>>>>>          * Refer to drm_sched_entity_push_job() documentation
>>>>>>>>>>>>>>          * for locking considerations.
>>>>>>>>>>>>>>          *
>>>>>>>>>>>>>> + * Drivers must make sure to call drm_sched_job_cleanup() if this function returns
>>>>>>>>>>>>>> + * successfully, even when @job is aborted before drm_sched_job_arm() is called.
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>          * Returns 0 for success, negative error code otherwise.
>>>>>>>>>>>>>>          */
>>>>>>>>>>>>>>         int drm_sched_job_init(struct drm_sched_job *job,
>>>>>>>>>>>>>> @@ -594,7 +598,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>>>>>>>>>>>>>             job->sched = sched;
>>>>>>>>>>>>>>             job->entity = entity;
>>>>>>>>>>>>>>             job->s_priority = entity->rq - sched->sched_rq;
>>>>>>>>>>>>>> -     job->s_fence = drm_sched_fence_create(entity, owner);
>>>>>>>>>>>>>> +     job->s_fence = drm_sched_fence_alloc(entity, owner);
>>>>>>>>>>>>>>             if (!job->s_fence)
>>>>>>>>>>>>>>                     return -ENOMEM;
>>>>>>>>>>>>>>             job->id = atomic64_inc_return(&sched->job_id_count);
>>>>>>>>>>>>>> @@ -606,13 +610,47 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>>>>>>>>>>>>>         EXPORT_SYMBOL(drm_sched_job_init);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>         /**
>>>>>>>>>>>>>> - * drm_sched_job_cleanup - clean up scheduler job resources
>>>>>>>>>>>>>> + * drm_sched_job_arm - arm a scheduler job for execution
>>>>>>>>>>>>>> + * @job: scheduler job to arm
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * This arms a scheduler job for execution. Specifically it initializes the
>>>>>>>>>>>>>> + * &drm_sched_job.s_fence of @job, so that it can be attached to struct dma_resv
>>>>>>>>>>>>>> + * or other places that need to track the completion of this job.
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * Refer to drm_sched_entity_push_job() documentation for locking
>>>>>>>>>>>>>> + * considerations.
>>>>>>>>>>>>>>          *
>>>>>>>>>>>>>> + * This can only be called if drm_sched_job_init() succeeded.
>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>> +void drm_sched_job_arm(struct drm_sched_job *job)
>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>> +     drm_sched_fence_init(job->s_fence, job->entity);
>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>> +EXPORT_SYMBOL(drm_sched_job_arm);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>> +/**
>>>>>>>>>>>>>> + * drm_sched_job_cleanup - clean up scheduler job resources
>>>>>>>>>>>>>>          * @job: scheduler job to clean up
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * Cleans up the resources allocated with drm_sched_job_init().
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * Drivers should call this from their error unwind code if @job is aborted
>>>>>>>>>>>>>> + * before drm_sched_job_arm() is called.
>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>> + * After that point of no return @job is committed to be executed by the
>>>>>>>>>>>>>> + * scheduler, and this function should be called from the
>>>>>>>>>>>>>> + * &drm_sched_backend_ops.free_job callback.
>>>>>>>>>>>>>>          */
>>>>>>>>>>>>>>         void drm_sched_job_cleanup(struct drm_sched_job *job)
>>>>>>>>>>>>>>         {
>>>>>>>>>>>>>> -     dma_fence_put(&job->s_fence->finished);
>>>>>>>>>>>>>> +     if (!kref_read(&job->s_fence->finished.refcount)) {
>>>>>>>>>>>>>> +             /* drm_sched_job_arm() has been called */
>>>>>>>>>>>>>> +             dma_fence_put(&job->s_fence->finished);
>>>>>>>>>>>>>> +     } else {
>>>>>>>>>>>>>> +             /* aborted job before committing to run it */
>>>>>>>>>>>>>> +             drm_sched_fence_free(&job->s_fence->finished.rcu);
>>>>>>>>>>>>>> +     }
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             job->s_fence = NULL;
>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>         EXPORT_SYMBOL(drm_sched_job_cleanup);
>>>>>>>>>>>>>> diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
>>>>>>>>>>>>>> index 4eb354226972..5c3a99027ecd 100644
>>>>>>>>>>>>>> --- a/drivers/gpu/drm/v3d/v3d_gem.c
>>>>>>>>>>>>>> +++ b/drivers/gpu/drm/v3d/v3d_gem.c
>>>>>>>>>>>>>> @@ -475,6 +475,8 @@ v3d_push_job(struct v3d_file_priv *v3d_priv,
>>>>>>>>>>>>>>             if (ret)
>>>>>>>>>>>>>>                     return ret;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> +     drm_sched_job_arm(&job->base);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>             job->done_fence = dma_fence_get(&job->base.s_fence->finished);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>             /* put by scheduler job completion */
>>>>>>>>>>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>>>>>>>>>>> index 88ae7f331bb1..83afc3aa8e2f 100644
>>>>>>>>>>>>>> --- a/include/drm/gpu_scheduler.h
>>>>>>>>>>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>>>>>>>>>>> @@ -348,6 +348,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched);
>>>>>>>>>>>>>>         int drm_sched_job_init(struct drm_sched_job *job,
>>>>>>>>>>>>>>                            struct drm_sched_entity *entity,
>>>>>>>>>>>>>>                            void *owner);
>>>>>>>>>>>>>> +void drm_sched_job_arm(struct drm_sched_job *job);
>>>>>>>>>>>>>>         void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
>>>>>>>>>>>>>>                                         struct drm_gpu_scheduler **sched_list,
>>>>>>>>>>>>>>                                            unsigned int num_sched_list);
>>>>>>>>>>>>>> @@ -387,8 +388,12 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
>>>>>>>>>>>>>>                                        enum drm_sched_priority priority);
>>>>>>>>>>>>>>         bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -struct drm_sched_fence *drm_sched_fence_create(
>>>>>>>>>>>>>> +struct drm_sched_fence *drm_sched_fence_alloc(
>>>>>>>>>>>>>>             struct drm_sched_entity *s_entity, void *owner);
>>>>>>>>>>>>>> +void drm_sched_fence_init(struct drm_sched_fence *fence,
>>>>>>>>>>>>>> +                       struct drm_sched_entity *entity);
>>>>>>>>>>>>>> +void drm_sched_fence_free(struct rcu_head *rcu);
>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>         void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
>>>>>>>>>>>>>>         void drm_sched_fence_finished(struct drm_sched_fence *fence);
>>>>>>>>>>>>>>
>>>>>> --
>>>>>> Daniel Vetter
>>>>>> Software Engineer, Intel Corporation
>>>>>> http://blog.ffwll.ch
>>>>> --
>>>>> Daniel Vetter
>>>>> Software Engineer, Intel Corporation
>>>>> http://blog.ffwll.ch
>

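For reference, a minimal sketch of the driver-side flow that the
drm_sched_job_init()/drm_sched_job_arm() split in the patch above enables.
The driver_prepare_resources() step and the error label are hypothetical,
and drm_sched_entity_push_job() still takes the entity at this point in the
series:

    int driver_submit(struct drm_sched_job *job,
                      struct drm_sched_entity *entity, void *owner)
    {
            int r;

            r = drm_sched_job_init(job, entity, owner);
            if (r)
                    return r;

            r = driver_prepare_resources(job);   /* hypothetical driver step */
            if (r)
                    goto err_cleanup;            /* aborted before arming */

            drm_sched_job_arm(job);              /* s_fence becomes usable here */
            /* e.g. hand out the finished fence as the out-fence */
            dma_fence_get(&job->s_fence->finished);

            drm_sched_entity_push_job(job, entity);  /* point of no return */
            return 0;

    err_cleanup:
            /* init succeeded, but the job is aborted before drm_sched_job_arm() */
            drm_sched_job_cleanup(job);
            return r;
    }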


Thread overview: 32+ messages
2021-07-02 21:38 [PATCH v2 00/11] drm/scheduler dependency tracking Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 01/11] drm/sched: Split drm_sched_job_init Daniel Vetter
2021-07-07  9:29   ` Christian König
2021-07-07 11:14     ` Daniel Vetter
2021-07-07 11:57       ` Christian König
2021-07-07 12:13         ` Daniel Vetter
2021-07-07 12:58           ` Christian König
2021-07-07 16:32             ` Daniel Vetter
2021-07-08  6:56               ` Christian König
2021-07-08  7:09                 ` Daniel Vetter
2021-07-08  7:19                   ` Daniel Vetter
2021-07-08  7:53                     ` Christian König
2021-07-08 10:02                       ` Daniel Vetter
2021-07-08 10:54                         ` Christian König
2021-07-08 11:20                           ` Daniel Vetter
2021-07-08 11:28                             ` Christian König [this message]
2021-07-02 21:38 ` [PATCH v2 02/11] drm/sched: Add dependency tracking Daniel Vetter
2021-07-07  9:26   ` [Linaro-mm-sig] " Christian König
2021-07-07 11:23     ` Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 03/11] drm/sched: drop entity parameter from drm_sched_push_job Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 04/11] drm/panfrost: use scheduler dependency tracking Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 05/11] drm/lima: " Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 06/11] drm/v3d: Move drm_sched_job_init to v3d_job_init Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 07/11] drm/v3d: Use scheduler dependency handling Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 08/11] drm/etnaviv: " Daniel Vetter
2021-07-07  9:08   ` Lucas Stach
2021-07-07 11:26     ` Daniel Vetter
2021-07-07 11:32       ` Daniel Vetter
2021-07-07 12:34         ` Lucas Stach
2021-07-02 21:38 ` [PATCH v2 09/11] drm/gem: Delete gem array fencing helpers Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 10/11] drm/sched: Don't store self-dependencies Daniel Vetter
2021-07-02 21:38 ` [PATCH v2 11/11] drm/sched: Check locking in drm_sched_job_await_implicit Daniel Vetter
