From: Alex Deucher <alexdeucher@gmail.com>
To: Nayan Deshmukh <nayan26deshmukh@gmail.com>
Cc: "Daniel Vetter" <daniel@ffwll.ch>,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Linux Kernel Mailing List" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] drm/sched: Extend the documentation.
Date: Thu, 5 Apr 2018 09:44:22 -0400
Message-ID: <CADnq5_NPs8Oq3AGvVuNHg0KuD7B-8FzSA8SJ+AYfdEyrGFJGTQ@mail.gmail.com>
In-Reply-To: <CAFd4ddyMvoWObq1728aE8r8MrQNEKsEuxGV+DJMhGH+8EtDaZQ@mail.gmail.com>

On Thu, Apr 5, 2018 at 9:41 AM, Nayan Deshmukh
<nayan26deshmukh@gmail.com> wrote:
> On Thu, Apr 5, 2018 at 6:59 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Thu, Apr 5, 2018 at 3:27 PM, Alex Deucher <alexdeucher@gmail.com> wrote:
>>> On Thu, Apr 5, 2018 at 2:16 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
>>>> On Thu, Apr 5, 2018 at 12:32 AM, Eric Anholt <eric@anholt.net> wrote:
>>>>> These comments answer all the questions I had for myself when
>>>>> implementing a driver using the GPU scheduler.
>>>>>
>>>>> Signed-off-by: Eric Anholt <eric@anholt.net>
>>>>
>>>> Pulling all these comments into the generated kerneldoc would be
>>>> awesome, maybe as a new "GPU Scheduler" chapter at the end of
>>>> drm-mm.rst? Would mean a bit of busywork to convert the existing raw
>>>> comments into proper kerneldoc. Also has the benefit that 0day will
>>>> complain when you forget to update the comment when editing the
>>>> function prototype - kerneldoc which isn't included anywhere in .rst
>>>> won't be checked automatically.
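As a purely illustrative sketch, one of the new comments from the patch quoted
further down might end up looking like this once converted to proper kerneldoc,
so that a ".. kernel-doc:: include/drm/gpu_scheduler.h" directive in the new
chapter picks it up (the member text is lifted from the patch; the exact form
here is only an example, not the final wording):

/**
 * struct drm_sched_backend_ops - driver-supplied scheduler callbacks
 *
 * @dependency: Called when the scheduler is considering scheduling this
 *	job next, to get another struct dma_fence for this job to block
 *	on.  Once it returns NULL, run_job() may be called.
 * @run_job: Called to execute the job once all of its dependencies have
 *	been resolved.  May be called multiple times if timedout_job()
 *	has happened and drm_sched_job_recovery() decides to try again.
 * @timedout_job: Called when a job has taken too long to execute, to
 *	trigger GPU recovery.
 * @free_job: Called once the job's finished fence has been signaled and
 *	it's time to clean it up.
 *
 * These functions are implemented by the driver.
 */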
>>>
>>> I was actually planning to do this myself, but Nayan wanted to do it
>>> as prep work for his proposed GSoC project, so I was going to see how
>>> far he got first.
>
> It is still on my TODO list. Just got a bit busy with my coursework. I
> will try to look at it during the weekend.

No worries.  Take your time.

>>
>> Awesome. I'm also happy to help out with any kerneldoc questions and
>> best practices. Technically, of course, I have no clue about the scheduler :-)
>>
> I was thinking of adding a separate rst for the scheduler altogether. Would
> it be better to add it to drm-mm.rst itself?

I had been planning to add a separate file too since it's a separate
entity.  Do whatever you think works best.

Alex

>
>> Cheers, Daniel
>>> Alex
>>>
>>>> -Daniel
>>>>
>>>>> ---
>>>>>  include/drm/gpu_scheduler.h | 46 +++++++++++++++++++++++++++++++++----
>>>>>  1 file changed, 42 insertions(+), 4 deletions(-)
>>>>>
>>>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>>>> index dfd54fb94e10..c053a32341bf 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -43,10 +43,12 @@ enum drm_sched_priority {
>>>>>  };
>>>>>
>>>>>  /**
>>>>> - * A scheduler entity is a wrapper around a job queue or a group
>>>>> - * of other entities. Entities take turns emitting jobs from their
>>>>> - * job queues to corresponding hardware ring based on scheduling
>>>>> - * policy.
>>>>> + * drm_sched_entity - A wrapper around a job queue (typically attached
>>>>> + * to the DRM file_priv).
>>>>> + *
>>>>> + * Entities will emit jobs in order to their corresponding hardware
>>>>> + * ring, and the scheduler will alternate between entities based on
>>>>> + * scheduling policy.
>>>>>  */
>>>>>  struct drm_sched_entity {
>>>>>         struct list_head                list;
>>>>> @@ -78,7 +80,18 @@ struct drm_sched_rq {
>>>>>
>>>>>  struct drm_sched_fence {
>>>>>         struct dma_fence                scheduled;
>>>>> +
>>>>> +       /* This fence is what will be signaled by the scheduler when
>>>>> +        * the job is completed.
>>>>> +        *
>>>>> +        * When setting up an out fence for the job, you should use
>>>>> +        * this, since it's available immediately upon
>>>>> +        * drm_sched_job_init(), and the fence returned by the driver
>>>>> +        * from run_job() won't be created until the dependencies have
>>>>> +        * resolved.
>>>>> +        */
>>>>>         struct dma_fence                finished;
>>>>> +
>>>>>         struct dma_fence_cb             cb;
>>>>>         struct dma_fence                *parent;
>>>>>         struct drm_gpu_scheduler        *sched;
>>>>> @@ -88,6 +101,13 @@ struct drm_sched_fence {
>>>>>
>>>>>  struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
>>>>>
>>>>> +/**
>>>>> + * drm_sched_job - A job to be run by an entity.
>>>>> + *
>>>>> + * A job is created by the driver using drm_sched_job_init(), and
>>>>> + * should call drm_sched_entity_push_job() once it wants the scheduler
>>>>> + * to schedule the job.
>>>>> + */
>>>>>  struct drm_sched_job {
>>>>>         struct spsc_node                queue_node;
>>>>>         struct drm_gpu_scheduler        *sched;
>>>>> @@ -112,10 +132,28 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
>>>>>   * these functions should be implemented in driver side
>>>>>  */
>>>>>  struct drm_sched_backend_ops {
>>>>> +       /* Called when the scheduler is considering scheduling this
>>>>> +        * job next, to get another struct dma_fence for this job to
>>>>> +        * block on.  Once it returns NULL, run_job() may be called.
>>>>> +        */
>>>>>         struct dma_fence *(*dependency)(struct drm_sched_job *sched_job,
>>>>>                                         struct drm_sched_entity *s_entity);
>>>>> +
>>>>> +       /* Called to execute the job once all of the dependencies have
>>>>> +        * been resolved.  This may be called multiple times, if
>>>>> +        * timedout_job() has happened and drm_sched_job_recovery()
>>>>> +        * decides to try it again.
>>>>> +        */
>>>>>         struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>>>>> +
>>>>> +       /* Called when a job has taken too long to execute, to trigger
>>>>> +        * GPU recovery.
>>>>> +        */
>>>>>         void (*timedout_job)(struct drm_sched_job *sched_job);
>>>>> +
>>>>> +       /* Called once the job's finished fence has been signaled and
>>>>> +        * it's time to clean it up.
>>>>> +        */
>>>>>         void (*free_job)(struct drm_sched_job *sched_job);
>>>>>  };
>>>>>
>>>>> --
>>>>> 2.17.0
>>>>>
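To make the driver-facing contract described above more concrete, here is a
minimal sketch of a backend built on these callbacks. Only the callback
signatures come from the header above; the foo_ names are hypothetical
placeholders, not part of any real driver.

/*
 * Submission flow as described by the new comments: the driver
 * initializes a job with drm_sched_job_init(), uses the job's
 * drm_sched_fence "finished" fence as the out fence, and then hands
 * the job to the scheduler with drm_sched_entity_push_job().
 */
#include <linux/dma-fence.h>
#include <drm/gpu_scheduler.h>

static struct dma_fence *foo_sched_dependency(struct drm_sched_job *sched_job,
					      struct drm_sched_entity *s_entity)
{
	/* Return the next fence this job still has to wait on, or NULL
	 * once all dependencies are resolved and run_job() may be called.
	 */
	return NULL;
}

static struct dma_fence *foo_sched_run_job(struct drm_sched_job *sched_job)
{
	/* Push the job to the hardware ring and return the fence that
	 * will signal when the hardware has finished it.  May be called
	 * again after timedout_job() if drm_sched_job_recovery() retries.
	 */
	return NULL;
}

static void foo_sched_timedout_job(struct drm_sched_job *sched_job)
{
	/* The job ran too long: start GPU reset/recovery here. */
}

static void foo_sched_free_job(struct drm_sched_job *sched_job)
{
	/* The finished fence has signaled; release the job's resources. */
}

static const struct drm_sched_backend_ops foo_sched_ops = {
	.dependency	= foo_sched_dependency,
	.run_job	= foo_sched_run_job,
	.timedout_job	= foo_sched_timedout_job,
	.free_job	= foo_sched_free_job,
};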
>>>>
>>>>
>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
>>
>>
>>
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> +41 (0) 79 365 57 48 - http://blog.ffwll.ch

