From: "Christian König via amd-gfx" <amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org>
To: Alex Deucher
	<alexdeucher-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	Christian Koenig <christian.koenig-5C7GfCeVMHo@public.gmane.org>
Cc: "Christian König"
	<ckoenig.leichtzumerken-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	"amd-gfx list"
	<amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org>,
	"Maling list - DRI developers"
	<dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org>,
	"Bas Nieuwenhuizen"
	<bas-dldO88ZXqoXqqjsSq9zF6IRWq/SkRNHw@public.gmane.org>
Subject: Re: [PATCH v2 1/4] drm/sched: Fix entities with 0 rqs.
Date: Thu, 14 Feb 2019 10:08:49 +0100
Message-ID: <bb86b555-c2ae-c20a-6fb9-a00a53e2ff4a@gmail.com>
In-Reply-To: <CADnq5_PcSTHw48oFu1oKVKh6sQt2=6FGtz3OJAX5xPLR8D9GUA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>

On 13.02.19 at 22:03, Alex Deucher via amd-gfx wrote:
> On Wed, Jan 30, 2019 at 5:43 AM Christian König
> <ckoenig.leichtzumerken@gmail.com> wrote:
>> On 30.01.19 at 02:53, Bas Nieuwenhuizen wrote:
>>> Some blocks in amdgpu can have 0 rqs.
>>>
>>> Job creation already fails with -ENOENT when entity->rq is NULL,
>>> so jobs cannot be pushed. Without a rq there is no scheduler to
>>> pop jobs, and rq selection already does the right thing with a
>>> list of length 0.
>>>
>>> So the operations we need to fix are:
>>>     - Creation, do not set rq to rq_list[0] if the list can have length 0.
>>>     - Do not flush any jobs when there is no rq.
>>>     - On entity destruction handle the rq = NULL case.
>>>     - On set_priority, do not try to change the rq if it is NULL.
>>>
>>> Signed-off-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
>> One minor comment on patch #2; apart from that, the series is
>> Reviewed-by: Christian König <christian.koenig@amd.com>.
>>
>> I'm going to make the change on #2 and pick them up for inclusion in
>> amd-staging-drm-next.
> Hi Christian,
>
> I haven't seen these land yet.  Just want to make sure they don't fall
> through the cracks.

Thanks for the reminder; I'm really having trouble catching up on 
applying patches lately.

Christian.

>
> Alex
>
>> Thanks for the help,
>> Christian.
>>
>>> ---
>>>    drivers/gpu/drm/scheduler/sched_entity.c | 39 ++++++++++++++++--------
>>>    1 file changed, 26 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index 4463d3826ecb..8e31b6628d09 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -52,12 +52,12 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
>>>    {
>>>        int i;
>>>
>>> -     if (!(entity && rq_list && num_rq_list > 0 && rq_list[0]))
>>> +     if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
>>>                return -EINVAL;
>>>
>>>        memset(entity, 0, sizeof(struct drm_sched_entity));
>>>        INIT_LIST_HEAD(&entity->list);
>>> -     entity->rq = rq_list[0];
>>> +     entity->rq = NULL;
>>>        entity->guilty = guilty;
>>>        entity->num_rq_list = num_rq_list;
>>>        entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
>>> @@ -67,6 +67,10 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
>>>
>>>        for (i = 0; i < num_rq_list; ++i)
>>>                entity->rq_list[i] = rq_list[i];
>>> +
>>> +     if (num_rq_list)
>>> +             entity->rq = rq_list[0];
>>> +
>>>        entity->last_scheduled = NULL;
>>>
>>>        spin_lock_init(&entity->rq_lock);
>>> @@ -165,6 +169,9 @@ long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout)
>>>        struct task_struct *last_user;
>>>        long ret = timeout;
>>>
>>> +     if (!entity->rq)
>>> +             return 0;
>>> +
>>>        sched = entity->rq->sched;
>>>        /**
>>>         * The client will not queue more IBs during this fini, consume existing
>>> @@ -264,20 +271,24 @@ static void drm_sched_entity_kill_jobs(struct drm_sched_entity *entity)
>>>     */
>>>    void drm_sched_entity_fini(struct drm_sched_entity *entity)
>>>    {
>>> -     struct drm_gpu_scheduler *sched;
>>> +     struct drm_gpu_scheduler *sched = NULL;
>>>
>>> -     sched = entity->rq->sched;
>>> -     drm_sched_rq_remove_entity(entity->rq, entity);
>>> +     if (entity->rq) {
>>> +             sched = entity->rq->sched;
>>> +             drm_sched_rq_remove_entity(entity->rq, entity);
>>> +     }
>>>
>>>        /* Consumption of existing IBs wasn't completed. Forcefully
>>>         * remove them here.
>>>         */
>>>        if (spsc_queue_peek(&entity->job_queue)) {
>>> -             /* Park the kernel for a moment to make sure it isn't processing
>>> -              * our enity.
>>> -              */
>>> -             kthread_park(sched->thread);
>>> -             kthread_unpark(sched->thread);
>>> +             if (sched) {
>>> +                     /* Park the kernel for a moment to make sure it isn't processing
>>> +                      * our entity.
>>> +                      */
>>> +                     kthread_park(sched->thread);
>>> +                     kthread_unpark(sched->thread);
>>> +             }
>>>                if (entity->dependency) {
>>>                        dma_fence_remove_callback(entity->dependency,
>>>                                                  &entity->cb);
>>> @@ -362,9 +373,11 @@ void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
>>>        for (i = 0; i < entity->num_rq_list; ++i)
>>>                drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
>>>
>>> -     drm_sched_rq_remove_entity(entity->rq, entity);
>>> -     drm_sched_entity_set_rq_priority(&entity->rq, priority);
>>> -     drm_sched_rq_add_entity(entity->rq, entity);
>>> +     if (entity->rq) {
>>> +             drm_sched_rq_remove_entity(entity->rq, entity);
>>> +             drm_sched_entity_set_rq_priority(&entity->rq, priority);
>>> +             drm_sched_rq_add_entity(entity->rq, entity);
>>> +     }
>>>
>>>        spin_unlock(&entity->rq_lock);
>>>    }
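
For context, a minimal caller-side sketch of what the change above enables: initializing an entity for a block that exposes no usable runqueues. This is illustrative only, not code from the patch; the example_* helper is hypothetical, and it assumes the 2019-era four-argument drm_sched_entity_init() signature shown in the diff (entity, rq_list, num_rq_list, guilty) and <drm/gpu_scheduler.h>.

        /* Hypothetical driver-side sketch; not part of the patch. */
        static int example_init_entity_without_rqs(struct drm_sched_entity *entity)
        {
                /* The block has no initialized rings, so there are no rqs to offer. */
                struct drm_sched_rq *rq_list[1] = { NULL };
                int r;

                /* num_rq_list == 0 is now accepted; entity->rq stays NULL. */
                r = drm_sched_entity_init(entity, rq_list, 0, NULL);
                if (r)
                        return r;

                /*
                 * As the commit message notes, job creation on such an entity
                 * already fails with -ENOENT because entity->rq is NULL, so no
                 * work is ever pushed to or popped from a scheduler for it.
                 */
                return 0;
        }

With entity->rq left NULL, flush returns early, fini skips the runqueue removal and the kthread park/unpark, and set_priority skips the per-rq re-queueing, which matches the four operations listed in the commit message.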

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Thread overview: 8+ messages

2019-01-30  1:53 [PATCH v2 1/4] drm/sched: Fix entities with 0 rqs Bas Nieuwenhuizen
2019-01-30  1:53 ` [PATCH v2 2/4] drm/amdgpu: Only add rqs for initialized rings Bas Nieuwenhuizen
2019-01-30 10:42   ` Christian König
2019-01-30  1:53 ` [PATCH v2 3/4] drm/amdgpu: Check if fd really is an amdgpu fd Bas Nieuwenhuizen
2019-01-30  1:53 ` [PATCH v2 4/4] drm/amdgpu: Add command to override the context priority Bas Nieuwenhuizen
2019-01-30 10:43 ` [PATCH v2 1/4] drm/sched: Fix entities with 0 rqs Christian König
2019-02-13 21:03   ` Alex Deucher via amd-gfx
2019-02-14  9:08     ` Christian König via amd-gfx [this message]
