From: "Christian König" <christian.koenig@amd.com>
To: Nirmoy <nirmodas@amd.com>, Nirmoy Das <nirmoy.aiemd@gmail.com>,
	amd-gfx@lists.freedesktop.org
Cc: alexander.deucher@amd.com, kenny.ho@amd.com, nirmoy.das@amd.com,
	pierre-eric.pelloux-prayer@amd.com
Subject: Re: [PATCH] drm/scheduler: fix race condition in load balancer
Date: Tue, 14 Jan 2020 17:20:19 +0100
Message-ID: <ac0b01a8-f360-41ba-568f-74e10fe95ecd@amd.com>
In-Reply-To: <529f8218-09f4-cb67-7bc0-18a1a808bff6@amd.com>

On 1/14/20 5:13 PM, Nirmoy wrote:
>
> On 1/14/20 5:01 PM, Christian König wrote:
>> On 1/14/20 4:43 PM, Nirmoy Das wrote:
>>> Jobs submitted in an entity should execute in the order those jobs
>>> are submitted. We make sure of that by checking entity->job_queue in
>>> drm_sched_entity_select_rq() so that we don't load balance jobs within
>>> an entity.
>>>
>>> But because we only update entity->job_queue later, in
>>> drm_sched_entity_push_job(), there remains an open window during which
>>> entity->rq might get updated by drm_sched_entity_select_rq(), which
>>> should not be allowed.
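>>>
>>> Roughly, the window in the submission path looks like this (a
>>> simplified sketch of the call order, not the exact code):
>>>
>>>     drm_sched_job_init(job, entity, owner)
>>>         drm_sched_entity_select_rq(entity)
>>>             /* job_queue still empty -> entity->rq may be changed */
>>>
>>>     /* window: entity->job_queue is still empty here */
>>>
>>>     drm_sched_entity_push_job(job, entity)
>>>         spsc_queue_push(&entity->job_queue, ...)
>>>             /* only now does the queue become non-empty */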
>>
>> NAK, concurrent calls to
>> drm_sched_job_init()/drm_sched_entity_push_job() are not allowed in
>> the first place; otherwise we mess up the fence sequence order and
>> risk memory corruption.
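>>
>> With two submitters A and B on the same entity the interleaving could
>> look like this (hypothetical, just to illustrate the ordering problem):
>>
>>     A: drm_sched_job_init()          /* creates fence, seqno N     */
>>     B: drm_sched_job_init()          /* creates fence, seqno N + 1 */
>>     B: drm_sched_entity_push_job()   /* N + 1 is queued first      */
>>     A: drm_sched_entity_push_job()   /* N is queued after N + 1    */
>>
>> and the fences on the entity's context would signal out of order.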
>
>>
>>>
>>> Changes in this part also improve job distribution.
>>> Below are test results after running amdgpu_test from mesa drm.
>>>
>>> Before this patch:
>>>
>>> sched_name     number of times it got scheduled
>>> =========      ==================================
>>> sdma0          314
>>> sdma1          32
>>> comp_1.0.0     56
>>> comp_1.1.0     0
>>> comp_1.1.1     0
>>> comp_1.2.0     0
>>> comp_1.2.1     0
>>> comp_1.3.0     0
>>> comp_1.3.1     0
>>>
>>> After this patch:
>>>
>>> sched_name     number of times it got scheduled
>>> =========      ==================================
>>> sdma1          243
>>> sdma0          164
>>> comp_1.0.1     14
>>> comp_1.1.0     11
>>> comp_1.1.1     10
>>> comp_1.2.0     15
>>> comp_1.2.1     14
>>> comp_1.3.0     10
>>> comp_1.3.1     10
>>
>> Well, that is still rather nice to have. Why does that happen?
>
> I think it is because we are updating num_jobs immediately after
> selecting a new rq. Previously we did that much later, after
> drm_sched_job_init(), in drm_sched_entity_push_job(). The problem is,
> if I just do
>
> @@ -562,6 +562,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
>           return -ENOENT;
>
>       sched = entity->rq->sched;
> +    atomic_inc(&entity->rq->sched->num_jobs);
>
> @@ -498,7 +504,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>       bool first;
>
>       trace_drm_sched_job(sched_job, entity);
> -    atomic_inc(&entity->rq->sched->num_jobs);
>
>
> num_jobs goes negative somewhere down the line. I am guessing it's
> hitting the race condition I explained in the commit message.

The race condition you explain in the commit message should be
impossible to hit, or we have much, much larger problems than just an
incorrect job count.

Incrementing num_jobs that early is not possible either, because the
job might not get pushed to the entity due to an error.
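
Roughly what the driver side looks like (a simplified sketch;
driver_prepare_job() is just a hypothetical stand-in for any step that
can still fail between init and push):

    r = drm_sched_job_init(&job->base, entity, owner);
    if (r)
        return r;                    /* num_jobs was never touched */

    r = driver_prepare_job(job);     /* hypothetical later step */
    if (r)
        goto err_free_job;           /* job is never pushed; with the
                                      * atomic_inc() moved into job_init
                                      * num_jobs stays one too high */

    drm_sched_entity_push_job(&job->base, entity);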

Christian.

>
>
> Regards,
>
> Nirmoy
>
>>
>> Christian.
>>
>>>
>>> Fixes: 35e160e781a048 ("drm/scheduler: change entities rq even earlier")
>>>
>>> Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
>>> Reported-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>>> ---
>>>   drivers/gpu/drm/scheduler/sched_entity.c | 9 +++++++--
>>>   drivers/gpu/drm/scheduler/sched_main.c   | 1 +
>>>   include/drm/gpu_scheduler.h              | 1 +
>>>   3 files changed, 9 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index 2e3a058fc239..8414e084b6ac 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -67,6 +67,7 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
>>>       entity->priority = priority;
>>>       entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
>>>       entity->last_scheduled = NULL;
>>> +    entity->loadbalance_on = true;
>>>
>>>       if(num_sched_list)
>>>           entity->rq = &sched_list[0]->sched_rq[entity->priority];
>>> @@ -447,6 +448,9 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity)
>>>       entity->last_scheduled = dma_fence_get(&sched_job->s_fence->finished);
>>>
>>>       spsc_queue_pop(&entity->job_queue);
>>> +    if (!spsc_queue_count(&entity->job_queue))
>>> +        entity->loadbalance_on = true;
>>> +
>>>       return sched_job;
>>>   }
>>>
>>> @@ -463,7 +467,8 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>>>       struct dma_fence *fence;
>>>       struct drm_sched_rq *rq;
>>>
>>> -    if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
>>> +    atomic_inc(&entity->rq->sched->num_jobs);
>>> +    if ((entity->num_sched_list <= 1) || !entity->loadbalance_on)
>>>           return;
>>>
>>>       fence = READ_ONCE(entity->last_scheduled);
>>> @@ -477,6 +482,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>>>           entity->rq = rq;
>>>       }
>>>
>>> +    entity->loadbalance_on = false;
>>>       spin_unlock(&entity->rq_lock);
>>>   }
>>>
>>> @@ -498,7 +504,6 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job,
>>>       bool first;
>>>
>>>       trace_drm_sched_job(sched_job, entity);
>>> -    atomic_inc(&entity->rq->sched->num_jobs);
>>>       WRITE_ONCE(entity->last_user, current->group_leader);
>>>       first = spsc_queue_push(&entity->job_queue, &sched_job->queue_node);
>>>
>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>>> index 3fad5876a13f..00fdc350134e 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>>> @@ -562,6 +562,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
>>>           return -ENOENT;
>>>
>>>       sched = entity->rq->sched;
>>> +    atomic_inc(&entity->rq->sched->num_jobs);
>>>       job->sched = sched;
>>>       job->entity = entity;
>>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>>> index 96a1a1b7526e..a5190869d323 100644
>>> --- a/include/drm/gpu_scheduler.h
>>> +++ b/include/drm/gpu_scheduler.h
>>> @@ -97,6 +97,7 @@ struct drm_sched_entity {
>>>       struct dma_fence                *last_scheduled;
>>>       struct task_struct        *last_user;
>>>       bool                 stopped;
>>> +    bool                loadbalance_on;
>>>       struct completion        entity_idle;
>>>   };
>>
