From: Nirmoy <nirmodas@amd.com>
To: christian.koenig@amd.com, Nirmoy Das <nirmoy.aiemd@gmail.com>,
amd-gfx@lists.freedesktop.org
Cc: alexander.deucher@amd.com, kenny.ho@amd.com, nirmoy.das@amd.com,
pierre-eric.pelloux-prayer@amd.com
Subject: Re: [PATCH] drm/scheduler: fix race condition in load balancer
Date: Wed, 15 Jan 2020 12:04:04 +0100 [thread overview]
Message-ID: <862ad550-082d-7ece-1d4d-99801ab10428@amd.com> (raw)
In-Reply-To: <5deb3805-f7e8-3d0d-4259-a3be1c5d3cf5@gmail.com>
Hi Christian,
On 1/14/20 5:01 PM, Christian König wrote:
>
>> Before this patch:
>>
>> sched_name     number of times it got scheduled
>> ==========     ================================
>> sdma0 314
>> sdma1 32
>> comp_1.0.0 56
>> comp_1.1.0 0
>> comp_1.1.1 0
>> comp_1.2.0 0
>> comp_1.2.1 0
>> comp_1.3.0 0
>> comp_1.3.1 0
>>
>> After this patch:
>>
>> sched_name     number of times it got scheduled
>> ==========     ================================
>> sdma1 243
>> sdma0 164
>> comp_1.0.1 14
>> comp_1.1.0 11
>> comp_1.1.1 10
>> comp_1.2.0 15
>> comp_1.2.1 14
>> comp_1.3.0 10
>> comp_1.3.1 10
>
> Well that is still rather nice to have, why does that happen?
I think I know why that happens. At init, every entity's rq gets assigned to
sched_list[0]. I added some prints to check what we actually compare in
drm_sched_entity_get_free_sched(), and it turns out that most of the time we
are comparing zero values (num_jobs(0) < min_jobs(0)), so the first rq
(sdma0, comp_1.0.0) gets picked almost every time.
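
For reference, the selection loop looks roughly like this (simplified sketch;
the actual drm_sched_entity_get_free_sched() has a few extra checks):

static struct drm_sched_rq *
drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
{
	struct drm_sched_rq *rq = NULL;
	unsigned int min_jobs = UINT_MAX, num_jobs;
	unsigned int i;

	for (i = 0; i < entity->num_sched_list; ++i) {
		struct drm_gpu_scheduler *sched = entity->sched_list[i];

		/* Right after init every sched reports num_jobs == 0, so
		 * only the very first iteration updates rq and we keep
		 * returning sched_list[0]'s rq (sdma0, comp_1.0.0). */
		num_jobs = atomic_read(&sched->num_jobs);
		if (num_jobs < min_jobs) {
			min_jobs = num_jobs;
			rq = &sched->sched_rq[entity->priority];
		}
	}

	return rq;
}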
This patch was not correct: it had an extra atomic_inc(num_jobs) in
drm_sched_job_init(). That probably added a bit of randomness, which is what
helped the job distribution.
I've updated my previous RFC patch, which uses the time consumed by each
sched for load balancing, with a twist: it ignores the previously scheduled
sched/rq. A rough sketch of the idea is below. Let me know what you think.
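
Something along these lines (only a sketch of the idea, not the actual RFC
diff; "elapsed_ns" here is a hypothetical per-sched runtime counter and the
real accounting in the patch differs in detail):

static struct drm_sched_rq *
drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
{
	struct drm_sched_rq *rq = NULL;
	u64 min_elapsed = U64_MAX;
	unsigned int i;

	for (i = 0; i < entity->num_sched_list; ++i) {
		struct drm_gpu_scheduler *sched = entity->sched_list[i];
		u64 elapsed;

		/* The twist: skip the sched/rq this entity was scheduled
		 * on last time. */
		if (entity->rq && sched == entity->rq->sched)
			continue;

		/* Hypothetical counter: total time jobs have consumed on
		 * this sched so far. Pick the least-used sched. */
		elapsed = atomic64_read(&sched->elapsed_ns);
		if (elapsed < min_elapsed) {
			min_elapsed = elapsed;
			rq = &sched->sched_rq[entity->priority];
		}
	}

	return rq;
}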
Regards,
Nirmoy
>
> Christian.
>
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx