Yes, that is correct.

Nayan

On Wed, Aug 1, 2018, 9:05 PM Andrey Grodzovsky wrote:
> Clarification question - if the run queues belong to different
> schedulers, they effectively point to different rings.
>
> That means we allow moving (rescheduling) a drm_sched_entity from one
> ring to another - I assume the idea in the first place is that you
> have a set of HW rings and you can utilize any of them for your jobs
> (like compute rings). Correct?
>
> Andrey
>
>
> On 08/01/2018 04:20 AM, Nayan Deshmukh wrote:
> > The function selects the run queue from the rq_list with the
> > least load. The load is decided by the number of jobs in a
> > scheduler.
> >
> > v2: avoid using atomic read twice consecutively; instead store
> > it locally
> >
> > Signed-off-by: Nayan Deshmukh
> > ---
> >  drivers/gpu/drm/scheduler/gpu_scheduler.c | 25 +++++++++++++++++++++++++
> >  1 file changed, 25 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > index 375f6f7f6a93..fb4e542660b0 100644
> > --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
> > @@ -255,6 +255,31 @@ static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
> >  	return true;
> >  }
> >
> > +/**
> > + * drm_sched_entity_get_free_sched - Get the rq from rq_list with least load
> > + *
> > + * @entity: scheduler entity
> > + *
> > + * Return the pointer to the rq with least load.
> > + */
> > +static struct drm_sched_rq *
> > +drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
> > +{
> > +	struct drm_sched_rq *rq = NULL;
> > +	unsigned int min_jobs = UINT_MAX, num_jobs;
> > +	int i;
> > +
> > +	for (i = 0; i < entity->num_rq_list; ++i) {
> > +		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
> > +		if (num_jobs < min_jobs) {
> > +			min_jobs = num_jobs;
> > +			rq = entity->rq_list[i];
> > +		}
> > +	}
> > +
> > +	return rq;
> > +}
> > +
> >  static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
> >  					  struct dma_fence_cb *cb)
> >  {
> >