* [PATCH] drm/scheduler: use hw_rq_count for load calculation
@ 2018-10-18 15:37 Nayan Deshmukh
  2018-10-22 12:46 ` Koenig, Christian
  0 siblings, 1 reply; 5+ messages in thread
From: Nayan Deshmukh @ 2018-10-18 15:37 UTC (permalink / raw)
  To: dri-devel; +Cc: Nayan Deshmukh, christian.koenig

If the hardware queue for a scheduler is empty then we don't
need to shift the entities from their current scheduler,
as they are not getting scheduled because of some dependency.

Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
---
 drivers/gpu/drm/scheduler/sched_entity.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54a99c2..4d18497d6ecf 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -130,6 +130,12 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 	int i;
 
 	for (i = 0; i < entity->num_rq_list; ++i) {
+		if (atomic_read(&entity->rq_list[i]->sched->hw_rq_count) <
+			entity->rq_list[i]->sched->hw_submission_limit) {
+			rq = entity->rq_list[i];
+			break;
+		}
+
 		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
 		if (num_jobs < min_jobs) {
 			min_jobs = num_jobs;
@@ -470,6 +476,14 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
 	if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
 		return;
 
+	/*
+	 * We don't need to shift entity if the hardware
+	 * queue of current scheduler is empty
+	 */
+	if (atomic_read(&entity->rq->sched->hw_rq_count) <
+		entity->rq->sched->hw_submission_limit)
+		return;
+
 	fence = READ_ONCE(entity->last_scheduled);
 	if (fence && !dma_fence_is_signaled(fence))
 		return;
-- 
2.14.3

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH] drm/scheduler: use hw_rq_count for load calculation
  2018-10-18 15:37 [PATCH] drm/scheduler: use hw_rq_count for load calculation Nayan Deshmukh
@ 2018-10-22 12:46 ` Koenig, Christian
  2018-10-23 14:52   ` Nayan Deshmukh
  0 siblings, 1 reply; 5+ messages in thread
From: Koenig, Christian @ 2018-10-22 12:46 UTC (permalink / raw)
  To: Nayan Deshmukh, dri-devel

On 18.10.18 at 17:37, wrote:
> If the hardware queue for a scheduler is empty then we don't
> need to shift the entities from their current scheduler,
> as they are not getting scheduled because of some dependency.

That is most likely not a good idea. The scheduler might not have
anything to do right now, but we can't guarantee that it will stay this way.

Instead, when the number of jobs on the run queues is identical, we should
select the one with the fewest entities on it.

This should make sure that we distribute the entities equally among the 
runqueues even when they are idle.
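The suggested selection rule can be sketched in plain C. This is a simplified illustration only: `struct sched_stub` and its fields are hypothetical stand-ins for the kernel's per-scheduler counters (the real code reads them via `atomic_read()` through `entity->rq_list[i]->sched`), and a per-scheduler entity count is assumed rather than taken from the actual patch.

```c
#include <assert.h>
#include <limits.h>

/* Hypothetical, simplified stand-ins for the kernel structures. */
struct sched_stub {
	unsigned int num_jobs;     /* jobs queued on the software run queue */
	unsigned int num_entities; /* entities already assigned (assumed field) */
};

/*
 * Pick the run queue with the fewest queued jobs; on a tie, prefer the
 * one with the fewest entities already assigned, so that entities are
 * distributed equally among the run queues even when they are idle.
 */
static int pick_least_loaded_rq(const struct sched_stub *s, int n)
{
	unsigned int min_jobs = UINT_MAX, min_entities = UINT_MAX;
	int best = -1, i;

	for (i = 0; i < n; ++i) {
		if (s[i].num_jobs < min_jobs ||
		    (s[i].num_jobs == min_jobs &&
		     s[i].num_entities < min_entities)) {
			min_jobs = s[i].num_jobs;
			min_entities = s[i].num_entities;
			best = i;
		}
	}
	return best;
}
```

The tie-breaker only changes the outcome when job counts are equal, so the behaviour for loaded schedulers is unchanged from plain least-jobs selection.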

Christian.

>
> Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
> ---
>   drivers/gpu/drm/scheduler/sched_entity.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 3e22a54a99c2..4d18497d6ecf 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -130,6 +130,12 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
>   	int i;
>   
>   	for (i = 0; i < entity->num_rq_list; ++i) {
> +		if (atomic_read(&entity->rq_list[i]->sched->hw_rq_count) <
> +			entity->rq_list[i]->sched->hw_submission_limit) {
> +			rq = entity->rq_list[i];
> +			break;
> +		}
> +
>   		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
>   		if (num_jobs < min_jobs) {
>   			min_jobs = num_jobs;
> @@ -470,6 +476,14 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>   	if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
>   		return;
>   
> +	/*
> +	 * We don't need to shift entity if the hardware
> +	 * queue of current scheduler is empty
> +	 */
> +	if (atomic_read(&entity->rq->sched->hw_rq_count) <
> +		entity->rq->sched->hw_submission_limit)
> +		return;
> +
>   	fence = READ_ONCE(entity->last_scheduled);
>   	if (fence && !dma_fence_is_signaled(fence))
>   		return;


* Re: [PATCH] drm/scheduler: use hw_rq_count for load calculation
  2018-10-22 12:46 ` Koenig, Christian
@ 2018-10-23 14:52   ` Nayan Deshmukh
  2018-10-28  9:59     ` [PATCH v2] " Nayan Deshmukh
  0 siblings, 1 reply; 5+ messages in thread
From: Nayan Deshmukh @ 2018-10-23 14:52 UTC (permalink / raw)
  To: Christian König; +Cc: Maling list - DRI developers

On Mon, Oct 22, 2018 at 9:46 PM Koenig, Christian
<Christian.Koenig@amd.com> wrote:
>
> On 18.10.18 at 17:37, wrote:
> > If the hardware queue for a scheduler is empty then we don't
> > need to shift the entities from their current scheduler,
> > as they are not getting scheduled because of some dependency.
>
> That is most likely not a good idea. The scheduler might not have
> anything to do right now, but we can't guarantee that it will stay this way.
>
I agree. But conversely, it might also happen that one hardware engine
sits idle until the run queues of the other schedulers come down to
the level of this scheduler.

I think the best option is to pick the scheduler with an empty hardware
queue when the difference in their software queues is less than
MAX_DIFF. The problem is that determining the optimal value of
MAX_DIFF is not all that easy.

For now it's better to use MAX_DIFF=0 as you suggested until we can
find a way to determine its value.
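As a rough illustration of the MAX_DIFF idea, here is a minimal sketch under assumed, simplified structures; MAX_DIFF, `struct sched_stub`, and the helper below are hypothetical and not part of the patch.

```c
#include <assert.h>
#include <limits.h>

/* MAX_DIFF = 0 as discussed: only prefer an idle hardware queue when its
 * software queue is no deeper than the least-loaded one. */
#define MAX_DIFF 0U

/* Hypothetical stand-ins for the kernel's per-scheduler counters. */
struct sched_stub {
	unsigned int num_jobs;     /* software run queue depth */
	unsigned int hw_rq_count;  /* jobs currently on the hardware queue */
};

/*
 * Prefer a scheduler whose hardware queue has room, but only when its
 * software queue is within MAX_DIFF jobs of the least-loaded scheduler;
 * otherwise fall back to plain least-jobs selection.
 */
static int pick_rq_max_diff(const struct sched_stub *s, int n,
			    unsigned int hw_submission_limit)
{
	unsigned int min_jobs = UINT_MAX;
	int best = 0, i;

	for (i = 0; i < n; ++i)
		if (s[i].num_jobs < min_jobs) {
			min_jobs = s[i].num_jobs;
			best = i;
		}

	for (i = 0; i < n; ++i)
		if (s[i].hw_rq_count < hw_submission_limit &&
		    s[i].num_jobs - min_jobs <= MAX_DIFF)
			return i;

	return best;
}
```

With MAX_DIFF = 0 this only ever switches between schedulers that are already tied on software queue depth, which matches the conservative choice agreed on above; a larger MAX_DIFF would trade balance for keeping hardware engines busy.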

Regards,
Nayan
> Instead when the number of jobs on a rq is identical we should select
> the one with the least entities on it.
>
> This should make sure that we distribute the entities equally among the
> runqueues even when they are idle.
>
> Christian.
>
> >
> > Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
> > ---
> >   drivers/gpu/drm/scheduler/sched_entity.c | 14 ++++++++++++++
> >   1 file changed, 14 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> > index 3e22a54a99c2..4d18497d6ecf 100644
> > --- a/drivers/gpu/drm/scheduler/sched_entity.c
> > +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> > @@ -130,6 +130,12 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
> >       int i;
> >
> >       for (i = 0; i < entity->num_rq_list; ++i) {
> > +             if (atomic_read(&entity->rq_list[i]->sched->hw_rq_count) <
> > +                     entity->rq_list[i]->sched->hw_submission_limit) {
> > +                     rq = entity->rq_list[i];
> > +                     break;
> > +             }
> > +
> >               num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
> >               if (num_jobs < min_jobs) {
> >                       min_jobs = num_jobs;
> > @@ -470,6 +476,14 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
> >       if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
> >               return;
> >
> > +     /*
> > +      * We don't need to shift entity if the hardware
> > +      * queue of current scheduler is empty
> > +      */
> > +     if (atomic_read(&entity->rq->sched->hw_rq_count) <
> > +             entity->rq->sched->hw_submission_limit)
> > +             return;
> > +
> >       fence = READ_ONCE(entity->last_scheduled);
> >       if (fence && !dma_fence_is_signaled(fence))
> >               return;
>

* [PATCH v2] drm/scheduler: use hw_rq_count for load calculation
  2018-10-23 14:52   ` Nayan Deshmukh
@ 2018-10-28  9:59     ` Nayan Deshmukh
  2018-11-07  8:09       ` Christian König
  0 siblings, 1 reply; 5+ messages in thread
From: Nayan Deshmukh @ 2018-10-28  9:59 UTC (permalink / raw)
  To: dri-devel; +Cc: Nayan Deshmukh, christian.koenig

If the hardware queue for a scheduler is empty then we don't
need to shift the entities from their current scheduler,
as they are not getting scheduled because of some dependency.

v2: to calculate the least loaded scheduler, only use hw_rq_count
when the number of jobs in the run queues is the same

Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
---
 drivers/gpu/drm/scheduler/sched_entity.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index 3e22a54a99c2..cfe48df6621d 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -125,15 +125,21 @@ bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
 static struct drm_sched_rq *
 drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
 {
-	struct drm_sched_rq *rq = NULL;
+	struct drm_sched_rq *rq = NULL, *curr_rq = NULL;
 	unsigned int min_jobs = UINT_MAX, num_jobs;
 	int i;
 
 	for (i = 0; i < entity->num_rq_list; ++i) {
-		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
+		curr_rq = entity->rq_list[i];
+		num_jobs = atomic_read(&curr_rq->sched->num_jobs);
 		if (num_jobs < min_jobs) {
 			min_jobs = num_jobs;
-			rq = entity->rq_list[i];
+			rq = curr_rq;
+		} else if (num_jobs == min_jobs) {
+			if (atomic_read(&curr_rq->sched->hw_rq_count) <
+				atomic_read(&rq->sched->hw_rq_count)) {
+				rq = curr_rq;
+			}
 		}
 	}
 
-- 
2.14.3


* Re: [PATCH v2] drm/scheduler: use hw_rq_count for load calculation
  2018-10-28  9:59     ` [PATCH v2] " Nayan Deshmukh
@ 2018-11-07  8:09       ` Christian König
  0 siblings, 0 replies; 5+ messages in thread
From: Christian König @ 2018-11-07  8:09 UTC (permalink / raw)
  To: Nayan Deshmukh, dri-devel; +Cc: christian.koenig

On 28.10.18 at 10:59, Nayan Deshmukh wrote:
> If the hardware queue for a scheduler is empty then we don't
> need to shift the entities from their current scheduler,
> as they are not getting scheduled because of some dependency.
>
> v2: to calculate the least loaded scheduler, only use hw_rq_count
> when the number of jobs in the run queues is the same

I still don't think that the hw_rq_count is the right thing to use here.

It is only a very short-lived snapshot of the momentary load and
doesn't reflect what is going on on the ring in general.

When the number of jobs is equal, I would rather use the number of
entities a scheduler has already been assigned.

Christian.

>
> Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
> ---
>   drivers/gpu/drm/scheduler/sched_entity.c | 12 +++++++++---
>   1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 3e22a54a99c2..cfe48df6621d 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -125,15 +125,21 @@ bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
>   static struct drm_sched_rq *
>   drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
>   {
> -	struct drm_sched_rq *rq = NULL;
> +	struct drm_sched_rq *rq = NULL, *curr_rq = NULL;
>   	unsigned int min_jobs = UINT_MAX, num_jobs;
>   	int i;
>   
>   	for (i = 0; i < entity->num_rq_list; ++i) {
> -		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
> +		curr_rq = entity->rq_list[i];
> +		num_jobs = atomic_read(&curr_rq->sched->num_jobs);
>   		if (num_jobs < min_jobs) {
>   			min_jobs = num_jobs;
> -			rq = entity->rq_list[i];
> +			rq = curr_rq;
> +		} else if (num_jobs == min_jobs) {
> +			if (atomic_read(&curr_rq->sched->hw_rq_count) <
> +				atomic_read(&rq->sched->hw_rq_count)) {
> +				rq = curr_rq;
> +			}
>   		}
>   	}
>   


