From: Nirmoy <nirmodas@amd.com>
To: "Christian König" <christian.koenig@amd.com>,
	"Nirmoy Das" <nirmoy.aiemd@gmail.com>,
	alexander.deucher@amd.com, kenny.ho@amd.com
Cc: nirmoy.das@amd.com, amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 4/4] drm/scheduler: do not keep a copy of sched list
Date: Mon, 9 Dec 2019 14:56:23 +0100	[thread overview]
Message-ID: <e08519b8-3288-3a30-a32e-60758a939eec@amd.com> (raw)
In-Reply-To: <86d30760-8f27-1c42-f914-b512c9a3a0f1@amd.com>

Hi Christian,

I got a different idea, a bit simpler; let me know what you think
about it:

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 50bab33cba39..8de4de4f7a43 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -870,6 +870,7 @@ struct amdgpu_device {
         u64                             fence_context;
         unsigned                        num_rings;
         struct amdgpu_ring              *rings[AMDGPU_MAX_RINGS];
+       struct drm_gpu_scheduler        *rings_sched_list[AMDGPU_MAX_RINGS];
         bool                            ib_pool_ready;
         struct amdgpu_sa_manager        ring_tmp_bo;

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 1d6850af9908..52b3a5d85a1d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
@@ -122,9 +122,8 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,

         for (i = 0; i < AMDGPU_HW_IP_NUM; ++i) {
                 struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
-               struct drm_gpu_scheduler *sched_list[AMDGPU_MAX_RINGS];
+               struct drm_gpu_scheduler **sched_list;
                 unsigned num_rings = 0;
-               unsigned num_rqs = 0;

                 switch (i) {
                 case AMDGPU_HW_IP_GFX:
@@ -177,17 +176,11 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
                         break;
                 }

-               for (j = 0; j < num_rings; ++j) {
-                       if (!rings[j]->adev)
-                               continue;
-
-                       sched_list[num_rqs++] = &rings[j]->sched;
-               }
-
+               sched_list = adev->rings_sched_list + rings[0]->idx;
                 for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)
                        r = drm_sched_entity_init(&ctx->entities[i][j].entity,
                                                   priority, sched_list,
-                                                 num_rqs, &ctx->guilty);
+                                                num_rings, &ctx->guilty);
                 if (r)
                         goto error_cleanup_entities;
         }
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 377fe20bce23..e8cfa357e445 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -480,6 +480,8 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
                                   ring->name);
                         return r;
                 }
+
+               adev->rings_sched_list[ring->idx] = &ring->sched;
         }

         return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index bd9ed33bab43..bfe36199ffed 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -1744,8 +1744,11 @@ static int sdma_v4_0_sw_init(void *handle)
                                      AMDGPU_SDMA_IRQ_INSTANCE0 + i);
                 if (r)
                         return r;
+       }
+
+       if (adev->sdma.has_page_queue) {
+               for (i = 0; i < adev->sdma.num_instances; i++) {

-               if (adev->sdma.has_page_queue) {
                         ring = &adev->sdma.instance[i].page;
                         ring->ring_obj = NULL;
                         ring->use_doorbell = true;

It relies on contiguous ring initialization; that's why I had to change
sdma_v4_0.c so that we do ring_init(sdma0, sdma1, page0, page1)

instead of ring_init(sdma0, page0, sdma1, page1).


Regards,

Nirmoy

On 12/9/19 1:20 PM, Christian König wrote:
> Yes, you need to do this for the SDMA as well but in general that 
> looks like the idea I had in mind as well.
>
> I would do it like this:
>
> 1. Change the special case when you only get one scheduler for an 
> entity to drop the pointer to the scheduler list.
>     This way we always use the same scheduler for the entity and can 
> pass in the array on the stack.
>
> 2. Change all callers which use more than one scheduler in the list to 
> pass in pointers which are not allocated on the stack.
>     This obviously also means that we build the list of schedulers for 
> each type only once during device init and not for each context init.
>
> 3. Make the scheduler list const and drop the kcalloc()/kfree() from 
> the entity code.
>
> Regards,
> Christian.
>
> Am 08.12.19 um 20:57 schrieb Nirmoy:
>>
>> On 12/6/19 8:41 PM, Christian König wrote:
>>> Am 06.12.19 um 18:33 schrieb Nirmoy Das:
>>>> entity should not keep a copy of the sched list and maintain it
>>>> itself.
>>>
>>> That is a good step, but we need to take this further.
>>
>> How about something like this?
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> index 0ae0a2715b0d..a71ee084b47a 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
>> @@ -269,8 +269,10 @@ struct amdgpu_gfx {
>>         bool                            me_fw_write_wait;
>>         bool                            cp_fw_write_wait;
>>         struct amdgpu_ring gfx_ring[AMDGPU_MAX_GFX_RINGS];
>> +       struct drm_gpu_scheduler *gfx_sched_list[AMDGPU_MAX_GFX_RINGS];
>>         unsigned                        num_gfx_rings;
>>         struct amdgpu_ring compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
>> +       struct drm_gpu_scheduler *compute_sched_list[AMDGPU_MAX_COMPUTE_RINGS];
>>         unsigned                        num_compute_rings;
>>         struct amdgpu_irq_src           eop_irq;
>>         struct amdgpu_irq_src           priv_reg_irq;
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

