From: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
matthew.auld@intel.com, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 02/10] drm/ttm: flip over the range manager to self allocated nodes
Date: Wed, 2 Jun 2021 17:28:35 +0200 [thread overview]
Message-ID: <d6f0b8e1-f4a2-cbbd-8b0c-6700ae4f7e58@shipmail.org> (raw)
In-Reply-To: <55f59f1e-f988-f58a-4e3f-7d419a846ae6@gmail.com>
Hi!
On 6/2/21 4:17 PM, Christian König wrote:
> Am 02.06.21 um 16:13 schrieb Thomas Hellström (Intel):
>>
>> On 6/2/21 3:07 PM, Christian König wrote:
>>>
>>>
>>> Am 02.06.21 um 14:33 schrieb Thomas Hellström (Intel):
>>>>
>>>> On 6/2/21 2:11 PM, Christian König wrote:
>>>>> Am 02.06.21 um 13:44 schrieb Thomas Hellström (Intel):
>>>>>>
>>>>>> On 6/2/21 12:09 PM, Christian König wrote:
>>>>>>> Start with the range manager to make the resource object the base
>>>>>>> class for the allocated nodes.
>>>>>>>
>>>>>>> While at it cleanup a lot of the code around that.
>>>>>>>
>>>>>>> Signed-off-by: Christian König <christian.koenig@amd.com>
>>>>>>> Reviewed-by: Matthew Auld <matthew.auld@intel.com>
>>>>>>> ---
>>>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 +
>>>>>>> drivers/gpu/drm/drm_gem_vram_helper.c | 2 +
>>>>>>> drivers/gpu/drm/nouveau/nouveau_ttm.c | 2 +
>>>>>>> drivers/gpu/drm/qxl/qxl_ttm.c | 1 +
>>>>>>> drivers/gpu/drm/radeon/radeon_ttm.c | 1 +
>>>>>>> drivers/gpu/drm/ttm/ttm_range_manager.c | 56 ++++++++++++++++++-------
>>>>>>> drivers/gpu/drm/ttm/ttm_resource.c | 26 ++++++++----
>>>>>>> include/drm/ttm/ttm_bo_driver.h | 26 ------------
>>>>>>> include/drm/ttm/ttm_range_manager.h | 43 +++++++++++++++++++
>>>>>>> include/drm/ttm/ttm_resource.h | 3 ++
>>>>>>> 10 files changed, 111 insertions(+), 50 deletions(-)
>>>>>>> create mode 100644 include/drm/ttm/ttm_range_manager.h
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>>>>> index 69db89261650..df1f185faae9 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>>>>> @@ -45,6 +45,7 @@
>>>>>>> #include <drm/ttm/ttm_bo_api.h>
>>>>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>>>>> #include <drm/ttm/ttm_placement.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> #include <drm/amdgpu_drm.h>
>>>>>>> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
>>>>>>> index 83e7258c7f90..17a4c5d47b6a 100644
>>>>>>> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
>>>>>>> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
>>>>>>> @@ -17,6 +17,8 @@
>>>>>>> #include <drm/drm_prime.h>
>>>>>>> #include <drm/drm_simple_kms_helper.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> +
>>>>>>> static const struct drm_gem_object_funcs drm_gem_vram_object_funcs;
>>>>>>> /**
>>>>>>> diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>>>>>> index 65430912ff72..b08b8efeefba 100644
>>>>>>> --- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>>>>>> +++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
>>>>>>> @@ -26,6 +26,8 @@
>>>>>>> #include <linux/limits.h>
>>>>>>> #include <linux/swiotlb.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> +
>>>>>>> #include "nouveau_drv.h"
>>>>>>> #include "nouveau_gem.h"
>>>>>>> #include "nouveau_mem.h"
>>>>>>> diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c
>>>>>>> index 8aa87b8edb9c..19fd39d9a00c 100644
>>>>>>> --- a/drivers/gpu/drm/qxl/qxl_ttm.c
>>>>>>> +++ b/drivers/gpu/drm/qxl/qxl_ttm.c
>>>>>>> @@ -32,6 +32,7 @@
>>>>>>> #include <drm/ttm/ttm_bo_api.h>
>>>>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>>>>> #include <drm/ttm/ttm_placement.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> #include "qxl_drv.h"
>>>>>>> #include "qxl_object.h"
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
>>>>>>> index cdffa9b65108..ad2a5a791bba 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
>>>>>>> @@ -45,6 +45,7 @@
>>>>>>> #include <drm/ttm/ttm_bo_api.h>
>>>>>>> #include <drm/ttm/ttm_bo_driver.h>
>>>>>>> #include <drm/ttm/ttm_placement.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> #include "radeon_reg.h"
>>>>>>> #include "radeon.h"
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_range_manager.c b/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> index b9d5da6e6a81..ce5d07ca384c 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_range_manager.c
>>>>>>> @@ -29,12 +29,13 @@
>>>>>>> * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
>>>>>>> */
>>>>>>> -#include <drm/ttm/ttm_bo_driver.h>
>>>>>>> +#include <drm/ttm/ttm_device.h>
>>>>>>> #include <drm/ttm/ttm_placement.h>
>>>>>>> +#include <drm/ttm/ttm_range_manager.h>
>>>>>>> +#include <drm/ttm/ttm_bo_api.h>
>>>>>>> #include <drm/drm_mm.h>
>>>>>>> #include <linux/slab.h>
>>>>>>> #include <linux/spinlock.h>
>>>>>>> -#include <linux/module.h>
>>>>>>> /*
>>>>>>> * Currently we use a spinlock for the lock, but a mutex *may* be
>>>>>>> @@ -60,8 +61,8 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man,
>>>>>>> struct ttm_resource *mem)
>>>>>>> {
>>>>>>> struct ttm_range_manager *rman = to_range_manager(man);
>>>>>>> + struct ttm_range_mgr_node *node;
>>>>>>> struct drm_mm *mm = &rman->mm;
>>>>>>> - struct drm_mm_node *node;
>>>>>>> enum drm_mm_insert_mode mode;
>>>>>>> unsigned long lpfn;
>>>>>>> int ret;
>>>>>>> @@ -70,7 +71,7 @@ static int ttm_range_man_alloc(struct ttm_resource_manager *man,
>>>>>>> if (!lpfn)
>>>>>>> lpfn = man->size;
>>>>>>> - node = kzalloc(sizeof(*node), GFP_KERNEL);
>>>>>>> + node = kzalloc(struct_size(node, mm_nodes, 1), GFP_KERNEL);
>>>>>>
>>>>>> I'm still a bit confused about the situation where a driver
>>>>>> wants to attach private data to a struct ttm_resource without
>>>>>> having to re-implement its own range manager?
>>>>>>
>>>>>> Could be cached sg-tables, list of GPU bindings etc. Wouldn't
>>>>>> work with the above unless we have a void *driver_private member
>>>>>> on the struct ttm_resource. Is that the plan going forward here?
>>>>>> Or that the driver actually does the re-implementation?
>>>>>
>>>>> I don't really understand your concern here. The basic idea is
>>>>> that drivers use ttm_resource as a base class for their own
>>>>> implementation.
>>>>>
>>>>> See for example how nouveau does that:
>>>>>
>>>>> struct nouveau_mem {
>>>>> struct ttm_resource base;
>>>>> struct nouveau_cli *cli;
>>>>> u8 kind;
>>>>> u8 comp;
>>>>> struct nvif_mem mem;
>>>>> struct nvif_vma vma[2];
>>>>> };
>>>>>
>>>>> The range manager is helping driver specific resource managers
>>>>> which want to implement something drm_mm_nodes based. E.g.
>>>>> amdgpu_gtt_mgr and amdgpu_vram_mgr, but it can also be used stand
>>>>> alone.
>>>>>
>>>>> The ttm_range_mgr_node can then be used as base class for this
>>>>> functionality. I already want to move some more code from
>>>>> amdgpu_vram_mgr.c into the range manager, but that is just minor
>>>>> cleanup work.
>>>>>
>>>> Sure but if you embed a ttm_range_mgr_node in your struct
>>>> i915_resource, and wanted to use the ttm range manager for it, it
>>>> would allocate a struct ttm_range_mgr_node rather than a struct
>>>> i915_resource? Or am I missing something?
>>>
>>> Yes, that's the general idea I'm aiming for. I'm just not fully
>>> there yet.
>>
>> Hmm, I don't fully understand the reply. I described a buggy scenario,
>> and you replied that that's what we're targeting?
>
> Ok, I don't seem to understand what you mean here. What is buggy on that?
The buggy thing I'm trying to describe is a scenario where I want to
have a struct i915_ttm_resource which embeds a struct
ttm_range_mgr_node, but there is no way I can tell the generic ttm range
manager to allocate a struct i915_ttm_resource instead of a struct
ttm_range_mgr_node.
So what I want to be able to do: I have
struct i915_ttm_resource {
struct i915_gpu_bindings gpu_bindings;
struct ttm_range_mgr_node range_node;
};
Now I want to be able to share common code as much as possible and use
the generic ttm_range_manager here. How would I go about doing that with
the proposed changes?
Thanks,
Thomas
>
>> I assume you mean we're going to get an init() method for the range
>> manager, and a destroy() method for the struct ttm_resource?
>
> Well the ttm_range_manager is just another component implementing some
> functionality by extending the ttm_resource object.
>
> We currently don't have a destroy function for the ttm_resource object
> because one isn't necessary at the moment. But I'm probably going to
> add one at some point.
>
> Regards,
> Christian.
>
>>
>> Thanks,
>>
>> Thomas
>>
>>