From: "Christian König" <christian.koenig@amd.com>
To: Qiang Yu <yuq825@gmail.com>
Cc: Qiang Yu <qiang.yu@amd.com>,
	Linux Memory Management List <linux-mm@kvack.org>,
	cgroups@vger.kernel.org,
	dri-devel <dri-devel@lists.freedesktop.org>,
	David Airlie <airlied@linux.ie>, Kenny Ho <kenny.ho@amd.com>,
	Michal Hocko <mhocko@kernel.org>, Huang Rui <ray.huang@amd.com>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
Date: Sun, 19 Jan 2020 14:03:58 +0100	[thread overview]
Message-ID: <0164fcbf-393a-237f-69d7-aa26dabe6ad9@amd.com> (raw)
In-Reply-To: <CAKGbVbv7P-S_NUYpqQ5opDbXBHRb4rq6m95372nOuZ8kMvGnBQ@mail.gmail.com>

Am 19.01.20 um 03:47 schrieb Qiang Yu:
> On Mon, Jan 13, 2020 at 11:56 PM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 13.01.20 um 16:35 schrieb Qiang Yu:
>>> Charge TTM allocated system memory to memory cgroup which will
>>> limit the memory usage of a group of processes.
>> NAK to the whole approach. This belongs into the GEM or driver layer,
>> but not into TTM.
>>
> Sorry for responding late.
>
> The GEM layer seems not the proper place to handle this because:
> 1. it is not aware of the backing storage (system memory or device memory)
> unless we push this information up into GEM, which I think is not appropriate
> 2. system memory allocated by GEM with drm_gem_get_pages() is already
> charged to memcg; it is only the TTM system memory that is not charged to memcg

The key point is that we already discussed this on the mailing list, and 
GEM was agreed to be the right place for this.

That's the reason why the Intel developers already proposed a way to 
expose the buffer location in GEM.

Please sync up with Kenny who is leading the development efforts and 
with the Intel developers before warming up an old discussion again.

Adding that to TTM is an absolute no-go from my maintainer's perspective.

>
> Implementing it in a driver like amdgpu is an option. But since the problem is
> inside TTM, which does not charge the pages it allocates to memcg, wouldn't it
> be better to solve it in TTM so that all drivers using it can benefit? Or do
> you just think we should not rely on memcg for GPU system memory limitation?
>
>>> The memory is always charged to the control group of the task which
>>> created this buffer object, at the time it is created. For example,
>>> when a buffer is created by process A and exported to process B, and
>>> process B then populates the buffer, the memory is still charged to
>>> process A's memcg; if a buffer is created by process A while A is in
>>> memcg B, and A is then moved to memcg C before populating the buffer,
>>> the population will still charge memcg B.
>> This is actually the most common use case for graphics application where
>> the X server allocates most of the backing store.
>>
>> So we need better handling than just accounting the memory to whoever
>> allocated it first.
>>
> You mean applications that draw via the DRI2 and X11 protocols? I think it is
> still reasonable to charge the X server for that memory, because the X server
> allocates the buffer and shares it with the application; that is its design
> and implementation nature. With DRI3, the buffer is allocated by the
> application, which also suits this approach.

That is way too simplistic.

Again, we already discussed this, and the agreed compromise is to charge 
the application which is using the memory, not whoever allocated it.

So you need to add the charge on importing a buffer and not just when it 
is created.

Regards,
Christian.

>
> Regards,
> Qiang
>
>> Regards,
>> Christian.
>>
>>> Signed-off-by: Qiang Yu <qiang.yu@amd.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
>>>    drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
>>>    drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
>>>    include/drm/ttm/ttm_bo_api.h         |  5 +++++
>>>    include/drm/ttm/ttm_tt.h             |  4 ++++
>>>    5 files changed, 39 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>> index 8d91b0428af1..4e64846ee523 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>> @@ -42,6 +42,7 @@
>>>    #include <linux/module.h>
>>>    #include <linux/atomic.h>
>>>    #include <linux/dma-resv.h>
>>> +#include <linux/memcontrol.h>
>>>
>>>    static void ttm_bo_global_kobj_release(struct kobject *kobj);
>>>
>>> @@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
>>>        if (!ttm_bo_uses_embedded_gem_object(bo))
>>>                dma_resv_fini(&bo->base._resv);
>>>        mutex_destroy(&bo->wu_mutex);
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->memcg)
>>> +             css_put(&bo->memcg->css);
>>> +#endif
>>>        bo->destroy(bo);
>>>        ttm_mem_global_free(&ttm_mem_glob, acc_size);
>>>    }
>>> @@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
>>>        }
>>>        atomic_inc(&ttm_bo_glob.bo_count);
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->type == ttm_bo_type_device)
>>> +             bo->memcg = mem_cgroup_driver_get_from_current();
>>> +#endif
>>> +
>>>        /*
>>>         * For ttm_bo_type_device buffers, allocate
>>>         * address space from the device.
>>> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> index b40a4678c296..ecd1831a1d38 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> @@ -42,7 +42,7 @@
>>>    #include <linux/seq_file.h> /* for seq_printf */
>>>    #include <linux/slab.h>
>>>    #include <linux/dma-mapping.h>
>>> -
>>> +#include <linux/memcontrol.h>
>>>    #include <linux/atomic.h>
>>>
>>>    #include <drm/ttm/ttm_bo_driver.h>
>>> @@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
>>>        ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                      ttm->caching_state);
>>>        ttm->state = tt_unpopulated;
>>> +
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg)
>>> +             mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
>>> +#endif
>>>    }
>>>
>>>    int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>> @@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>>        if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
>>>                return -ENOMEM;
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg) {
>>> +             gfp_t gfp_flags = GFP_USER;
>>> +             if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
>>> +                     gfp_flags |= __GFP_RETRY_MAYFAIL;
>>> +             ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
>>> +             if (ret)
>>> +                     return ret;
>>> +     }
>>> +#endif
>>> +
>>>        ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                            ttm->caching_state);
>>>        if (unlikely(ret != 0)) {
>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
>>> index e0e9b4f69db6..1acb153084e1 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>> @@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>>        ttm->state = tt_unpopulated;
>>>        ttm->swap_storage = NULL;
>>>        ttm->sg = bo->sg;
>>> +#ifdef CONFIG_MEMCG
>>> +     ttm->memcg = bo->memcg;
>>> +#endif
>>>    }
>>>
>>>    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>> index 65e399d280f7..95a08e81a73e 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -54,6 +54,8 @@ struct ttm_place;
>>>
>>>    struct ttm_lru_bulk_move;
>>>
>>> +struct mem_cgroup;
>>> +
>>>    /**
>>>     * struct ttm_bus_placement
>>>     *
>>> @@ -180,6 +182,9 @@ struct ttm_buffer_object {
>>>        void (*destroy) (struct ttm_buffer_object *);
>>>        unsigned long num_pages;
>>>        size_t acc_size;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>
>>>        /**
>>>        * Members not needing protection.
>>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
>>> index c0e928abf592..10fb5a557b95 100644
>>> --- a/include/drm/ttm/ttm_tt.h
>>> +++ b/include/drm/ttm/ttm_tt.h
>>> @@ -33,6 +33,7 @@ struct ttm_tt;
>>>    struct ttm_mem_reg;
>>>    struct ttm_buffer_object;
>>>    struct ttm_operation_ctx;
>>> +struct mem_cgroup;
>>>
>>>    #define TTM_PAGE_FLAG_WRITE           (1 << 3)
>>>    #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
>>> @@ -116,6 +117,9 @@ struct ttm_tt {
>>>                tt_unbound,
>>>                tt_unpopulated,
>>>        } state;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>    };
>>>
>>>    /**
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel



Thread overview: 19+ messages
2020-01-13 15:35 [PATCH RFC 0/3] mm/memcontrol drm/ttm: charge ttm buffer backed by system memory Qiang Yu
2020-01-13 15:35 ` Qiang Yu
2020-01-13 15:35 ` Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 1/3] mm: memcontrol: add mem_cgroup_(un)charge_drvmem Qiang Yu
2020-01-13 15:35   ` Qiang Yu
2020-01-13 15:35   ` Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 2/3] mm: memcontrol: record driver memory statistics Qiang Yu
2020-01-13 15:35   ` Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt Qiang Yu
2020-01-13 15:35   ` Qiang Yu
2020-01-13 15:55   ` Christian König
2020-01-13 15:55     ` Christian König
2020-01-13 15:55     ` Christian König
2020-01-19  2:47     ` Qiang Yu
2020-01-19  2:47       ` Qiang Yu
2020-01-19  2:47       ` Qiang Yu
2020-01-19 13:03       ` Christian König [this message]
2020-01-19 13:03         ` Christian König
2020-01-19 13:03         ` Christian König
