From: Qiang Yu <qiang.yu@amd.com>
To: linux-mm@kvack.org, cgroups@vger.kernel.org,
	dri-devel@lists.freedesktop.org
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>,
	Christian Koenig <christian.koenig@amd.com>,
	Huang Rui <ray.huang@amd.com>, David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>, Kenny Ho <kenny.ho@amd.com>,
	Qiang Yu <qiang.yu@amd.com>
Subject: [PATCH RFC 0/3] mm/memcontrol drm/ttm: charge ttm buffer backed by system memory
Date: Mon, 13 Jan 2020 23:35:40 +0800
Message-ID: <20200113153543.24957-1-qiang.yu@amd.com>

Buffers created by a GPU driver can be huge (often several MB, sometimes
hundreds or thousands of MB). Some GPU drivers call drm_gem_get_pages(),
which allocates these buffers from shmem, so they are already charged to
memcg. Other GPU drivers such as amdgpu use TTM, which allocates the
system-memory-backed buffers with plain alloc_pages() and so does not
charge memcg at all currently.
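
To illustrate the difference, here is a minimal sketch (not taken from
the series) of the uncharged TTM-style allocation path; the real code in
ttm_page_alloc.c goes through a pool and is more involved:

	static int sketch_alloc_bo_pages(struct page **pages,
					 unsigned int npages)
	{
		unsigned int i;

		/* alloc_pages() carries no memcg charge here, unlike
		 * the shmem pages from drm_gem_get_pages(). */
		for (i = 0; i < npages; i++) {
			pages[i] = alloc_pages(GFP_HIGHUSER, 0);
			if (!pages[i])
				return -ENOMEM; /* caller unwinds pages[0..i) */
		}
		return 0;
	}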

Unlike pure kernel memory, a GPU buffer needs to be mapped into user
space so that user space can fill it with data and commands before the
GPU hardware consumes it. So it is not appropriate to account it as memcg
kmem by adding __GFP_ACCOUNT to the alloc_pages() gfp flags.
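
The straightforward change this rules out would have been a single gfp
flag (sketch of the rejected approach):

	/* Rejected: __GFP_ACCOUNT charges the page as memcg kmem, but
	 * these pages get mapped to user space, so they are not pure
	 * kernel memory. */
	pages[i] = alloc_pages(GFP_HIGHUSER | __GFP_ACCOUNT, 0);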

Another reason is that the backing memory of a GPU buffer may be
allocated later, after the buffer object has been created, and possibly
even in a different process. So we need to record the memcg at buffer
object creation time and charge it later when the pages are actually
allocated.
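
A rough sketch of that record-then-charge scheme follows; the
mem_cgroup_(un)charge_drvmem names come from patch 1, but the exact
signatures, the memcg lookup, and the bo field shown here are
illustrative only:

	/* At buffer object creation: remember the creator's memcg. */
	bo->memcg = mem_cgroup_from_task(current);	/* illustrative */
	css_get(&bo->memcg->css);

	/* Later, when backing pages are allocated (possibly from another
	 * process), charge the recorded memcg, not the current task's. */
	if (mem_cgroup_charge_drvmem(bo->memcg, GFP_KERNEL, npages))
		return -ENOMEM;

	/* On buffer destruction: uncharge and drop the reference. */
	mem_cgroup_uncharge_drvmem(bo->memcg, npages);
	css_put(&bo->memcg->css);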

TTM also uses a page pool as a cache for write-combine/no-cache pages, so
adding a new GFP flag for alloc_pages() does not work either: pages
recycled from the pool never pass through alloc_pages() again, so a
gfp-based charge would stay with whichever cgroup first allocated them.
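
Concretely (comment-only sketch; ttm_get_pages()/ttm_put_pages() are the
pool entry points in ttm_page_alloc.c, arguments omitted here):

	ttm_put_pages(...);	/* task A frees; pages sit in the WC pool,
				 * still carrying any gfp-based charge */
	ttm_get_pages(...);	/* task B reuses them; no alloc_pages()
				 * call, so no chance to charge B's memcg */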

Qiang Yu (3):
  mm: memcontrol: add mem_cgroup_(un)charge_drvmem
  mm: memcontrol: record driver memory statistics
  drm/ttm: support memcg for ttm_tt

 drivers/gpu/drm/ttm/ttm_bo.c         | 10 +++++
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 ++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c         |  3 ++
 include/drm/ttm/ttm_bo_api.h         |  5 +++
 include/drm/ttm/ttm_tt.h             |  4 ++
 include/linux/memcontrol.h           | 22 +++++++++++
 mm/memcontrol.c                      | 58 ++++++++++++++++++++++++++++
 7 files changed, 119 insertions(+), 1 deletion(-)

-- 
2.17.1

Thread overview: 19+ messages

2020-01-13 15:35 [PATCH RFC 0/3] mm/memcontrol drm/ttm: charge ttm buffer backed by system memory Qiang Yu [this message]
2020-01-13 15:35 ` [PATCH RFC 1/3] mm: memcontrol: add mem_cgroup_(un)charge_drvmem Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 2/3] mm: memcontrol: record driver memory statistics Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt Qiang Yu
2020-01-13 15:55   ` Christian König
2020-01-19  2:47     ` Qiang Yu
2020-01-19 13:03       ` Christian König