All of lore.kernel.org
 help / color / mirror / Atom feed
From: "Christian König" <christian.koenig@amd.com>
To: John Stultz <john.stultz@linaro.org>, Daniel Vetter <daniel@ffwll.ch>
Cc: lkml <linux-kernel@vger.kernel.org>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Liam Mark" <lmark@codeaurora.org>,
	"Chris Goldsworthy" <cgoldswo@codeaurora.org>,
	"Laura Abbott" <labbott@kernel.org>,
	"Brian Starkey" <Brian.Starkey@arm.com>,
	"Hridya Valsaraju" <hridya@google.com>,
	"Suren Baghdasaryan" <surenb@google.com>,
	"Sandeep Patil" <sspatil@google.com>,
	"Daniel Mentz" <danielmentz@google.com>,
	"Ørjan Eide" <orjan.eide@arm.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"Ezequiel Garcia" <ezequiel@collabora.com>,
	"Simon Ser" <contact@emersion.fr>,
	"James Jones" <jajones@nvidia.com>,
	linux-media <linux-media@vger.kernel.org>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation
Date: Wed, 7 Jul 2021 08:52:12 +0200	[thread overview]
Message-ID: <f54cb539-c364-7d14-2c7c-b2b0477c125e@amd.com> (raw)
In-Reply-To: <CALAqxLVJw=0sEWxdsZ7j2QvHFDUtym3HSpkgqGdQJVayssMNeA@mail.gmail.com>

Am 06.07.21 um 23:19 schrieb John Stultz:
> On Tue, Jul 6, 2021 at 2:15 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Tue, Jul 6, 2021 at 11:04 PM John Stultz <john.stultz@linaro.org> wrote:
>>> On Wed, Jun 30, 2021 at 11:52 PM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> Am 01.07.21 um 00:24 schrieb John Stultz:
>>>>> On Wed, Jun 30, 2021 at 2:10 AM Christian König
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 30.06.21 um 03:34 schrieb John Stultz:
>>>>>>> +static unsigned long page_pool_size; /* max size of the pool */
>>>>>>> +
>>>>>>> +MODULE_PARM_DESC(page_pool_size, "Number of pages in the drm page pool");
>>>>>>> +module_param(page_pool_size, ulong, 0644);
>>>>>>> +
>>>>>>> +static atomic_long_t nr_managed_pages;
>>>>>>> +
>>>>>>> +static struct mutex shrinker_lock;
>>>>>>> +static struct list_head shrinker_list;
>>>>>>> +static struct shrinker mm_shrinker;
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * drm_page_pool_set_max - Sets maximum size of all pools
>>>>>>> + *
>>>>>>> + * Sets the maximum number of pages allows in all pools.
>>>>>>> + * This can only be set once, and the first caller wins.
>>>>>>> + */
>>>>>>> +void drm_page_pool_set_max(unsigned long max)
>>>>>>> +{
>>>>>>> +     if (!page_pool_size)
>>>>>>> +             page_pool_size = max;
>>>>>>> +}
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * drm_page_pool_get_max - Maximum size of all pools
>>>>>>> + *
>>>>>>> + * Return the maximum number of pages allows in all pools
>>>>>>> + */
>>>>>>> +unsigned long drm_page_pool_get_max(void)
>>>>>>> +{
>>>>>>> +     return page_pool_size;
>>>>>>> +}
>>>>>> Well in general I don't think it is a good idea to have getters/setters
>>>>>> for one line functionality, similar applies to locking/unlocking the
>>>>>> mutex below.
>>>>>>
>>>>>> Then in this specific case what those functions do is to aid
>>>>>> initializing the general pool manager and that in turn should absolutely
>>>>>> not be exposed.
>>>>>>
>>>>>> The TTM pool manager exposes this as function because initializing the
>>>>>> pool manager is done in one part of the module and calculating the
>>>>>> default value for the pages in another one. But that is not something I
>>>>>> would like to see here.
>>>>> So, I guess I'm not quite clear on what you'd like to see...
>>>>>
>>>>> Part of what I'm balancing here is the TTM subsystem normally sets a
>>>>> global max size, whereas the old ION pool didn't have caps (instead
>>>>> just relying on the shrinker when needed).
>>>>> So I'm trying to come up with a solution that can serve both uses. So
>>>>> I've got this drm_page_pool_set_max() function to optionally set the
>>>>> maximum value, which is called in the TTM initialization path or set
>>>>> the boot argument. But for systems that use the dmabuf system heap,
>>>>> but don't use TTM, no global limit is enforced.
>>>> Yeah, exactly that's what I'm trying to prevent.
>>>>
>>>> See if we have the same functionality used by different use cases we
>>>> should not have different behavior depending on what drivers are loaded.
>>>>
>>>> Is it a problem if we restrict the ION pool to 50% of system memory as
>>>> well? If yes than I would rather drop the limit from TTM and only rely
>>>> on the shrinker there as well.
>>> Would having the default value as a config option (still overridable
>>> via boot argument) be an acceptable solution?
>> We're also trying to get ttm over to the shrinker model, and a first
>> cut of that even landed, but didn't really work out yet. So maybe just
>> aiming for the shrinker? I do agree this should be consistent across
>> the board, otherwise we're just sharing code but not actually sharing
>> functionality, which is a recipe for disaster because one side will
>> end up breaking the other side's use-case.
> Fair enough, maybe it would be best to remove the default limit, but
> leave the logic so it can still be set via the boot argument?

Yeah, that would work for me and the shrinker implementation should 
already be good enough.

Regards,
Christian.

>
> thanks
> -john


WARNING: multiple messages have this Message-ID (diff)
From: "Christian König" <christian.koenig@amd.com>
To: John Stultz <john.stultz@linaro.org>, Daniel Vetter <daniel@ffwll.ch>
Cc: "Sandeep Patil" <sspatil@google.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Ezequiel Garcia" <ezequiel@collabora.com>,
	"Robin Murphy" <robin.murphy@arm.com>,
	"James Jones" <jajones@nvidia.com>,
	lkml <linux-kernel@vger.kernel.org>,
	"Liam Mark" <lmark@codeaurora.org>,
	"Laura Abbott" <labbott@kernel.org>,
	"Chris Goldsworthy" <cgoldswo@codeaurora.org>,
	"Hridya Valsaraju" <hridya@google.com>,
	"Ørjan Eide" <orjan.eide@arm.com>,
	linux-media <linux-media@vger.kernel.org>,
	"Suren Baghdasaryan" <surenb@google.com>,
	"Daniel Mentz" <danielmentz@google.com>
Subject: Re: [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation
Date: Wed, 7 Jul 2021 08:52:12 +0200	[thread overview]
Message-ID: <f54cb539-c364-7d14-2c7c-b2b0477c125e@amd.com> (raw)
In-Reply-To: <CALAqxLVJw=0sEWxdsZ7j2QvHFDUtym3HSpkgqGdQJVayssMNeA@mail.gmail.com>

Am 06.07.21 um 23:19 schrieb John Stultz:
> On Tue, Jul 6, 2021 at 2:15 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Tue, Jul 6, 2021 at 11:04 PM John Stultz <john.stultz@linaro.org> wrote:
>>> On Wed, Jun 30, 2021 at 11:52 PM Christian König
>>> <christian.koenig@amd.com> wrote:
>>>> Am 01.07.21 um 00:24 schrieb John Stultz:
>>>>> On Wed, Jun 30, 2021 at 2:10 AM Christian König
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> Am 30.06.21 um 03:34 schrieb John Stultz:
>>>>>>> +static unsigned long page_pool_size; /* max size of the pool */
>>>>>>> +
>>>>>>> +MODULE_PARM_DESC(page_pool_size, "Number of pages in the drm page pool");
>>>>>>> +module_param(page_pool_size, ulong, 0644);
>>>>>>> +
>>>>>>> +static atomic_long_t nr_managed_pages;
>>>>>>> +
>>>>>>> +static struct mutex shrinker_lock;
>>>>>>> +static struct list_head shrinker_list;
>>>>>>> +static struct shrinker mm_shrinker;
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * drm_page_pool_set_max - Sets maximum size of all pools
>>>>>>> + *
>>>>>>> + * Sets the maximum number of pages allows in all pools.
>>>>>>> + * This can only be set once, and the first caller wins.
>>>>>>> + */
>>>>>>> +void drm_page_pool_set_max(unsigned long max)
>>>>>>> +{
>>>>>>> +     if (!page_pool_size)
>>>>>>> +             page_pool_size = max;
>>>>>>> +}
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * drm_page_pool_get_max - Maximum size of all pools
>>>>>>> + *
>>>>>>> + * Return the maximum number of pages allows in all pools
>>>>>>> + */
>>>>>>> +unsigned long drm_page_pool_get_max(void)
>>>>>>> +{
>>>>>>> +     return page_pool_size;
>>>>>>> +}
>>>>>> Well in general I don't think it is a good idea to have getters/setters
>>>>>> for one line functionality, similar applies to locking/unlocking the
>>>>>> mutex below.
>>>>>>
>>>>>> Then in this specific case what those functions do is to aid
>>>>>> initializing the general pool manager and that in turn should absolutely
>>>>>> not be exposed.
>>>>>>
>>>>>> The TTM pool manager exposes this as function because initializing the
>>>>>> pool manager is done in one part of the module and calculating the
>>>>>> default value for the pages in another one. But that is not something I
>>>>>> would like to see here.
>>>>> So, I guess I'm not quite clear on what you'd like to see...
>>>>>
>>>>> Part of what I'm balancing here is the TTM subsystem normally sets a
>>>>> global max size, whereas the old ION pool didn't have caps (instead
>>>>> just relying on the shrinker when needed).
>>>>> So I'm trying to come up with a solution that can serve both uses. So
>>>>> I've got this drm_page_pool_set_max() function to optionally set the
>>>>> maximum value, which is called in the TTM initialization path or set
>>>>> the boot argument. But for systems that use the dmabuf system heap,
>>>>> but don't use TTM, no global limit is enforced.
>>>> Yeah, exactly that's what I'm trying to prevent.
>>>>
>>>> See if we have the same functionality used by different use cases we
>>>> should not have different behavior depending on what drivers are loaded.
>>>>
>>>> Is it a problem if we restrict the ION pool to 50% of system memory as
>>>> well? If yes than I would rather drop the limit from TTM and only rely
>>>> on the shrinker there as well.
>>> Would having the default value as a config option (still overridable
>>> via boot argument) be an acceptable solution?
>> We're also trying to get ttm over to the shrinker model, and a first
>> cut of that even landed, but didn't really work out yet. So maybe just
>> aiming for the shrinker? I do agree this should be consistent across
>> the board, otherwise we're just sharing code but not actually sharing
>> functionality, which is a recipe for disaster because one side will
>> end up breaking the other side's use-case.
> Fair enough, maybe it would be best to remove the default limit, but
> leave the logic so it can still be set via the boot argument?

Yeah, that would work for me and the shrinker implementation should 
already be good enough.

Regards,
Christian.

>
> thanks
> -john


  reply	other threads:[~2021-07-07  6:52 UTC|newest]

Thread overview: 49+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-06-30  1:34 [PATCH v9 0/5] Generic page pool & deferred freeing for system dmabuf hea John Stultz
2021-06-30  1:34 ` John Stultz
2021-06-30  1:34 ` [PATCH v9 1/5] drm: Add a sharable drm page-pool implementation John Stultz
2021-06-30  1:34   ` John Stultz
2021-06-30  9:10   ` Christian König
2021-06-30  9:10     ` Christian König
2021-06-30 22:24     ` John Stultz
2021-06-30 22:24       ` John Stultz
2021-07-01  6:52       ` Christian König
2021-07-01  6:52         ` Christian König
2021-07-06 21:03         ` John Stultz
2021-07-06 21:03           ` John Stultz
2021-07-06 21:15           ` Daniel Vetter
2021-07-06 21:15             ` Daniel Vetter
2021-07-06 21:19             ` John Stultz
2021-07-06 21:19               ` John Stultz
2021-07-07  6:52               ` Christian König [this message]
2021-07-07  6:52                 ` Christian König
2021-07-07  6:38   ` page pools, was " Christoph Hellwig
2021-07-07  7:10     ` Christian König
2021-07-07  7:10       ` Christian König
2021-07-07  7:14       ` Christoph Hellwig
2021-07-07  9:32         ` Christian König
2021-07-07  9:32           ` Christian König
2021-07-07 19:42         ` John Stultz
2021-07-07 19:42           ` John Stultz
2021-07-07 19:42           ` John Stultz
2021-07-07 19:35     ` John Stultz
2021-07-07 19:35       ` John Stultz
2021-07-07 19:35       ` John Stultz
2021-07-08  4:20       ` Christoph Hellwig
2021-07-08  7:37         ` Christian König
2021-07-08  7:37           ` Christian König
2021-06-30  1:34 ` [PATCH v9 2/5] drm: ttm_pool: Rework ttm_pool to use drm_page_pool John Stultz
2021-06-30  1:34   ` John Stultz
2021-06-30  5:11   ` kernel test robot
2021-06-30  5:11     ` kernel test robot
2021-06-30  1:34 ` [PATCH v9 3/5] dma-buf: system_heap: Add drm pagepool support to system heap John Stultz
2021-06-30  1:34   ` John Stultz
2021-06-30  4:34   ` kernel test robot
2021-06-30  4:34     ` kernel test robot
2021-06-30  5:25   ` kernel test robot
2021-06-30  5:25     ` kernel test robot
2021-06-30  1:34 ` [PATCH v9 4/5] dma-buf: heaps: Add deferred-free-helper library code John Stultz
2021-06-30  1:34   ` John Stultz
2021-06-30  1:34 ` [PATCH v9 5/5] dma-buf: system_heap: Add deferred freeing to the system heap John Stultz
2021-06-30  1:34   ` John Stultz
2021-06-30  9:13 ` [PATCH v9 0/5] Generic page pool & deferred freeing for system dmabuf hea Christian König
2021-06-30  9:13   ` Christian König

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=f54cb539-c364-7d14-2c7c-b2b0477c125e@amd.com \
    --to=christian.koenig@amd.com \
    --cc=Brian.Starkey@arm.com \
    --cc=cgoldswo@codeaurora.org \
    --cc=contact@emersion.fr \
    --cc=daniel@ffwll.ch \
    --cc=danielmentz@google.com \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=ezequiel@collabora.com \
    --cc=hridya@google.com \
    --cc=jajones@nvidia.com \
    --cc=john.stultz@linaro.org \
    --cc=labbott@kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-media@vger.kernel.org \
    --cc=lmark@codeaurora.org \
    --cc=orjan.eide@arm.com \
    --cc=robin.murphy@arm.com \
    --cc=sspatil@google.com \
    --cc=sumit.semwal@linaro.org \
    --cc=surenb@google.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.