From: Arunpravin Paneer Selvam <arunpravin.paneerselvam@amd.com>
To: xinhui pan <xinhui.pan@amd.com>, amd-gfx@lists.freedesktop.org
Cc: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	matthew.auld@intel.com, daniel@ffwll.ch,
	christian.koenig@amd.com
Subject: Re: [PATCH v3] drm: Optimise for continuous memory allocation
Date: Mon, 28 Nov 2022 22:39:39 +0530	[thread overview]
Message-ID: <90e62dcc-49f2-84d7-d845-1d05c9f3dd08@amd.com>
In-Reply-To: <20221128063419.101586-1-xinhui.pan@amd.com>

Hi Xinhui,

On 11/28/2022 12:04 PM, xinhui pan wrote:
> Currently, drm-buddy does not have full knowledge of continuous memory.
>
> Let's consider the scenario below.
> order 1:    L		    R
> order 0: LL	LR	RL	RR
> For an order-1 allocation, it can offer L, R, or LR+RL.
>
> For now, we only implement the L or R case for continuous memory allocation.
> So this patch aims to implement the LR+RL case.
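
To make the scenario concrete, here is a minimal, hypothetical test
sketch of the LR+RL case against the drm_buddy API. The function name
and the exact sizes are illustrative only, not part of this patch:

#include <linux/list.h>
#include <linux/sizes.h>
#include <drm/drm_buddy.h>

/* Hypothetical sketch: one order-2 root of 4 KiB chunks. */
static int test_lr_rl_pairing(void)
{
	struct drm_buddy mm;
	LIST_HEAD(pins);
	LIST_HEAD(blocks);
	int err;

	/* 16 KiB pool, 4 KiB chunks: one order-2 root. */
	err = drm_buddy_init(&mm, SZ_16K, SZ_4K);
	if (err)
		return err;

	/* Pin LL ([0, 4K)) and RR ([12K, 16K)); only LR and RL stay free. */
	err = drm_buddy_alloc_blocks(&mm, 0, SZ_4K, SZ_4K, SZ_4K,
				     &pins, DRM_BUDDY_RANGE_ALLOCATION);
	if (!err)
		err = drm_buddy_alloc_blocks(&mm, SZ_16K - SZ_4K, SZ_16K,
					     SZ_4K, SZ_4K, &pins,
					     DRM_BUDDY_RANGE_ALLOCATION);

	/*
	 * A contiguous 8 KiB request with min_page_size = 8 KiB:
	 * -ENOSPC before this patch, the LR+RL pair with it.
	 */
	if (!err)
		err = drm_buddy_alloc_blocks(&mm, 0, SZ_16K, SZ_8K,
					     SZ_8K, &blocks, 0);

	drm_buddy_free_list(&mm, &blocks);
	drm_buddy_free_list(&mm, &pins);
	drm_buddy_fini(&mm);
	return err;
}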
>
> Signed-off-by: xinhui pan <xinhui.pan@amd.com>
> ---
> change from v2:
> search for a continuous block in a nearby root if needed
>
> change from v1:
> implement top-down continuous allocation
> ---
>   drivers/gpu/drm/drm_buddy.c | 78 +++++++++++++++++++++++++++++++++----
>   1 file changed, 71 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
> index 11bb59399471..ff58eb3136d2 100644
> --- a/drivers/gpu/drm/drm_buddy.c
> +++ b/drivers/gpu/drm/drm_buddy.c
> @@ -386,6 +386,58 @@ alloc_range_bias(struct drm_buddy *mm,
>   	return ERR_PTR(err);
>   }
>   
> +static struct drm_buddy_block *
> +find_continuous_blocks(struct drm_buddy *mm,
> +		       int order,
> +		       unsigned long flags,
> +		       struct drm_buddy_block **rn)
> +{
> +	struct list_head *head = &mm->free_list[order];
> +	struct drm_buddy_block *node, *parent, *free_node, *max_node = NULL;
NIT: We usually name the variable *block or *<prefix>_block in drm buddy,
and *node or *<prefix>_node in the drm mm manager.
> +	int i;
> +
> +	list_for_each_entry(free_node, head, link) {
> +		if (max_node) {
> +			if (!(flags & DRM_BUDDY_TOPDOWN_ALLOCATION))
> +				break;
> +
> +			if (drm_buddy_block_offset(free_node) <
> +			    drm_buddy_block_offset(max_node))
> +				continue;
> +		}
> +
> +		parent = free_node;
> +		do {
> +			node = parent;
> +			parent = parent->parent;
> +		} while (parent && parent->right == node);
> +
> +		if (!parent) {
> +			for (i = 0; i < mm->n_roots - 1; i++)
> +				if (mm->roots[i] == node)
> +					break;
> +			if (i == mm->n_roots - 1)
> +				continue;
> +			node = mm->roots[i + 1];
> +		} else {
> +			node = parent->right;
> +		}
> +
> +		while (drm_buddy_block_is_split(node))
> +			node = node->left;
> +
> +		if (drm_buddy_block_is_free(node) &&
> +		    drm_buddy_block_order(node) == order) {
> +			*rn = node;
> +			max_node = free_node;
> +			BUG_ON(drm_buddy_block_offset(node) !=
> +				drm_buddy_block_offset(max_node) +
> +				drm_buddy_block_size(mm, max_node));
> +		}
> +	}
> +	return max_node;
> +}
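
For readers following the walk above: it climbs while the current block
is a right child, steps across to the sibling (or, in this patch, on to
the next root), then descends left to the leaf that starts exactly where
free_node ends. A standalone sketch of the single-root part of that
walk, assuming only the parent/left/right fields of struct
drm_buddy_block:

/*
 * Illustration only: return the leftmost leaf of the subtree
 * immediately to the right of @block, or NULL when @block ends its
 * root. (The patch additionally falls through to the next root.)
 */
static struct drm_buddy_block *
right_neighbour(struct drm_buddy_block *block)
{
	struct drm_buddy_block *parent = block->parent;

	/* Climb while @block is a right child. */
	while (parent && parent->right == block) {
		block = parent;
		parent = parent->parent;
	}
	if (!parent)
		return NULL; /* hit the right edge of the root */

	/* Cross to the right sibling, then take leftmost children. */
	block = parent->right;
	while (drm_buddy_block_is_split(block))
		block = block->left;

	return block;
}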
> +
>   static struct drm_buddy_block *
>   get_maxblock(struct list_head *head)
>   {
> @@ -637,7 +689,7 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
>   			   struct list_head *blocks,
>   			   unsigned long flags)
>   {
> -	struct drm_buddy_block *block = NULL;
> +	struct drm_buddy_block *block = NULL, *rblock = NULL;
>   	unsigned int min_order, order;
>   	unsigned long pages;
>   	LIST_HEAD(allocated);
> @@ -689,17 +741,29 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
>   				break;
>   
>   			if (order-- == min_order) {
> +				if (!(flags & DRM_BUDDY_RANGE_ALLOCATION) &&
> +				    min_order != 0 && pages == BIT(order + 1)) {
> +					block = find_continuous_blocks(mm,
> +								       order,
> +								       flags,
> +								       &rblock);
> +					if (block)
> +						break;
> +				}
>   				err = -ENOSPC;
>   				goto err_free;
>   			}
>   		} while (1);
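
A worked example of when this new path triggers, with hypothetical
sizes (chunk_size = 4 KiB, size = 16 KiB, min_page_size = 16 KiB):

/*
 * pages     = size >> ilog2(chunk_size)                = 4
 * min_order = ilog2(min_page_size) - ilog2(chunk_size) = 2
 *
 * Pass 1: order = 2, but no free order-2 block exists.
 * Then order-- == min_order and pages == BIT(order + 1) (4 == 4),
 * so find_continuous_blocks() runs once at order 1, i.e. it looks
 * for two adjacent free 8 KiB blocks instead of one 16 KiB block.
 */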
>   
> -		mark_allocated(block);
> -		mm->avail -= drm_buddy_block_size(mm, block);
> -		kmemleak_update_trace(block);
> -		list_add_tail(&block->link, &allocated);
> -
> -		pages -= BIT(order);
> +		do {
> +			mark_allocated(block);
> +			mm->avail -= drm_buddy_block_size(mm, block);
> +			kmemleak_update_trace(block);
> +			list_add_tail(&block->link, &allocated);
> +			pages -= BIT(order);
> +			block = rblock;
> +			rblock = NULL;
> +		} while (block);
I think with this approach, if we are lucky enough, we may get
contiguous blocks one order level down, as an RL combination from the
freelist?

Regards,
Arun
>   
>   		if (!pages)
>   			break;

