From: Marek Olšák <maraeo@gmail.com>
Subject: Re: [PATCH libdrm] amdgpu: add a faster BO list API
Date: Wed, 16 Jan 2019 09:31:51 -0500
To: Christian König <ckoenig.leichtzumerken@gmail.com>
Cc: amd-gfx mailing list <amd-gfx@lists.freedesktop.org>, Bas Nieuwenhuizen

On Wed, Jan 16, 2019, 7:55 AM Christian König <ckoenig.leichtzumerken@gmail.com> wrote:

> Well, if you ask me, we should have the following interface for
> negotiating memory management with the kernel:
>
> 1. We have per-process BOs which can't be shared between processes.
>
> Those are always valid and don't need to be mentioned in any BO list
> whatsoever.
>
> If we knew that a per-process BO is currently not in use, we could
> optionally tell the kernel, to make memory management more efficient.
>
> In other words, instead of a list of the BOs that are in use, we send
> down to the kernel a list of the BOs that are no longer in use, and
> only when we know that it is necessary, e.g. when a game or
> application overcommits.

Radeonsi doesn't use this, because this approach caused performance
degradation and also drops BO priorities.

Marek

> 2. We have shared BOs which are used by more than one process.
>
> Those are rare and should be added to the per-CS list of BOs in use.
>
>
> The whole BO list interface Marek tries to optimize here should be
> deprecated and no longer used.
>
> Regards,
> Christian.
>
> On 16.01.19 at 13:46, Bas Nieuwenhuizen wrote:
> > Some random questions:
> >
> > 1) In this discussion it was mentioned that some Vulkan drivers still
> > use the bo_list interface. I think that implies radv, as I think we're
> > still using bo_list. Is there any other API we should be using? (Also,
> > with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> > a global BO list instead of a per-command-buffer one, as we cannot
> > know all the referenced BOs anymore, but I'm not sure what the end
> > state will be.)
> >
> > 2) The other alternative mentioned was adding the buffers directly
> > into the submit ioctl. Is this the desired end state (though, as
> > above, I'm not sure how that works for Vulkan)? If yes, what is the
> > timeline for this, given that we need something in the interim?
> >
> > 3) Did we measure any performance benefit?
> >
> > In general I'd like to ack the raw BO list creation function, as this
> > interface seems easier to use. The two-arrays thing has always been
> > kind of a pain when we want to use e.g. built-in sort functions to
> > make sure we have no duplicate BOs, but I have some comments below.
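As an aside, here is a minimal sketch of that two-arrays point, for
illustration only (not part of the patch): with amdgpu_bo_list_create()
the handles and priorities live in two parallel arrays, so qsort()
cannot reorder them together, while the raw API takes a single array of
struct drm_amdgpu_bo_list_entry (bo_handle plus bo_priority, per the
kernel's amdgpu_drm.h), so sorting and deduplicating become one qsort()
plus a linear pass. The helper names below are hypothetical.

#include <stdint.h>
#include <stdlib.h>
#include <amdgpu_drm.h> /* struct drm_amdgpu_bo_list_entry; build with -I/usr/include/libdrm */

/* Sort by raw handle so that duplicate BOs become neighbors. */
static int entry_cmp(const void *a, const void *b)
{
	const struct drm_amdgpu_bo_list_entry *ea = a;
	const struct drm_amdgpu_bo_list_entry *eb = b;

	return (ea->bo_handle > eb->bo_handle) - (ea->bo_handle < eb->bo_handle);
}

/* Sort the entries and drop duplicates in place; returns the new count. */
static uint32_t dedup_entries(struct drm_amdgpu_bo_list_entry *entries,
			      uint32_t count)
{
	uint32_t i, n = 0;

	qsort(entries, count, sizeof(*entries), entry_cmp);
	for (i = 0; i < count; i++) {
		if (n && entries[n - 1].bo_handle == entries[i].bo_handle)
			continue; /* duplicate BO, keep the first entry */
		entries[n++] = entries[i];
	}
	return n;
}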
> >
> > On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák <maraeo@gmail.com> wrote:
> >> From: Marek Olšák <marek.olsak@amd.com>
> >>
> >> ---
> >>  amdgpu/amdgpu-symbol-check |  3 ++
> >>  amdgpu/amdgpu.h            | 56 +++++++++++++++++++++++++++++++++++++-
> >>  amdgpu/amdgpu_bo.c         | 36 ++++++++++++++++++++++++
> >>  amdgpu/amdgpu_cs.c         | 25 +++++++++++++++++
> >>  4 files changed, 119 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> >> index 6f5e0f95..96a44b40 100755
> >> --- a/amdgpu/amdgpu-symbol-check
> >> +++ b/amdgpu/amdgpu-symbol-check
> >> @@ -12,20 +12,22 @@ _edata
> >>  _end
> >>  _fini
> >>  _init
> >>  amdgpu_bo_alloc
> >>  amdgpu_bo_cpu_map
> >>  amdgpu_bo_cpu_unmap
> >>  amdgpu_bo_export
> >>  amdgpu_bo_free
> >>  amdgpu_bo_import
> >>  amdgpu_bo_inc_ref
> >> +amdgpu_bo_list_create_raw
> >> +amdgpu_bo_list_destroy_raw
> >>  amdgpu_bo_list_create
> >>  amdgpu_bo_list_destroy
> >>  amdgpu_bo_list_update
> >>  amdgpu_bo_query_info
> >>  amdgpu_bo_set_metadata
> >>  amdgpu_bo_va_op
> >>  amdgpu_bo_va_op_raw
> >>  amdgpu_bo_wait_for_idle
> >>  amdgpu_create_bo_from_user_mem
> >>  amdgpu_cs_chunk_fence_info_to_data
> >> @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >>  amdgpu_cs_destroy_syncobj
> >>  amdgpu_cs_export_syncobj
> >>  amdgpu_cs_fence_to_handle
> >>  amdgpu_cs_import_syncobj
> >>  amdgpu_cs_query_fence_status
> >>  amdgpu_cs_query_reset_state
> >>  amdgpu_query_sw_info
> >>  amdgpu_cs_signal_semaphore
> >>  amdgpu_cs_submit
> >>  amdgpu_cs_submit_raw
> >> +amdgpu_cs_submit_raw2
> >>  amdgpu_cs_syncobj_export_sync_file
> >>  amdgpu_cs_syncobj_import_sync_file
> >>  amdgpu_cs_syncobj_reset
> >>  amdgpu_cs_syncobj_signal
> >>  amdgpu_cs_syncobj_wait
> >>  amdgpu_cs_wait_fences
> >>  amdgpu_cs_wait_semaphore
> >>  amdgpu_device_deinitialize
> >>  amdgpu_device_initialize
> >>  amdgpu_find_bo_by_cpu_mapping
> >> diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> >> index dc51659a..5b800033 100644
> >> --- a/amdgpu/amdgpu.h
> >> +++ b/amdgpu/amdgpu.h
> >> @@ -35,20 +35,21 @@
> >>  #define _AMDGPU_H_
> >>
> >>  #include <stdint.h>
> >>  #include <stdbool.h>
> >>
> >>  #ifdef __cplusplus
> >>  extern "C" {
> >>  #endif
> >>
> >>  struct drm_amdgpu_info_hw_ip;
> >> +struct drm_amdgpu_bo_list_entry;
> >>
> >>  /*--------------------------------------------------------------------------*/
> >>  /* --------------------------- Defines ------------------------------------ */
> >>  /*--------------------------------------------------------------------------*/
> >>
> >>  /**
> >>   * Define max. number of Command Buffers (IB) which could be sent to the single
> >>   * hardware IP to accommodate CE/DE requirements
> >>   *
> >>   * \sa amdgpu_cs_ib_info
> >> @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
> >>   *                            and no GPU access is scheduled.
> >>   *                            1 GPU access is in fly or scheduled
> >>   *
> >>   * \return   0 - on success
> >>   *          <0 - Negative POSIX Error code
> >>   */
> >>  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >>                              uint64_t timeout_ns,
> >>                              bool *buffer_busy);
> >>
> >> +/**
> >> + * Creates a BO list handle for command submission.
> >> + *
> >> + * \param   dev               - \c [in] Device handle.
> >> + *                               See #amdgpu_device_initialize()
> >> + * \param   number_of_buffers - \c [in] Number of BOs in the list
> >> + * \param   buffers           - \c [in] List of BO handles
> >> + * \param   result            - \c [out] Created BO list handle
> >> + *
> >> + * \return   0 on success\n
> >> + *          <0 - Negative POSIX Error code
> >> + *
> >> + * \sa amdgpu_bo_list_destroy_raw()
> >> +*/
> >> +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> >> +                              uint32_t number_of_buffers,
> >> +                              struct drm_amdgpu_bo_list_entry *buffers,
> >> +                              uint32_t *result);
> > So AFAIU, drm_amdgpu_bo_list_entry takes a raw BO handle, but we
> > never get a raw BO handle from libdrm_amdgpu. How are we supposed to
> > fill it in?
> >
> > What do we win by having the raw handle for the bo_list? If we did
> > not return the raw handle, we would not need submit_raw2.
> >
> >> +
> >> +/**
> >> + * Destroys a BO list handle.
> >> + *
> >> + * \param   bo_list   - \c [in] BO list handle.
> >> + *
> >> + * \return   0 on success\n
> >> + *          <0 - Negative POSIX Error code
> >> + *
> >> + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
> >> +*/
> >> +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
> >> +
> >>  /**
> >>   * Creates a BO list handle for command submission.
> >>   *
> >>   * \param   dev                 - \c [in] Device handle.
> >>   *                                 See #amdgpu_device_initialize()
> >>   * \param   number_of_resources - \c [in] Number of BOs in the list
> >>   * \param   resources           - \c [in] List of BO handles
> >>   * \param   resource_prios      - \c [in] Optional priority for each handle
> >>   * \param   result              - \c [out] Created BO list handle
> >>   *
> >>   * \return   0 on success\n
> >>   *          <0 - Negative POSIX Error code
> >>   *
> >> - * \sa amdgpu_bo_list_destroy()
> >> + * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
> >>  */
> >>  int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >>                            uint32_t number_of_resources,
> >>                            amdgpu_bo_handle *resources,
> >>                            uint8_t *resource_prios,
> >>                            amdgpu_bo_list_handle *result);
> >>
> >>  /**
> >>   * Destroys a BO list handle.
> >>   *
> >> @@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
> >>  struct drm_amdgpu_cs_chunk_dep;
> >>  struct drm_amdgpu_cs_chunk_data;
> >>
> >>  int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >>                           amdgpu_context_handle context,
> >>                           amdgpu_bo_list_handle bo_list_handle,
> >>                           int num_chunks,
> >>                           struct drm_amdgpu_cs_chunk *chunks,
> >>                           uint64_t *seq_no);
> >>
> >> +/**
> >> + * Submit raw command submission to the kernel with a raw BO list handle.
> >> + *
> >> + * \param   dev            - \c [in] device handle
> >> + * \param   context        - \c [in] context handle for context id
> >> + * \param   bo_list_handle - \c [in] raw BO list handle (0 for none)
> >> + * \param   num_chunks     - \c [in] number of CS chunks to submit
> >> + * \param   chunks         - \c [in] array of CS chunks
> >> + * \param   seq_no         - \c [out] output sequence number for submission
> >> + *
> >> + * \return   0 on success\n
> >> + *          <0 - Negative POSIX Error code
> >> + *
> >> + * \sa amdgpu_bo_list_create_raw(), amdgpu_bo_list_destroy_raw()
> >> + */
> >> +int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> >> +                          amdgpu_context_handle context,
> >> +                          uint32_t bo_list_handle,
> >> +                          int num_chunks,
> >> +                          struct drm_amdgpu_cs_chunk *chunks,
> >> +                          uint64_t *seq_no);
> >> +
> >>  void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >>                                    struct drm_amdgpu_cs_chunk_dep *dep);
> >>  void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >>                                          struct drm_amdgpu_cs_chunk_data *data);
> >>
> >>  /**
> >>   * Reserve VMID
> >>   * \param   context - \c [in]  GPU Context
> >>   * \param   flags - \c [in]  TBD
> >>   *
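For context, a hedged usage sketch of the three new entry points
declared above, written against this patch only. It assumes that
amdgpu_bo_export() with amdgpu_bo_handle_type_kms yields the raw
KMS/GEM handle that drm_amdgpu_bo_list_entry::bo_handle expects, which
would be one possible answer to the question above about where raw
handles come from. Error handling is abbreviated and the function is
illustrative, not part of the patch.

#include <stdint.h>
#include <amdgpu.h>     /* libdrm_amdgpu API; build with -I/usr/include/libdrm */
#include <amdgpu_drm.h> /* struct drm_amdgpu_bo_list_entry, drm_amdgpu_cs_chunk */

int submit_with_raw_list(amdgpu_device_handle dev,
			 amdgpu_context_handle ctx,
			 amdgpu_bo_handle *bos, uint32_t num_bos,
			 struct drm_amdgpu_cs_chunk *chunks, int num_chunks,
			 uint64_t *seq_no)
{
	struct drm_amdgpu_bo_list_entry entries[num_bos]; /* VLA, assumes num_bos > 0 */
	uint32_t list_handle;
	uint32_t i;
	int r;

	for (i = 0; i < num_bos; i++) {
		/* Assumption: the KMS export type returns the raw GEM
		 * handle that the kernel's BO list ioctl expects. */
		r = amdgpu_bo_export(bos[i], amdgpu_bo_handle_type_kms,
				     &entries[i].bo_handle);
		if (r)
			return r;
		entries[i].bo_priority = 0;
	}

	r = amdgpu_bo_list_create_raw(dev, num_bos, entries, &list_handle);
	if (r)
		return r;

	r = amdgpu_cs_submit_raw2(dev, ctx, list_handle, num_chunks, chunks,
				  seq_no);

	/* The raw list is just a kernel handle with no userspace wrapper,
	 * so it has to be destroyed explicitly. */
	amdgpu_bo_list_destroy_raw(dev, list_handle);
	return r;
}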
> >> diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
> >> index c0f42e81..21bc73aa 100644
> >> --- a/amdgpu/amdgpu_bo.c
> >> +++ b/amdgpu/amdgpu_bo.c
> >> @@ -611,20 +611,56 @@ drm_public int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
> >>          pthread_mutex_lock(&dev->bo_table_mutex);
> >>          r = handle_table_insert(&dev->bo_handles, (*buf_handle)->handle,
> >>                                  *buf_handle);
> >>          pthread_mutex_unlock(&dev->bo_table_mutex);
> >>          if (r)
> >>                  amdgpu_bo_free(*buf_handle);
> >>  out:
> >>          return r;
> >>  }
> >>
> >> +drm_public int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> >> +                                         uint32_t number_of_buffers,
> >> +                                         struct drm_amdgpu_bo_list_entry *buffers,
> >> +                                         uint32_t *result)
> >> +{
> >> +        union drm_amdgpu_bo_list args;
> >> +        int r;
> >> +
> >> +        memset(&args, 0, sizeof(args));
> >> +        args.in.operation = AMDGPU_BO_LIST_OP_CREATE;
> >> +        args.in.bo_number = number_of_buffers;
> >> +        args.in.bo_info_size = sizeof(struct drm_amdgpu_bo_list_entry);
> >> +        args.in.bo_info_ptr = (uint64_t)(uintptr_t)buffers;
> >> +
> >> +        r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> >> +                                &args, sizeof(args));
> >> +        if (r)
> >> +                return r;
> >> +
> >> +        *result = args.out.list_handle;
> >> +        return 0;
> >> +}
> >> +
> >> +drm_public int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev,
> >> +                                          uint32_t bo_list)
> >> +{
> >> +        union drm_amdgpu_bo_list args;
> >> +
> >> +        memset(&args, 0, sizeof(args));
> >> +        args.in.operation = AMDGPU_BO_LIST_OP_DESTROY;
> >> +        args.in.list_handle = bo_list;
> >> +
> >> +        return drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> >> +                                   &args, sizeof(args));
> >> +}
> >> +
> >>  drm_public int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >>                                       uint32_t number_of_resources,
> >>                                       amdgpu_bo_handle *resources,
> >>                                       uint8_t *resource_prios,
> >>                                       amdgpu_bo_list_handle *result)
> >>  {
> >>          struct drm_amdgpu_bo_list_entry *list;
> >>          union drm_amdgpu_bo_list args;
> >>          unsigned i;
> >>          int r;
> >> diff --git a/amdgpu/amdgpu_cs.c b/amdgpu/amdgpu_cs.c
> >> index 3b8231aa..5bedf748 100644
> >> --- a/amdgpu/amdgpu_cs.c
> >> +++ b/amdgpu/amdgpu_cs.c
> >> @@ -724,20 +724,45 @@ drm_public int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >>          r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> >>                                  &cs, sizeof(cs));
> >>          if (r)
> >>                  return r;
> >>
> >>          if (seq_no)
> >>                  *seq_no = cs.out.handle;
> >>          return 0;
> >>  }
> >>
> >> +drm_public int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> >> +                                     amdgpu_context_handle context,
> >> +                                     uint32_t bo_list_handle,
> >> +                                     int num_chunks,
> >> +                                     struct drm_amdgpu_cs_chunk *chunks,
> >> +                                     uint64_t *seq_no)
> >> +{
> >> +        union drm_amdgpu_cs cs = {0};
> >> +        uint64_t *chunk_array;
> >> +        int i, r;
> >> +
> >> +        chunk_array = alloca(sizeof(uint64_t) * num_chunks);
> >> +        for (i = 0; i < num_chunks; i++)
> >> +                chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
> >> +        cs.in.chunks = (uint64_t)(uintptr_t)chunk_array;
> >> +        cs.in.ctx_id = context->id;
> >> +        cs.in.bo_list_handle = bo_list_handle;
> >> +        cs.in.num_chunks = num_chunks;
> >> +        r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> >> +                                &cs, sizeof(cs));
> >> +        if (!r && seq_no)
> >> +                *seq_no = cs.out.handle;
> >> +        return r;
> >> +}
> >> +
> >>  drm_public void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >>                                                     struct drm_amdgpu_cs_chunk_data *data)
> >>  {
> >>          data->fence_data.handle = fence_info->handle->handle;
> >>          data->fence_data.offset = fence_info->offset * sizeof(uint64_t);
> >>  }
> >>
> >>  drm_public void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >>                                               struct drm_amdgpu_cs_chunk_dep *dep)
> >>  {
> >> --
> >> 2.17.1
> >>
> >> _______________________________________________
> >> amd-gfx mailing list
> >> amd-gfx@lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
