From: Marek Olšák
Subject: Re: [PATCH libdrm] amdgpu: add a faster BO list API
Date: Wed, 16 Jan 2019 09:37:55 -0500
To: Bas Nieuwenhuizen
Cc: amd-gfx mailing list
References: <20190107193104.4361-1-maraeo@gmail.com>

On Wed, Jan 16, 2019, 7:46 AM Bas Nieuwenhuizen wrote:

> So random questions:
>
> 1) In this discussion it was mentioned that some Vulkan drivers still
> use the bo_list interface. I think that implies radv, as I think we're
> still using bo_list. Is there any other API we should be using? (Also,
> with VK_EXT_descriptor_indexing I suspect we'll be moving more towards
> a global bo list instead of a cmd buffer one, as we cannot know all
> the BOs referenced anymore, but I'm not sure what the end state here
> will be.)
>
> 2) The other alternative mentioned was adding the buffers directly
> into the submit ioctl. Is this the desired end state (though as above
> I'm not sure how that works for Vulkan)? If yes, what is the timeline
> for this that we need something in the interim?

Radeonsi already uses this.

> 3) Did we measure any performance benefit?
>
> In general I'd like to ack the raw bo list creation function, as
> this interface seems easier to use. The two arrays thing has always
> been kind of a pain when we want to use e.g. builtin sort functions
> to make sure we have no duplicate BOs, but I have some comments below.

The reason amdgpu was slower than radeon was because of this inefficient
bo list interface.
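As an aside, since the new function takes a single array of struct
drm_amdgpu_bo_list_entry (see the patch quoted below), the de-duplication
mentioned above can indeed be a plain qsort() over that one array instead of
keeping two parallel arrays in sync. A minimal illustrative sketch, not part
of the patch; the helper names are made up, and the bo_handle/bo_priority
fields are assumed to be filled in by the caller:

#include <stdint.h>
#include <stdlib.h>
#include <amdgpu_drm.h> /* struct drm_amdgpu_bo_list_entry */

static int entry_cmp(const void *a, const void *b)
{
        const struct drm_amdgpu_bo_list_entry *ea = a, *eb = b;

        if (ea->bo_handle != eb->bo_handle)
                return ea->bo_handle < eb->bo_handle ? -1 : 1;
        return 0;
}

/* Sort by kernel BO handle and drop duplicates in place; returns the new
 * entry count, ready to be passed to amdgpu_bo_list_create_raw(). */
static uint32_t dedup_bo_entries(struct drm_amdgpu_bo_list_entry *e, uint32_t n)
{
        uint32_t i, out = 0;

        qsort(e, n, sizeof(*e), entry_cmp);
        for (i = 0; i < n; i++) {
                if (out && e[out - 1].bo_handle == e[i].bo_handle)
                        continue; /* duplicate BO, keep the first entry */
                e[out++] = e[i];
        }
        return out;
}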
> On Mon, Jan 7, 2019 at 8:31 PM Marek Olšák wrote:
> >
> > From: Marek Olšák
> >
> > ---
> >  amdgpu/amdgpu-symbol-check |  3 ++
> >  amdgpu/amdgpu.h            | 56 +++++++++++++++++++++++++++++++++++++-
> >  amdgpu/amdgpu_bo.c         | 36 ++++++++++++++++++++++++
> >  amdgpu/amdgpu_cs.c         | 25 +++++++++++++++++
> >  4 files changed, 119 insertions(+), 1 deletion(-)
> >
> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> > index 6f5e0f95..96a44b40 100755
> > --- a/amdgpu/amdgpu-symbol-check
> > +++ b/amdgpu/amdgpu-symbol-check
> > @@ -12,20 +12,22 @@ _edata
> >  _end
> >  _fini
> >  _init
> >  amdgpu_bo_alloc
> >  amdgpu_bo_cpu_map
> >  amdgpu_bo_cpu_unmap
> >  amdgpu_bo_export
> >  amdgpu_bo_free
> >  amdgpu_bo_import
> >  amdgpu_bo_inc_ref
> > +amdgpu_bo_list_create_raw
> > +amdgpu_bo_list_destroy_raw
> >  amdgpu_bo_list_create
> >  amdgpu_bo_list_destroy
> >  amdgpu_bo_list_update
> >  amdgpu_bo_query_info
> >  amdgpu_bo_set_metadata
> >  amdgpu_bo_va_op
> >  amdgpu_bo_va_op_raw
> >  amdgpu_bo_wait_for_idle
> >  amdgpu_create_bo_from_user_mem
> >  amdgpu_cs_chunk_fence_info_to_data
> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >  amdgpu_cs_destroy_syncobj
> >  amdgpu_cs_export_syncobj
> >  amdgpu_cs_fence_to_handle
> >  amdgpu_cs_import_syncobj
> >  amdgpu_cs_query_fence_status
> >  amdgpu_cs_query_reset_state
> >  amdgpu_query_sw_info
> >  amdgpu_cs_signal_semaphore
> >  amdgpu_cs_submit
> >  amdgpu_cs_submit_raw
> > +amdgpu_cs_submit_raw2
> >  amdgpu_cs_syncobj_export_sync_file
> >  amdgpu_cs_syncobj_import_sync_file
> >  amdgpu_cs_syncobj_reset
> >  amdgpu_cs_syncobj_signal
> >  amdgpu_cs_syncobj_wait
> >  amdgpu_cs_wait_fences
> >  amdgpu_cs_wait_semaphore
> >  amdgpu_device_deinitialize
> >  amdgpu_device_initialize
> >  amdgpu_find_bo_by_cpu_mapping
> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> > index dc51659a..5b800033 100644
> > --- a/amdgpu/amdgpu.h
> > +++ b/amdgpu/amdgpu.h
> > @@ -35,20 +35,21 @@
> >  #define _AMDGPU_H_
> >
> >  #include <stdint.h>
> >  #include <stdbool.h>
> >
> >  #ifdef __cplusplus
> >  extern "C" {
> >  #endif
> >
> >  struct drm_amdgpu_info_hw_ip;
> > +struct drm_amdgpu_bo_list_entry;
> >
> >  /*--------------------------------------------------------------------------*/
> >  /* --------------------------- Defines ------------------------------------ */
> >  /*--------------------------------------------------------------------------*/
> >
> >  /**
> >   * Define max. number of Command Buffers (IB) which could be sent to the single
> >   * hardware IP to accommodate CE/DE requirements
> >   *
> >   * \sa amdgpu_cs_ib_info
> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
> >   *                        and no GPU access is scheduled.
> >   *                    1 GPU access is in fly or scheduled
> >   *
> >   * \return   0 - on success
> >   *          <0 - Negative POSIX Error code
> >   */
> >  int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >                              uint64_t timeout_ns,
> >                              bool *buffer_busy);
> >
> > +/**
> > + * Creates a BO list handle for command submission.
> > + *
> > + * \param   dev                - \c [in] Device handle.
> > + *                              See #amdgpu_device_initialize()
> > + * \param   number_of_buffers  - \c [in] Number of BOs in the list
> > + * \param   buffers            - \c [in] List of BO handles
> > + * \param   result             - \c [out] Created BO list handle
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_destroy_raw()
> > +*/
> > +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > +                              uint32_t number_of_buffers,
> > +                              struct drm_amdgpu_bo_list_entry *buffers,
> > +                              uint32_t *result);
>
> So AFAIU drm_amdgpu_bo_list_entry takes a raw bo handle while we
> never get a raw bo handle from libdrm_amdgpu. How are we supposed to
> fill it in?

This function returns it.

> What do we win by having the raw handle for the bo_list? If we would
> not return the raw handle we would not need the submit_raw2.

One less malloc call and pointer indirection.

Marek

> > +
> > +/**
> > + * Destroys a BO list handle.
> > + *
> > + * \param   bo_list    - \c [in] BO list handle.
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
> > +*/
> > +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
> > +
> >  /**
> >   * Creates a BO list handle for command submission.
> >   *
> >   * \param   dev                 - \c [in] Device handle.
> >   *                               See #amdgpu_device_initialize()
> >   * \param   number_of_resources - \c [in] Number of BOs in the list
> >   * \param   resources           - \c [in] List of BO handles
> >   * \param   resource_prios      - \c [in] Optional priority for each handle
> >   * \param   result              - \c [out] Created BO list handle
> >   *
> >   * \return   0 on success\n
> >   *          <0 - Negative POSIX Error code
> >   *
> > - * \sa amdgpu_bo_list_destroy()
> > + * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
> >  */
> >  int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >                            uint32_t number_of_resources,
> >                            amdgpu_bo_handle *resources,
> >                            uint8_t *resource_prios,
> >                            amdgpu_bo_list_handle *result);
> >
> >  /**
> >   * Destroys a BO list handle.
> >   *
> > @@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
> >  struct drm_amdgpu_cs_chunk_dep;
> >  struct drm_amdgpu_cs_chunk_data;
> >
> >  int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >                           amdgpu_context_handle context,
> >                           amdgpu_bo_list_handle bo_list_handle,
> >                           int num_chunks,
> >                           struct drm_amdgpu_cs_chunk *chunks,
> >                           uint64_t *seq_no);
> >
> > +/**
> > + * Submit raw command submission to the kernel with a raw BO list handle.
> > + *
> > + * \param   dev            - \c [in] device handle
> > + * \param   context        - \c [in] context handle for context id
> > + * \param   bo_list_handle - \c [in] raw bo list handle (0 for none)
> > + * \param   num_chunks     - \c [in] number of CS chunks to submit
> > + * \param   chunks         - \c [in] array of CS chunks
> > + * \param   seq_no         - \c [out] output sequence number for submission.
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_create_raw(), amdgpu_bo_list_destroy_raw()
> > + */
> > +int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> > +                          amdgpu_context_handle context,
> > +                          uint32_t bo_list_handle,
> > +                          int num_chunks,
> > +                          struct drm_amdgpu_cs_chunk *chunks,
> > +                          uint64_t *seq_no);
> > +
> >  void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >                                    struct drm_amdgpu_cs_chunk_dep *dep);
> >  void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >                                          struct drm_amdgpu_cs_chunk_data *data);
> >
> >  /**
> >   * Reserve VMID
> >   * \param   context - \c [in]  GPU Context
> >   * \param   flags - \c [in]  TBD
> >   *
> > diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
> > index c0f42e81..21bc73aa 100644
> > --- a/amdgpu/amdgpu_bo.c
> > +++ b/amdgpu/amdgpu_bo.c
> > @@ -611,20 +611,56 @@ drm_public int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
> >         pthread_mutex_lock(&dev->bo_table_mutex);
> >         r = handle_table_insert(&dev->bo_handles, (*buf_handle)->handle,
> >                                 *buf_handle);
> >         pthread_mutex_unlock(&dev->bo_table_mutex);
> >         if (r)
> >                 amdgpu_bo_free(*buf_handle);
> >  out:
> >         return r;
> >  }
> >
> > +drm_public int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > +                                         uint32_t number_of_buffers,
> > +                                         struct drm_amdgpu_bo_list_entry *buffers,
> > +                                         uint32_t *result)
> > +{
> > +       union drm_amdgpu_bo_list args;
> > +       int r;
> > +
> > +       memset(&args, 0, sizeof(args));
> > +       args.in.operation = AMDGPU_BO_LIST_OP_CREATE;
> > +       args.in.bo_number = number_of_buffers;
> > +       args.in.bo_info_size = sizeof(struct drm_amdgpu_bo_list_entry);
> > +       args.in.bo_info_ptr = (uint64_t)(uintptr_t)buffers;
> > +
> > +       r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> > +                               &args, sizeof(args));
> > +       if (r)
> > +               return r;
> > +
> > +       *result = args.out.list_handle;
> > +       return 0;
> > +}
> > +
> > +drm_public int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev,
> > +                                          uint32_t bo_list)
> > +{
> > +       union drm_amdgpu_bo_list args;
> > +
> > +       memset(&args, 0, sizeof(args));
> > +       args.in.operation = AMDGPU_BO_LIST_OP_DESTROY;
> > +       args.in.list_handle = bo_list;
> > +
> > +       return drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> > +                                  &args, sizeof(args));
> > +}
> > +
> >  drm_public int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >                                      uint32_t number_of_resources,
> >                                      amdgpu_bo_handle *resources,
> >                                      uint8_t *resource_prios,
> >                                      amdgpu_bo_list_handle *result)
> >  {
> >         struct drm_amdgpu_bo_list_entry *list;
> >         union drm_amdgpu_bo_list args;
> >         unsigned i;
> >         int r;
> > diff --git a/amdgpu/amdgpu_cs.c b/amdgpu/amdgpu_cs.c
> > index 3b8231aa..5bedf748 100644
> > --- a/amdgpu/amdgpu_cs.c
> > +++ b/amdgpu/amdgpu_cs.c
> > @@ -724,20 +724,45 @@ drm_public int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >         r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> >                                 &cs, sizeof(cs));
> >         if (r)
> >                 return r;
> >
> >         if (seq_no)
> >                 *seq_no = cs.out.handle;
> >         return 0;
> >  }
> >
> > +drm_public int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> > +                                     amdgpu_context_handle context,
> > +                                     uint32_t bo_list_handle,
> > +                                     int num_chunks,
> > +                                     struct drm_amdgpu_cs_chunk *chunks,
> > +                                     uint64_t *seq_no)
> > +{
> > +       union drm_amdgpu_cs cs = {0};
> > +       uint64_t *chunk_array;
> > +       int i, r;
> > +
> > +       chunk_array = alloca(sizeof(uint64_t) * num_chunks);
> > +       for (i = 0; i < num_chunks; i++)
> > +               chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
> > +       cs.in.chunks = (uint64_t)(uintptr_t)chunk_array;
> > +       cs.in.ctx_id = context->id;
> > +       cs.in.bo_list_handle = bo_list_handle;
> > +       cs.in.num_chunks = num_chunks;
> > +       r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> > +                               &cs, sizeof(cs));
> > +       if (!r && seq_no)
> > +               *seq_no = cs.out.handle;
> > +       return r;
> > +}
> > +
> >  drm_public void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >                                                     struct drm_amdgpu_cs_chunk_data *data)
> >  {
> >         data->fence_data.handle = fence_info->handle->handle;
> >         data->fence_data.offset = fence_info->offset * sizeof(uint64_t);
> >  }
> >
> >  drm_public void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >                                               struct drm_amdgpu_cs_chunk_dep *dep)
> >  {
> > --
> > 2.17.1
> >
> > _______________________________________________
> > amd-gfx mailing list
> > amd-gfx@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/amd-gfx
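For reference, the new entry points are meant to be used roughly as follows.
This is only an illustrative sketch against the declarations in the patch
above; submit_with_raw_list is a hypothetical helper, and dev, ctx, and the
CS chunks describing the IBs are assumed to have been set up elsewhere
(amdgpu_device_initialize(), amdgpu_cs_ctx_create(), etc.):

#include <amdgpu.h>
#include <amdgpu_drm.h>

static int submit_with_raw_list(amdgpu_device_handle dev,
                                amdgpu_context_handle ctx,
                                struct drm_amdgpu_bo_list_entry *entries,
                                uint32_t num_entries,
                                struct drm_amdgpu_cs_chunk *chunks,
                                int num_chunks,
                                uint64_t *seq_no)
{
        uint32_t bo_list; /* raw kernel list handle, not amdgpu_bo_list_handle */
        int r;

        /* One ioctl; the caller-owned entry array goes straight to the
         * kernel, with no copy or wrapper object built inside libdrm. */
        r = amdgpu_bo_list_create_raw(dev, num_entries, entries, &bo_list);
        if (r)
                return r;

        r = amdgpu_cs_submit_raw2(dev, ctx, bo_list, num_chunks, chunks, seq_no);

        amdgpu_bo_list_destroy_raw(dev, bo_list);
        return r;
}

Compared with the amdgpu_bo_list_create()/amdgpu_cs_submit_raw() path, this
skips the heap-allocated amdgpu_bo_list_handle object and the extra
dereference at submit time, which appears to be the "one less malloc call
and pointer indirection" mentioned above.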