From: "Marek Olšák" <maraeo-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: "Christian König" <christian.koenig-5C7GfCeVMHo@public.gmane.org>
Cc: amd-gfx mailing list
	<amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org>
Subject: Re: [PATCH libdrm] amdgpu: add a faster BO list API
Date: Wed, 9 Jan 2019 07:36:36 -0500	[thread overview]
Message-ID: <CAAxE2A5M2WW6uPFo0a=+6ukbtgx5xHfkKUKOB9dgtB=qH88htQ@mail.gmail.com> (raw)
In-Reply-To: <a0a15ed6-eb1a-fbbe-7c1b-e3b9a64c1008-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>



On Wed, Jan 9, 2019, 5:28 AM Christian König <ckoenig.leichtzumerken-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> Looks good, but I'm wondering what's the actual improvement?
>

No malloc calls and one fewer loop copying the BO list.

Marek


> Christian.
>
> Am 07.01.19 um 20:31 schrieb Marek Olšák:
> > From: Marek Olšák <marek.olsak-5C7GfCeVMHo@public.gmane.org>
> >
> > ---
> >   amdgpu/amdgpu-symbol-check |  3 ++
> >   amdgpu/amdgpu.h            | 56 +++++++++++++++++++++++++++++++++++++-
> >   amdgpu/amdgpu_bo.c         | 36 ++++++++++++++++++++++++
> >   amdgpu/amdgpu_cs.c         | 25 +++++++++++++++++
> >   4 files changed, 119 insertions(+), 1 deletion(-)
> >
> > diff --git a/amdgpu/amdgpu-symbol-check b/amdgpu/amdgpu-symbol-check
> > index 6f5e0f95..96a44b40 100755
> > --- a/amdgpu/amdgpu-symbol-check
> > +++ b/amdgpu/amdgpu-symbol-check
> > @@ -12,20 +12,22 @@ _edata
> >   _end
> >   _fini
> >   _init
> >   amdgpu_bo_alloc
> >   amdgpu_bo_cpu_map
> >   amdgpu_bo_cpu_unmap
> >   amdgpu_bo_export
> >   amdgpu_bo_free
> >   amdgpu_bo_import
> >   amdgpu_bo_inc_ref
> > +amdgpu_bo_list_create_raw
> > +amdgpu_bo_list_destroy_raw
> >   amdgpu_bo_list_create
> >   amdgpu_bo_list_destroy
> >   amdgpu_bo_list_update
> >   amdgpu_bo_query_info
> >   amdgpu_bo_set_metadata
> >   amdgpu_bo_va_op
> >   amdgpu_bo_va_op_raw
> >   amdgpu_bo_wait_for_idle
> >   amdgpu_create_bo_from_user_mem
> >   amdgpu_cs_chunk_fence_info_to_data
> > @@ -40,20 +42,21 @@ amdgpu_cs_destroy_semaphore
> >   amdgpu_cs_destroy_syncobj
> >   amdgpu_cs_export_syncobj
> >   amdgpu_cs_fence_to_handle
> >   amdgpu_cs_import_syncobj
> >   amdgpu_cs_query_fence_status
> >   amdgpu_cs_query_reset_state
> >   amdgpu_query_sw_info
> >   amdgpu_cs_signal_semaphore
> >   amdgpu_cs_submit
> >   amdgpu_cs_submit_raw
> > +amdgpu_cs_submit_raw2
> >   amdgpu_cs_syncobj_export_sync_file
> >   amdgpu_cs_syncobj_import_sync_file
> >   amdgpu_cs_syncobj_reset
> >   amdgpu_cs_syncobj_signal
> >   amdgpu_cs_syncobj_wait
> >   amdgpu_cs_wait_fences
> >   amdgpu_cs_wait_semaphore
> >   amdgpu_device_deinitialize
> >   amdgpu_device_initialize
> >   amdgpu_find_bo_by_cpu_mapping
> > diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
> > index dc51659a..5b800033 100644
> > --- a/amdgpu/amdgpu.h
> > +++ b/amdgpu/amdgpu.h
> > @@ -35,20 +35,21 @@
> >   #define _AMDGPU_H_
> >
> >   #include <stdint.h>
> >   #include <stdbool.h>
> >
> >   #ifdef __cplusplus
> >   extern "C" {
> >   #endif
> >
> >   struct drm_amdgpu_info_hw_ip;
> > +struct drm_amdgpu_bo_list_entry;
> >
> >   /*--------------------------------------------------------------------------*/
> >   /* --------------------------- Defines ------------------------------------ */
> >   /*--------------------------------------------------------------------------*/
> >
> >   /**
> >    * Define max. number of Command Buffers (IB) which could be sent to the single
> >    * hardware IP to accommodate CE/DE requirements
> >    *
> >    * \sa amdgpu_cs_ib_info
> > @@ -767,34 +768,65 @@ int amdgpu_bo_cpu_unmap(amdgpu_bo_handle buf_handle);
> >    *                            and no GPU access is scheduled.
> >    *                          1 GPU access is in fly or scheduled
> >    *
> >    * \return   0 - on success
> >    *          <0 - Negative POSIX Error code
> >    */
> >   int amdgpu_bo_wait_for_idle(amdgpu_bo_handle buf_handle,
> >                           uint64_t timeout_ns,
> >                           bool *buffer_busy);
> >
> > +/**
> > + * Creates a BO list handle for command submission.
> > + *
> > + * \param   dev                      - \c [in] Device handle.
> > + *                              See #amdgpu_device_initialize()
> > + * \param   number_of_buffers        - \c [in] Number of BOs in the list
> > + * \param   buffers          - \c [in] List of BO handles
> > + * \param   result           - \c [out] Created BO list handle
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_destroy_raw()
> > +*/
> > +int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > +                           uint32_t number_of_buffers,
> > +                           struct drm_amdgpu_bo_list_entry *buffers,
> > +                           uint32_t *result);
> > +
> > +/**
> > + * Destroys a BO list handle.
> > + *
> > + * \param   dev      - \c [in] Device handle.
> > + *                              See #amdgpu_device_initialize()
> > + * \param   bo_list  - \c [in] BO list handle.
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_create_raw(), amdgpu_cs_submit_raw2()
> > +*/
> > +int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev, uint32_t bo_list);
> > +
> >   /**
> >    * Creates a BO list handle for command submission.
> >    *
> >    * \param   dev                     - \c [in] Device handle.
> >    *                             See #amdgpu_device_initialize()
> >    * \param   number_of_resources     - \c [in] Number of BOs in the list
> >    * \param   resources               - \c [in] List of BO handles
> >    * \param   resource_prios  - \c [in] Optional priority for each handle
> >    * \param   result          - \c [out] Created BO list handle
> >    *
> >    * \return   0 on success\n
> >    *          <0 - Negative POSIX Error code
> >    *
> > - * \sa amdgpu_bo_list_destroy()
> > + * \sa amdgpu_bo_list_destroy(), amdgpu_cs_submit_raw2()
> >   */
> >   int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >                         uint32_t number_of_resources,
> >                         amdgpu_bo_handle *resources,
> >                         uint8_t *resource_prios,
> >                         amdgpu_bo_list_handle *result);
> >
> >   /**
> >    * Destroys a BO list handle.
> >    *
> > @@ -1580,20 +1612,42 @@ struct drm_amdgpu_cs_chunk;
> >   struct drm_amdgpu_cs_chunk_dep;
> >   struct drm_amdgpu_cs_chunk_data;
> >
> >   int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >                        amdgpu_context_handle context,
> >                        amdgpu_bo_list_handle bo_list_handle,
> >                        int num_chunks,
> >                        struct drm_amdgpu_cs_chunk *chunks,
> >                        uint64_t *seq_no);
> >
> > +/**
> > + * Submit raw command submission to the kernel with a raw BO list handle.
> > + *
> > + * \param   dev             - \c [in] device handle
> > + * \param   context    - \c [in] context handle for context id
> > + * \param   bo_list_handle - \c [in] raw bo list handle (0 for none)
> > + * \param   num_chunks - \c [in] number of CS chunks to submit
> > + * \param   chunks     - \c [in] array of CS chunks
> > + * \param   seq_no     - \c [out] output sequence number for submission.
> > + *
> > + * \return   0 on success\n
> > + *          <0 - Negative POSIX Error code
> > + *
> > + * \sa amdgpu_bo_list_create_raw(), amdgpu_bo_list_destroy_raw()
> > + */
> > +int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> > +                       amdgpu_context_handle context,
> > +                       uint32_t bo_list_handle,
> > +                       int num_chunks,
> > +                       struct drm_amdgpu_cs_chunk *chunks,
> > +                       uint64_t *seq_no);
> > +
> >   void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >                                 struct drm_amdgpu_cs_chunk_dep *dep);
> >   void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >                                       struct drm_amdgpu_cs_chunk_data *data);
> >
> >   /**
> >    * Reserve VMID
> >    * \param   context - \c [in]  GPU Context
> >    * \param   flags - \c [in]  TBD
> >    *
> > diff --git a/amdgpu/amdgpu_bo.c b/amdgpu/amdgpu_bo.c
> > index c0f42e81..21bc73aa 100644
> > --- a/amdgpu/amdgpu_bo.c
> > +++ b/amdgpu/amdgpu_bo.c
> > @@ -611,20 +611,56 @@ drm_public int amdgpu_create_bo_from_user_mem(amdgpu_device_handle dev,
> >       pthread_mutex_lock(&dev->bo_table_mutex);
> >       r = handle_table_insert(&dev->bo_handles, (*buf_handle)->handle,
> >                               *buf_handle);
> >       pthread_mutex_unlock(&dev->bo_table_mutex);
> >       if (r)
> >               amdgpu_bo_free(*buf_handle);
> >   out:
> >       return r;
> >   }
> >
> > +drm_public int amdgpu_bo_list_create_raw(amdgpu_device_handle dev,
> > +                                      uint32_t number_of_buffers,
> > +                                      struct drm_amdgpu_bo_list_entry *buffers,
> > +                                      uint32_t *result)
> > +{
> > +     union drm_amdgpu_bo_list args;
> > +     int r;
> > +
> > +     memset(&args, 0, sizeof(args));
> > +     args.in.operation = AMDGPU_BO_LIST_OP_CREATE;
> > +     args.in.bo_number = number_of_buffers;
> > +     args.in.bo_info_size = sizeof(struct drm_amdgpu_bo_list_entry);
> > +     args.in.bo_info_ptr = (uint64_t)(uintptr_t)buffers;
> > +
> > +     r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> > +                             &args, sizeof(args));
> > +     if (r)
> > +             return r;
> > +
> > +     *result = args.out.list_handle;
> > +     return 0;
> > +}
> > +
> > +drm_public int amdgpu_bo_list_destroy_raw(amdgpu_device_handle dev,
> > +                                       uint32_t bo_list)
> > +{
> > +     union drm_amdgpu_bo_list args;
> > +
> > +     memset(&args, 0, sizeof(args));
> > +     args.in.operation = AMDGPU_BO_LIST_OP_DESTROY;
> > +     args.in.list_handle = bo_list;
> > +
> > +     return drmCommandWriteRead(dev->fd, DRM_AMDGPU_BO_LIST,
> > +                                &args, sizeof(args));
> > +}
> > +
> >   drm_public int amdgpu_bo_list_create(amdgpu_device_handle dev,
> >                                    uint32_t number_of_resources,
> >                                    amdgpu_bo_handle *resources,
> >                                    uint8_t *resource_prios,
> >                                    amdgpu_bo_list_handle *result)
> >   {
> >       struct drm_amdgpu_bo_list_entry *list;
> >       union drm_amdgpu_bo_list args;
> >       unsigned i;
> >       int r;
> > diff --git a/amdgpu/amdgpu_cs.c b/amdgpu/amdgpu_cs.c
> > index 3b8231aa..5bedf748 100644
> > --- a/amdgpu/amdgpu_cs.c
> > +++ b/amdgpu/amdgpu_cs.c
> > @@ -724,20 +724,45 @@ drm_public int amdgpu_cs_submit_raw(amdgpu_device_handle dev,
> >       r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> >                               &cs, sizeof(cs));
> >       if (r)
> >               return r;
> >
> >       if (seq_no)
> >               *seq_no = cs.out.handle;
> >       return 0;
> >   }
> >
> > +drm_public int amdgpu_cs_submit_raw2(amdgpu_device_handle dev,
> > +                                  amdgpu_context_handle context,
> > +                                  uint32_t bo_list_handle,
> > +                                  int num_chunks,
> > +                                  struct drm_amdgpu_cs_chunk *chunks,
> > +                                  uint64_t *seq_no)
> > +{
> > +     union drm_amdgpu_cs cs = {0};
> > +     uint64_t *chunk_array;
> > +     int i, r;
> > +
> > +     chunk_array = alloca(sizeof(uint64_t) * num_chunks);
> > +     for (i = 0; i < num_chunks; i++)
> > +             chunk_array[i] = (uint64_t)(uintptr_t)&chunks[i];
> > +     cs.in.chunks = (uint64_t)(uintptr_t)chunk_array;
> > +     cs.in.ctx_id = context->id;
> > +     cs.in.bo_list_handle = bo_list_handle;
> > +     cs.in.num_chunks = num_chunks;
> > +     r = drmCommandWriteRead(dev->fd, DRM_AMDGPU_CS,
> > +                             &cs, sizeof(cs));
> > +     if (!r && seq_no)
> > +             *seq_no = cs.out.handle;
> > +     return r;
> > +}
> > +
> >   drm_public void amdgpu_cs_chunk_fence_info_to_data(struct amdgpu_cs_fence_info *fence_info,
> >                                       struct drm_amdgpu_cs_chunk_data *data)
> >   {
> >       data->fence_data.handle = fence_info->handle->handle;
> >       data->fence_data.offset = fence_info->offset * sizeof(uint64_t);
> >   }
> >
> >   drm_public void amdgpu_cs_chunk_fence_to_dep(struct amdgpu_cs_fence *fence,
> >                                       struct drm_amdgpu_cs_chunk_dep *dep)
> >   {
>
>




