From: "Bridgman, John" <John.Bridgman-5C7GfCeVMHo@public.gmane.org>
To: Ming Yang <minos.future-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
	"Kuehling, Felix" <Felix.Kuehling-5C7GfCeVMHo@public.gmane.org>
Cc: "Deucher,
	Alexander" <Alexander.Deucher-5C7GfCeVMHo@public.gmane.org>,
	"amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org"
	<amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org>
Subject: RE: Documentation about AMD's HSA implementation?
Date: Tue, 13 Feb 2018 23:42:24 +0000
Message-ID: <BN6PR12MB13483BBA577C518F18F7B100E8F60@BN6PR12MB1348.namprd12.prod.outlook.com>
In-Reply-To: <CAEVNDXvqQdZgP-YrgWqGpOkCDSNz6uJ0Ggrz_MRopOHZL31XpQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>



>-----Original Message-----
>From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf Of
>Ming Yang
>Sent: Tuesday, February 13, 2018 4:59 PM
>To: Kuehling, Felix
>Cc: Deucher, Alexander; amd-gfx@lists.freedesktop.org
>Subject: Re: Documentation about AMD's HSA implementation?
>
>That's very helpful, thanks!
>
>On Tue, Feb 13, 2018 at 4:17 PM, Felix Kuehling <felix.kuehling@amd.com>
>wrote:
>> On 2018-02-13 04:06 PM, Ming Yang wrote:
>>> Thanks for the suggestions!  But may I ask several specific
>>> questions to give myself a quick start, as I can't find the answers
>>> in those documents?  Pointing me to the files/functions would be
>>> good enough.  Any explanations are appreciated.  My goal is to
>>> experiment with different scheduling policies, with real-time
>>> performance and predictability in mind.
>>>
>>> - Where/How is the packet scheduler implemented?  How are packets
>>> from multiple queues scheduled?  What about scheduling packets from
>>> queues in different address spaces?
>>
>> This is done mostly in firmware. The CP engine supports up to 32 queues.
>> We share those between KFD and AMDGPU. KFD gets 24 queues to use.
>> Usually that is 6 queues times 4 pipes. Pipes are threads in the CP
>> micro engine. Within each pipe the queues are time-multiplexed.
>
>Please correct me if I'm wrong.  The CP is the computing processor, like the
>Execution Engine in NVIDIA GPUs. A pipe is like a wavefront (warp) scheduler
>multiplexing queues in order to hide memory latency.

CP is one step back from that - it's a "command processor" which reads command packets from the driver (PM4 format) or the application (AQL format), then manages the execution of each command on the GPU. A typical packet might be "dispatch", which initiates a compute operation on an N-dimensional array, or "draw", which initiates the rendering of an array of triangles. Those compute and render commands then generate a (typically) large number of wavefronts which are multiplexed on the shader core (by the SQ, IIRC). Most of our recent GPUs have one micro engine for graphics ("ME") and two for compute ("MEC"). Marketing refers to each pipe on an MEC block as an "ACE".
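
In case a concrete example helps: below is a rough C sketch of what a PM4
"dispatch" packet looks like. The header layout and the DISPATCH_DIRECT
opcode follow the PACKET3() convention in the upstream amdgpu driver
headers; the emit helper and the initiator value are made up purely for
illustration.

#include <stdint.h>

/* PM4 type-3 packet header, as in the upstream PACKET3() macro:
 * bits 31:30 = packet type (3), bits 29:16 = body dword count minus
 * one, bits 15:8 = opcode. */
#define PACKET3(op, n) \
        ((3u << 30) | (((n) & 0x3FFFu) << 16) | (((op) & 0xFFu) << 8))
#define PACKET3_DISPATCH_DIRECT 0x15   /* opcode from upstream headers */

/* Illustrative helper (not driver code): append a dispatch of x*y*z
 * thread groups to a command buffer. The dispatch-initiator dword is
 * ASIC-specific and would normally come from the register headers. */
static uint32_t *emit_dispatch_direct(uint32_t *buf, uint32_t x,
                                      uint32_t y, uint32_t z,
                                      uint32_t initiator)
{
        *buf++ = PACKET3(PACKET3_DISPATCH_DIRECT, 3); /* n = 4 dwords - 1 */
        *buf++ = x;             /* thread groups in X */
        *buf++ = y;             /* thread groups in Y */
        *buf++ = z;             /* thread groups in Z */
        *buf++ = initiator;     /* COMPUTE_DISPATCH_INITIATOR bits */
        return buf;
}

The CP reads packets like this off a ring or queue and kicks off the
corresponding work; an AQL dispatch packet carries similar information in
the HSA-defined format.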
>
>>
>> If we need more than 24 queues, or if we have more than 8 processes,
>> the hardware scheduler (HWS) adds another layer of scheduling, basically
>> round-robin between batches of 24 queues or 8 processes. Once you get
>> into such an over-subscribed scenario your performance and GPU
>> utilization can suffer quite badly.
>
>Is the HWS also implemented in the closed-source firmware?

Correct - HWS is implemented in the MEC microcode. We also include a simple SW scheduler in the open source driver code, however. 
>
>>
>>>
>>> - I noticed the newly added support for multi-process concurrency in
>>> the archives of this mailing list.  Could you point me to the code that
>>> implements this?
>>
>> That's basically just a switch that tells the firmware that it is
>> allowed to schedule queues from different processes at the same time.
>> The upper limit is the number of VMIDs that HWS can work with. It
>> needs to assign a unique VMID to each process (each VMID representing
>> a separate address space, page table, etc.). If there are more
>> processes than VMIDs, the HWS has to time-multiplex.
>
>Does the HWS dispatch packets in the order they become the head of the
>queue, i.e., as pointed to by the read_index? That would make it FIFO.  Or is
>it round-robin between queues? You mentioned round-robin over batches in
>the over-subscribed scenario.

Round-robin between sets of queues. The HWS logic generates sets as follows:

1. "set resources" packet from driver tells scheduler how many VMIDs and HW queues it can use

2. "runlist" packet from driver provides list of processes and list of queues for each process

3. if the multi-process switch is not set, HWS schedules as many queues from the first process in the runlist as it has HW queues (see #1)

4. at the end of the process quantum (set by the driver), either switch to the next process (if all queues from the first process have been scheduled) or schedule the next set of queues from the same process

5. when all queues from all processes have been scheduled and run for a process quantum, go back to the start of the runlist and repeat

If the multi-process switch is set, and the number of queues for a process is less than the number of HW queues available, then in step #3 above HWS will start scheduling queues for additional processes, using a different VMID for each process, and continue until it either runs out of VMIDs or HW queues (or reaches the end of the runlist). All of the queues and processes would then run together for a process quantum before switching to the next queue set.
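
To illustrate the flow (and only to illustrate - the real logic lives in
the closed-source MEC microcode, and every name below is invented for the
sketch), the set-building could look roughly like this in C:

#include <stdbool.h>

struct rl_process {
        int num_queues;        /* queues this process put in the runlist */
        int next_queue;        /* first queue not yet scheduled this pass */
};

static void map_queue(int proc, int queue); /* stand-in for a map-queue packet */

/* Build one set of queues to run for the next process quantum. *cur
 * indexes the runlist; the caller wraps it back to 0 at the end (step 5). */
static void build_queue_set(struct rl_process *rl, int nproc, int *cur,
                            int hw_queues, int vmids, bool multi_process)
{
        int free_q = hw_queues, free_vmid = vmids;

        while (free_q > 0 && free_vmid > 0 && *cur < nproc) {
                struct rl_process *p = &rl[*cur];

                free_vmid--;    /* each process needs its own VMID */
                while (free_q > 0 && p->next_queue < p->num_queues) {
                        map_queue(*cur, p->next_queue++);
                        free_q--;
                }
                if (p->next_queue < p->num_queues)
                        return; /* step 4: queues left, stay on this process */
                p->next_queue = 0;      /* process fully scheduled */
                (*cur)++;               /* move on through the runlist */
                if (!multi_process)
                        return; /* one process per set unless the switch is on */
        }
}

Something like this would run once per process quantum, after the previous
set of queues has been unmapped.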

>
>This might not be a big deal for performance, but it matters for predictability
>and real-time analysis.

Agreed. In general you would not want to overcommit either VMIDs or HW queues in a real-time scenario, and for hard real-time you would probably want to limit yourself to a single queue per pipe, since the MEC also multiplexes between HW queues on a pipe even without HWS.

>
>>
>>>
>>> - Another related question -- where/how is the
>>> preemption/context switch between packets/queues implemented?
>>
>> As long as you don't oversubscribe the available VMIDs, there is no
>> real context switching. Everything can run concurrently. When you
>> start oversubscribing HW queues or VMIDs, the HWS firmware will start
>> multiplexing. This is all handled inside the firmware and is quite
>> transparent even to KFD.
>
>I see.  So preemption, in at least AMD's implementation, is not switching
>out the executing kernel, but just letting new kernels run concurrently with
>the existing ones.  This means performance degrades when too many
>workloads are submitted.  The running kernels leave the GPU only when they
>are done.

Both - you can have multiple kernels executing concurrently (each generating multiple threads in the shader core) AND switch out the currently executing set of kernels via preemption. 

>
>Is there any reason for not preempting/switching out the running kernel,
>besides context-switch overheads?  NVIDIA does not provide this option either.
>Non-preemption hurts the real-time properties in terms of priority inversion.  I
>understand preemption should not be used heavily, but having such an
>option may help a lot for real-time systems.

If I understand you correctly, you can have it either way depending on the number of queues you enable simultaneously. At any given time you are typically only going to be running the kernels from one queue on each pipe, i.e., with 3 pipes and 24 queues you would typically only be running 3 kernels at a time. This seemed like a good compromise between scalability and efficiency.

>
>>
>> KFD interacts with the HWS firmware through the HIQ (HSA interface
>> queue). It supports packets for unmapping queues, we can send it a new
>> runlist (basically a bunch of map-process and map-queue packets). The
>> interesting files to look at are kfd_packet_manager.c,
>> kfd_kernel_queue_<hw>.c and kfd_device_queue_manager.c.
>>
>
>So if we want to implement a different scheduling policy, we should control
>the submission of packets to the queues in the runtime/KFD, before they
>reach the firmware, because we lose control once they are submitted to the
>HWS in the firmware.

Correct - there is a tradeoff between "easily scheduling lots of work" and fine-grained control. Limiting the number of queues you run simultaneously is another way of taking back control. 
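
As a very rough sketch of that submission path (all names invented; the
real code, IIRC, is pm_send_runlist() and friends in kfd_packet_manager.c
plus the chip-specific kernel-queue files):

#include <stdint.h>

struct rl_queue { uint64_t ring_addr; uint32_t doorbell; };
struct rl_proc  { uint32_t pasid; int nqueues; struct rl_queue *queues; };

/* Stand-ins for the real PM4 packet writers; each returns dwords written. */
static int emit_map_process(uint32_t *ib, const struct rl_proc *p);
static int emit_map_queue(uint32_t *ib, const struct rl_queue *q);

/* Fill an indirect buffer with one map-process packet per process and
 * one map-queue packet per queue. A single runlist packet on the HIQ
 * then points the HWS firmware at this buffer, and from that point on
 * the scheduling policy is the firmware's business. */
static int build_runlist_ib(uint32_t *ib, struct rl_proc *procs, int nproc)
{
        int i, j, n = 0;

        for (i = 0; i < nproc; i++) {
                n += emit_map_process(&ib[n], &procs[i]);
                for (j = 0; j < procs[i].nqueues; j++)
                        n += emit_map_queue(&ib[n], &procs[i].queues[j]);
        }
        return n;       /* dwords written into the runlist IB */
}

Anything you want to do differently has to happen before (or instead of)
that hand-off - e.g. by deciding which queues go into the runlist at all.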

You're probably past this, but you might find the original introduction to KFD useful in some way:

https://lwn.net/Articles/605153/

>
>Best,
>Mark
>
>> Regards,
>>   Felix
>>
>>>
>>> Thanks in advance!
>>>
>>> Best,
>>> Mark
>>>
>>>> On 13 Feb 2018, at 2:56 PM, Felix Kuehling <felix.kuehling@amd.com>
>wrote:
>>>> There is also this: https://gpuopen.com/professional-compute/, which
>>>> gives pointers to several libraries and tools built on top of ROCm.
>>>>
>>>> Another thing to keep in mind is that ROCm is diverging from the
>>>> strict HSA standard in some important ways. For example the HSA
>>>> standard includes HSAIL as an intermediate representation that gets
>>>> finalized on the target system, whereas ROCm compiles directly to native
>GPU ISA.
>>>>
>>>> Regards,
>>>>   Felix
>>>>
>>>> On Tue, Feb 13, 2018 at 9:40 AM, Deucher, Alexander
><Alexander.Deucher@amd.com> wrote:
>>>>> The ROCm documentation is probably a good place to start:
>>>>>
>>>>> https://rocm.github.io/documentation.html
>>>>>
>>>>>
>>>>> Alex
>>>>>
>>>>> ________________________________
>>>>> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of
>>>>> Ming Yang <minos.future@gmail.com>
>>>>> Sent: Tuesday, February 13, 2018 12:00 AM
>>>>> To: amd-gfx@lists.freedesktop.org
>>>>> Subject: Documentation about AMD's HSA implementation?
>>>>>
>>>>> Hi,
>>>>>
>>>>> I'm interested in HSA and was excited when I found AMD's fully
>>>>> open stack, ROCm, supporting it. Before digging into the code, I
>>>>> wonder if there's any documentation available about AMD's HSA
>>>>> implementation, whether a book, whitepaper, paper, or other
>>>>> documentation.
>>>>>
>>>>> I did find helpful materials about HSA, including the HSA standards
>>>>> on this page (http://www.hsafoundation.com/standards/) and a nice
>>>>> book about HSA (Heterogeneous System Architecture: A New Compute
>>>>> Platform Infrastructure). But regarding documentation about AMD's
>>>>> implementation, I haven't found anything yet.
>>>>>
>>>>> Please let me know if any are publicly accessible. If not, any
>>>>> suggestions on learning the implementation of specific system
>>>>> components, e.g., queue scheduling, would be appreciated.
>>>>>
>>>>> Best,
>>>>> Mark
>>
