dri-devel.lists.freedesktop.org archive mirror
From: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
To: Jason Ekstrand <jason@jlekstrand.net>
Cc: Intel GFX <intel-gfx@lists.freedesktop.org>,
	Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>,
	Chris Wilson <chris.p.wilson@intel.com>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 1/1] i915/query: Correlate engine and cpu timestamps with better accuracy
Date: Wed, 28 Apr 2021 23:16:19 +0300	[thread overview]
Message-ID: <c7c35344-b2ee-f338-0100-409fe14a1f17@intel.com> (raw)
In-Reply-To: <6efdf140-4144-d688-16e0-4089beffce0e@intel.com>

On 28/04/2021 23:14, Lionel Landwerlin wrote:
> On 28/04/2021 22:54, Jason Ekstrand wrote:
>> On Wed, Apr 28, 2021 at 2:50 PM Lionel Landwerlin
>> <lionel.g.landwerlin@intel.com> wrote:
>>> On 28/04/2021 22:24, Jason Ekstrand wrote:
>>>
>>> On Wed, Apr 28, 2021 at 3:43 AM Jani Nikula 
>>> <jani.nikula@linux.intel.com> wrote:
>>>
>>> On Tue, 27 Apr 2021, Umesh Nerlige Ramappa 
>>> <umesh.nerlige.ramappa@intel.com> wrote:
>>>
>>> Perf measurements rely on CPU and engine timestamps to correlate
>>> events of interest across these time domains. Current mechanisms get
>>> these timestamps separately, and the calculated delta between them
>>> lacks sufficient accuracy.
>>>
>>> To improve the accuracy of these time measurements to within a few us,
>>> add a query that returns the engine and cpu timestamps captured as
>>> close to each other as possible.
>>>
>>> Cc: dri-devel, Jason and Daniel for review.
>>>
>>> Thanks!
>>>
>>> v2: (Tvrtko)
>>> - document clock reference used
>>> - return cpu timestamp always
>>> - capture cpu time just before lower dword of cs timestamp
>>>
>>> v3: (Chris)
>>> - use uncore-rpm
>>> - use __query_cs_timestamp helper
>>>
>>> v4: (Lionel)
>>> - Kernel perf subsystem allows users to specify the clock id to be used
>>>    in perf_event_open. This clock id is used by the perf subsystem to
>>>    return the appropriate cpu timestamp in perf events. Similarly, let
>>>    the user pass the clockid to this query so that cpu timestamp
>>>    corresponds to the clock id requested.
>>>
>>> v5: (Tvrtko)
>>> - Use normal ktime accessors instead of fast versions
>>> - Add more uApi documentation
>>>
>>> v6: (Lionel)
>>> - Move switch out of spinlock
>>>
>>> v7: (Chris)
>>> - cs_timestamp is a misnomer, use cs_cycles instead
>>> - return the cs cycle frequency as well in the query
>>>
>>> v8:
>>> - Add platform and engine specific checks
>>>
>>> v9: (Lionel)
>>> - Return 2 cpu timestamps in the query - captured before and after the
>>>    register read
>>>
>>> v10: (Chris)
>>> - Use local_clock() to measure time taken to read lower dword of
>>>    register and return it to user.
>>>
>>> v11: (Jani)
>>> - IS_GEN is deprecated. Use GRAPHICS_VER instead.
>>>
>>> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_query.c | 145 ++++++++++++++++++++++++++++++
>>>   include/uapi/drm/i915_drm.h       |  48 ++++++++++
>>>   2 files changed, 193 insertions(+)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_query.c b/drivers/gpu/drm/i915/i915_query.c
>>> index fed337ad7b68..2594b93901ac 100644
>>> --- a/drivers/gpu/drm/i915/i915_query.c
>>> +++ b/drivers/gpu/drm/i915/i915_query.c
>>> @@ -6,6 +6,8 @@
>>>
>>>   #include <linux/nospec.h>
>>>
>>> +#include "gt/intel_engine_pm.h"
>>> +#include "gt/intel_engine_user.h"
>>>   #include "i915_drv.h"
>>>   #include "i915_perf.h"
>>>   #include "i915_query.h"
>>> @@ -90,6 +92,148 @@ static int query_topology_info(struct drm_i915_private *dev_priv,
>>>        return total_length;
>>>   }
>>>
>>> +typedef u64 (*__ktime_func_t)(void);
>>> +static __ktime_func_t __clock_id_to_func(clockid_t clk_id)
>>> +{
>>> +     /*
>>> +      * Use the same logic as the perf subsystem to allow the user to
>>> +      * select the reference clock id to be used for timestamps.
>>> +      */
>>> +     switch (clk_id) {
>>> +     case CLOCK_MONOTONIC:
>>> +             return &ktime_get_ns;
>>> +     case CLOCK_MONOTONIC_RAW:
>>> +             return &ktime_get_raw_ns;
>>> +     case CLOCK_REALTIME:
>>> +             return &ktime_get_real_ns;
>>> +     case CLOCK_BOOTTIME:
>>> +             return &ktime_get_boottime_ns;
>>> +     case CLOCK_TAI:
>>> +             return &ktime_get_clocktai_ns;
>>> +     default:
>>> +             return NULL;
>>> +     }
>>> +}
>>> +
>>> +static inline int
>>> +__read_timestamps(struct intel_uncore *uncore,
>>> +               i915_reg_t lower_reg,
>>> +               i915_reg_t upper_reg,
>>> +               u64 *cs_ts,
>>> +               u64 *cpu_ts,
>>> +               __ktime_func_t cpu_clock)
>>> +{
>>> +     u32 upper, lower, old_upper, loop = 0;
>>> +
>>> +     upper = intel_uncore_read_fw(uncore, upper_reg);
>>> +     do {
>>> +             cpu_ts[1] = local_clock();
>>> +             cpu_ts[0] = cpu_clock();
>>> +             lower = intel_uncore_read_fw(uncore, lower_reg);
>>> +             cpu_ts[1] = local_clock() - cpu_ts[1];
>>> +             old_upper = upper;
>>> +             upper = intel_uncore_read_fw(uncore, upper_reg);
>>> +     } while (upper != old_upper && loop++ < 2);
>>> +
>>> +     *cs_ts = (u64)upper << 32 | lower;
>>> +
>>> +     return 0;
>>> +}
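The upper/lower re-read loop in __read_timestamps() above is the standard way to read a 64-bit counter exposed as two 32-bit registers without tearing across a rollover of the lower dword. A minimal standalone sketch of the same pattern (fake_counter and the helper names are hypothetical stand-ins for the hardware, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A 64-bit counter exposed as two 32-bit "registers". The counter
 * advances by 'step' on every register access, modelling a clock
 * that keeps ticking between MMIO reads.
 */
struct fake_counter {
	uint64_t value;
	uint64_t step;
};

static uint32_t read_lower(struct fake_counter *c)
{
	uint32_t lo = (uint32_t)c->value;
	c->value += c->step;
	return lo;
}

static uint32_t read_upper(struct fake_counter *c)
{
	uint32_t hi = (uint32_t)(c->value >> 32);
	c->value += c->step;
	return hi;
}

/* Same shape as the kernel loop: retry while the upper dword moved. */
static uint64_t read_counter64(struct fake_counter *c)
{
	uint32_t upper, lower, old_upper;
	int loop = 0;

	upper = read_upper(c);
	do {
		lower = read_lower(c);
		old_upper = upper;
		upper = read_upper(c);
	} while (upper != old_upper && loop++ < 2);

	return (uint64_t)upper << 32 | lower;
}

/* Convenience wrapper so the behaviour is easy to check. */
static uint64_t demo_read(uint64_t start, uint64_t step)
{
	struct fake_counter c = { start, step };
	return read_counter64(&c);
}
```

A naive single pass (read upper, then lower) could combine an old upper dword with a post-rollover lower dword and be off by 2^32 ticks; the retry loop guarantees the two halves belong to the same epoch.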
>>> +
>>> +static int
>>> +__query_cs_cycles(struct intel_engine_cs *engine,
>>> +               u64 *cs_ts, u64 *cpu_ts,
>>> +               __ktime_func_t cpu_clock)
>>> +{
>>> +     struct intel_uncore *uncore = engine->uncore;
>>> +     enum forcewake_domains fw_domains;
>>> +     u32 base = engine->mmio_base;
>>> +     intel_wakeref_t wakeref;
>>> +     int ret;
>>> +
>>> +     fw_domains = intel_uncore_forcewake_for_reg(uncore,
>>> +                                                 RING_TIMESTAMP(base),
>>> +                                                 FW_REG_READ);
>>> +
>>> +     with_intel_runtime_pm(uncore->rpm, wakeref) {
>>> +             spin_lock_irq(&uncore->lock);
>>> +             intel_uncore_forcewake_get__locked(uncore, fw_domains);
>>> +
>>> +             ret = __read_timestamps(uncore,
>>> +                                     RING_TIMESTAMP(base),
>>> +                                     RING_TIMESTAMP_UDW(base),
>>> +                                     cs_ts,
>>> +                                     cpu_ts,
>>> +                                     cpu_clock);
>>> +
>>> +             intel_uncore_forcewake_put__locked(uncore, fw_domains);
>>> +             spin_unlock_irq(&uncore->lock);
>>> +     }
>>> +
>>> +     return ret;
>>> +}
>>> +
>>> +static int
>>> +query_cs_cycles(struct drm_i915_private *i915,
>>> +             struct drm_i915_query_item *query_item)
>>> +{
>>> +     struct drm_i915_query_cs_cycles __user *query_ptr;
>>> +     struct drm_i915_query_cs_cycles query;
>>> +     struct intel_engine_cs *engine;
>>> +     __ktime_func_t cpu_clock;
>>> +     int ret;
>>> +
>>> +     if (GRAPHICS_VER(i915) < 6)
>>> +             return -ENODEV;
>>> +
>>> +     query_ptr = u64_to_user_ptr(query_item->data_ptr);
>>> +     ret = copy_query_item(&query, sizeof(query), sizeof(query), query_item);
>>> +     if (ret != 0)
>>> +             return ret;
>>> +
>>> +     if (query.flags)
>>> +             return -EINVAL;
>>> +
>>> +     if (query.rsvd)
>>> +             return -EINVAL;
>>> +
>>> +     cpu_clock = __clock_id_to_func(query.clockid);
>>> +     if (!cpu_clock)
>>> +             return -EINVAL;
>>> +
>>> +     engine = intel_engine_lookup_user(i915,
>>> +                                       query.engine.engine_class,
>>> +                                       query.engine.engine_instance);
>>> +     if (!engine)
>>> +             return -EINVAL;
>>> +
>>> +     if (GRAPHICS_VER(i915) == 6 &&
>>> +         query.engine.engine_class != I915_ENGINE_CLASS_RENDER)
>>> +             return -ENODEV;
>>> +
>>> +     query.cs_frequency = engine->gt->clock_frequency;
>>> +     ret = __query_cs_cycles(engine,
>>> +                             &query.cs_cycles,
>>> +                             query.cpu_timestamp,
>>> +                             cpu_clock);
>>> +     if (ret)
>>> +             return ret;
>>> +
>>> +     if (put_user(query.cs_frequency, &query_ptr->cs_frequency))
>>> +             return -EFAULT;
>>> +
>>> +     if (put_user(query.cpu_timestamp[0], &query_ptr->cpu_timestamp[0]))
>>> +             return -EFAULT;
>>> +
>>> +     if (put_user(query.cpu_timestamp[1], &query_ptr->cpu_timestamp[1]))
>>> +             return -EFAULT;
>>> +
>>> +     if (put_user(query.cs_cycles, &query_ptr->cs_cycles))
>>> +             return -EFAULT;
>>> +
>>> +     return sizeof(query);
>>> +}
>>> +
>>>   static int
>>>   query_engine_info(struct drm_i915_private *i915,
>>>                  struct drm_i915_query_item *query_item)
>>> @@ -424,6 +568,7 @@ static int (* const i915_query_funcs[])(struct drm_i915_private *dev_priv,
>>>        query_topology_info,
>>>        query_engine_info,
>>>        query_perf_config,
>>> +     query_cs_cycles,
>>>   };
>>>
>>>   int i915_query_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
>>> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
>>> index 6a34243a7646..08b00f1709b5 100644
>>> --- a/include/uapi/drm/i915_drm.h
>>> +++ b/include/uapi/drm/i915_drm.h
>>> @@ -2230,6 +2230,10 @@ struct drm_i915_query_item {
>>>   #define DRM_I915_QUERY_TOPOLOGY_INFO    1
>>>   #define DRM_I915_QUERY_ENGINE_INFO   2
>>>   #define DRM_I915_QUERY_PERF_CONFIG      3
>>> +     /**
>>> +      * Query Command Streamer timestamp register.
>>> +      */
>>> +#define DRM_I915_QUERY_CS_CYCLES     4
>>>   /* Must be kept compact -- no holes and well documented */
>>>
>>>        /**
>>> @@ -2397,6 +2401,50 @@ struct drm_i915_engine_info {
>>>        __u64 rsvd1[4];
>>>   };
>>>
>>> +/**
>>> + * struct drm_i915_query_cs_cycles
>>> + *
>>> + * The query returns the command streamer cycles and the frequency that can
>>> + * be used to calculate the command streamer timestamp. In addition, the
>>> + * query returns a set of cpu timestamps that indicate when the command
>>> + * streamer cycle count was captured.
>>> + */
>>> +struct drm_i915_query_cs_cycles {
>>> +     /** Engine for which command streamer cycles is queried. */
>>> +     struct i915_engine_class_instance engine;
>>>
>>> Why is this per-engine?  Do we actually expect it to change between
>>> engines?
>>>
>>>
>>> Each engine has its own timestamp register.
>>>
>>>
>>>    If so, we may have a problem because Vulkan expects a
>>> unified timestamp domain for all command streamer timestamp queries.
>>>
>>>
>>> I don't think it does: "
>>>
>>> Timestamps may only be meaningfully compared if they are written by 
>>> commands submitted to the same queue.
>> Yes, but vkGetCalibratedTimestampsEXT() doesn't take a queue or even a
>> queue family.
>
>
> I know, I brought up the issue recently. See khronos issue 2551.
>
> You might not like the resolution... I did propose to do a rev2 of the 
> extension to let the user specify the queue.
>
> We can still do that in the future.
>
>
>>    Also, VkPhysicalDeviceLimits::timestampPeriod gives a
>> single timestampPeriod for all queues.
>
>
> That is fine for us, we should have the same period on all command 
> streamers.
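For reference, Vulkan's timestampPeriod is simply the inverse of such a frequency, expressed in nanoseconds per tick. A trivial sketch of the conversion (the 12.5 MHz input below is an arbitrary example value, not a claim about any particular platform):

```c
#include <assert.h>
#include <stdint.h>

/*
 * ns per timestamp tick, as reported via
 * VkPhysicalDeviceLimits::timestampPeriod.
 */
static double timestamp_period_ns(uint64_t cs_frequency_hz)
{
	return 1e9 / (double)cs_frequency_hz;
}
```

Since the query returns cs_frequency per engine, a driver can sanity-check that all command streamers really do report the same period before advertising a single timestampPeriod.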
>
>
> -Lionel


Here is the Mesa MR using this extension, btw:
https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9407


-Lionel


>
>
>>    It's possible that Vulkan
>> messed up real bad there but I thought we did a HW survey at the time
>> and determined that it was ok.
>>
>> --Jason
>>
>>
>>> " [1]
>>>
>>>
>>> [1] : 
>>> https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkCmdWriteTimestamp.html
>>>
>>>
>>> -Lionel
>>>
>>>
>>>
>>> --Jason
>>>
>>>
>>> +     /** Must be zero. */
>>> +     __u32 flags;
>>> +
>>> +     /**
>>> +      * Command streamer cycles as read from the command streamer
>>> +      * register at 0x358 offset.
>>> +      */
>>> +     __u64 cs_cycles;
>>> +
>>> +     /** Frequency of the cs cycles in Hz. */
>>> +     __u64 cs_frequency;
>>> +
>>> +     /**
>>> +      * CPU timestamps in ns. cpu_timestamp[0] is captured before reading
>>> +      * the cs_cycles register, using the reference clockid set by the user.
>>> +      * cpu_timestamp[1] is the time taken in ns to read the lower dword of
>>> +      * the cs_cycles register.
>>> +      */
>>> +     __u64 cpu_timestamp[2];
>>> +
>>> +     /**
>>> +      * Reference clock id for CPU timestamp. For definition, see
>>> +      * clock_gettime(2) and perf_event_open(2). Supported clock ids are
>>> +      * CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, CLOCK_REALTIME,
>>> +      * CLOCK_BOOTTIME, CLOCK_TAI.
>>> +      */
>>> +     __s32 clockid;
>>> +
>>> +     /** Must be zero. */
>>> +     __u32 rsvd;
>>> +};
>>> +
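A sketch of how userspace might interpret the returned fields (this is an assumption about intended usage, not part of the patch, and the helper names are made up): cs_cycles converts to nanoseconds via cs_frequency, and the midpoint of the CPU read window is a reasonable correlated CPU time.

```c
#include <assert.h>
#include <stdint.h>

/*
 * GPU timestamp in nanoseconds. Note: multiplying first can overflow
 * for very large cycle counts; a real implementation would use
 * 128-bit math or a mul/div helper.
 */
static uint64_t gpu_ns(uint64_t cs_cycles, uint64_t cs_frequency_hz)
{
	return cs_cycles * 1000000000ull / cs_frequency_hz;
}

/*
 * Best estimate of the CPU clock at the moment the register was
 * sampled: cpu_timestamp[0] is taken just before the read and
 * cpu_timestamp[1] is the duration of the read, so take the midpoint.
 */
static uint64_t cpu_ns(uint64_t cpu_timestamp0, uint64_t read_duration_ns)
{
	return cpu_timestamp0 + read_duration_ns / 2;
}
```

The pair (gpu_ns, cpu_ns) then gives one correlation point between the two time domains, with cpu_timestamp[1] bounding the uncertainty of the match.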
>>>   /**
>>>    * struct drm_i915_query_engine_info
>>>    *
>>>
>>> -- 
>>> Jani Nikula, Intel Open Source Graphics Center
>>>
>>>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



