On Wed, Oct 17, 2018 at 12:14 AM Keith Packard <keithp@keithp.com> wrote:
> Jason Ekstrand <jason@jlekstrand.net> writes:

> > Doing all of the CPU sampling on one side or the other of the GPU
> > sampling would probably reduce our window.

> True, although as I said, it's taking several µs to get through the
> loop, and the gpu clock tick is far smaller than that, so even adding
> the two values together to make it fit the current implementation
> won't make the deviation that much larger.
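
For a sense of scale, take some illustrative numbers (the real values
are hardware-dependent, so treat these as made up): I ≈ 2 µs for the
loop, 1 ns periods for the CPU clocks, and a few tens of ns for the
GPU timestamp.  The bound then works out to roughly 2000 ns + 80 ns,
so the sampling interval completely dominates.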

> > This leaves us with a delta of I + max(P(M), P(R), P(G)).  In
> > particular, any two real-number valued times are, instantaneously,
> > within that interval.

> That, at least, would be easy to compute, and scale nicely if we
> added more clocks in the future.

> > Personally, I'm completely content to have the delta just be the
> > first one: a bound on the difference between any two real-valued
> > times.  At this point, I can guarantee you that far more thought has
> > been put into this mesa-dev discussion than was put into the spec,
> > and I think we're rapidly getting to the point of diminishing
> > returns. :-)

> It seems likely. How about we do the above computation for the current
> code and leave it at that?

Sounds like a plan.  Note that the interval I should be computed as
I = end - start + monotonic_raw_tick_ns to ensure we get a big enough
interval: the bare subtraction can under-report the true elapsed time
by up to one tick of the clock we use for bracketing.  Given that
monotonic_raw_tick_ns is likely 1, this doesn't expand things much.
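
In code, the whole computation might look something like this.  It's
only a sketch to show the shape of it; the names are made up, not
actual anv code:

   #include <stdint.h>

   /* Sketch only: hypothetical names, not the anv implementation. */
   static uint64_t
   get_max_deviation_ns(uint64_t begin,                 /* MONOTONIC_RAW, before sampling */
                        uint64_t end,                   /* MONOTONIC_RAW, after sampling */
                        uint64_t monotonic_raw_tick_ns, /* P(R), likely 1 */
                        const uint64_t *periods_ns,     /* P(M), P(R), P(G), ... */
                        unsigned num_periods)
   {
      /* I = end - start + monotonic_raw_tick_ns: pad the measured
       * interval by one tick, since the bare subtraction can
       * under-report the true elapsed time. */
      uint64_t interval_ns = end - begin + monotonic_raw_tick_ns;

      /* delta = I + max(P(M), P(R), P(G)).  Taking the max over an
       * array keeps this trivial to extend if we add more clocks. */
      uint64_t max_period_ns = 0;
      for (unsigned i = 0; i < num_periods; i++) {
         if (periods_ns[i] > max_period_ns)
            max_period_ns = periods_ns[i];
      }

      return interval_ns + max_period_ns;
   }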

I think a comment is likely also in order.  Probably not containing the entire e-mail thread but maybe some of my reasoning above?
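
Something along these lines, maybe (rough wording only, feel free to
tighten it up):

   /* The maximum deviation is the sum of the interval over which we
    * sample the clocks and the largest period of any sampled clock.
    * Each sample is taken at some real time within the interval, and
    * a sampled value can lag its real-valued time by up to one tick
    * of its clock, which can widen the gap between two samples by at
    * most one (the largest) such period beyond the interval itself. */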

--Jason