On Tue, Oct 16, 2018 at 5:56 PM Keith Packard <keithp@keithp.com> wrote:
> Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl> writes:

> > You can make the monotonic case the same as the raw case if you make
> > sure to always sample the CPU first by e.g. splitting the loops into
> > two and doing CPU in the first and GPU in the second. That way you
> > make the case above impossible.

> Right, I had thought of that, and it probably solves the problem for
> us. If more time domains are added, things become 'more complicated'
> though.

Doing all of the CPU sampling on one side or the other of the GPU sampling would probably reduce our window.
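
Something like this split-loop sketch, for instance (not actual driver code; sample_gpu_timestamp() is a made-up stand-in for reading the GPU clock, and the CPU domains here are just the POSIX clocks):

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical helper; a real driver would read the GPU's timestamp
     * register here. */
    extern uint64_t sample_gpu_timestamp(void);

    static uint64_t sample_cpu_ns(clockid_t id)
    {
       struct timespec ts;
       clock_gettime(id, &ts);
       return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    }

    /* Sample every CPU domain before the GPU domain so a GPU read can
     * never land between two CPU reads. */
    void sample_all_domains(uint64_t *mono, uint64_t *raw, uint64_t *gpu,
                            uint64_t *interval)
    {
       uint64_t begin = sample_cpu_ns(CLOCK_MONOTONIC_RAW);

       *mono = sample_cpu_ns(CLOCK_MONOTONIC);      /* CPU pass */
       *raw  = sample_cpu_ns(CLOCK_MONOTONIC_RAW);

       *gpu = sample_gpu_timestamp();               /* GPU pass */

       uint64_t end = sample_cpu_ns(CLOCK_MONOTONIC_RAW);
       *interval = end - begin;                     /* the "I" used below */
    }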
 
> > That said "start of the interval of the tick" is kinda arbitrary and
> > you could pick random other points on that interval, so depending on
> > what requirements you put on it (i.e. can the chosen position be
> > different per call, consistent but implicit or explicitly picked which
> > might be leaked through the interface) the max deviation changes. So
> > depending on interpretation this thing can be very moot ...

> It doesn't really matter what phase you use; the timer increments
> periodically, and what really matters is the time when that happens
> relative to other clocks changing.

Agreed.

Thinking about this a bit more, I think it helps to consider each clock to be a real number that's changing continuously in time, where what you actually measure is floor(x / P(x)), with P(x) being the period of clock x in nanoseconds (or ceil; it doesn't matter so long as you're consistent). At any given point, the clocks do have an exact value relative to each other. When you sample, you grab floor(M / P(M)), floor(G / P(G)), and floor(R / P(R)), all in some interval of size I. The delta between the real values sampled is at most I, but the sampling takes a floor, so the actual value of any given clock C may be as much as P(C) greater than what was sampled but it cannot be lower (assuming the floor convention). This leaves us with a delta of I + max(P(M), P(R), P(G)). In particular, any two real-number valued times are, instantaneously, within that interval.
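
In code that first bound is trivial, but just to make it concrete (a sketch; the function and parameter names are made up):

    /* Deviation bound for a single sampling: the sampling interval plus
     * the largest clock period, i.e. I + max(P(M), P(R), P(G)). */
    static uint64_t max3(uint64_t a, uint64_t b, uint64_t c)
    {
       uint64_t m = a > b ? a : b;
       return m > c ? m : c;
    }

    uint64_t single_sample_deviation(uint64_t interval_ns,
                                     uint64_t period_mono_ns,
                                     uint64_t period_raw_ns,
                                     uint64_t period_gpu_ns)
    {
       return interval_ns +
              max3(period_mono_ns, period_raw_ns, period_gpu_ns);
    }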

The next question becomes: if I sample again and assume zero clock drift, what are the bounds on the next sampling? Above, we calculated the maximum delta between the real-valued clocks. Because we're sampling again, we may pick up more phase-shift error, and any clock may again be off by as much as P(C). Assuming no drift, though, no clock is going to be off with respect to itself; it's just sampled at a different phase, so I think the most delta you can see between two clocks across the two samplings is the sum of their periods. So if the delta we're looking for is a delta for a theoretical second sampling, I think it's I plus the maximum, over all pairs of clocks, of the sum of their periods.
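
Since the maximum pairwise sum of periods is just the sum of the two largest periods, that second bound would look something like this (again just a sketch with made-up names):

    /* Deviation bound for a theoretical second sampling:
     * I + max over all clock pairs (a, b) of P(a) + P(b), which is the
     * interval plus the sum of the two largest periods. */
    uint64_t second_sample_deviation(uint64_t interval_ns,
                                     const uint64_t *periods_ns,
                                     unsigned num_clocks)
    {
       uint64_t largest = 0, second = 0;
       for (unsigned i = 0; i < num_clocks; i++) {
          if (periods_ns[i] > largest) {
             second = largest;
             largest = periods_ns[i];
          } else if (periods_ns[i] > second) {
             second = periods_ns[i];
          }
       }
       return interval_ns + largest + second;
    }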

Personally, I'm completely content to have the delta just be the first one: a bound on the difference between any two real-valued times. At this point, I can guarantee you that far more thought has been put into this mesa-dev discussion than was put into the spec, and I think we're rapidly getting to the point of diminishing returns. :-)

--Jason