From: Miroslav Lichvar
Subject: Re: Improving accuracy of PHC readings
Date: Tue, 23 Oct 2018 16:07:33 +0200
Message-ID: <20181023140733.GA12019@localhost>
References: <20181019095137.GG4407@localhost> <20181022224802.5injr523fgw2c4qz@localhost>
To: Richard Cochran
Cc: netdev@vger.kernel.org, "Keller, Jacob E"
In-Reply-To: <20181022224802.5injr523fgw2c4qz@localhost>

On Mon, Oct 22, 2018 at 03:48:02PM -0700, Richard Cochran wrote:
> On Fri, Oct 19, 2018 at 11:51:37AM +0200, Miroslav Lichvar wrote:
> > The extra timestamp doesn't fit the API of the PTP_SYS_OFFSET ioctl,
> > so it would need to shift the timestamp it returns by the missing
> > intervals (assuming the frequency offset between the PHC and system
> > clock is small), or a new ioctl could be introduced that would return
> > all timestamps in an array looking like this:
> >
> > [sys, phc, sys, sys, phc, sys, ...]
>
> How about a new ioctl with number of trials as input and single offset
> as output?

The difference between the system timestamps is important as it gives an
upper bound on the error in the offset, so I think the output should be
at least a pair of offset and delay.

The question is from which triplet the offset and delay should be
calculated. The one with the minimum delay is a good choice, but it's
not the only option. For instance, an average or median of all triplets
whose delay is smaller than the minimum + 30 nanoseconds may give a more
stable offset. This is not that different from an NTP client filtering
measurements made over the network. I'm not sure if we should try to
solve this in the kernel or in the drivers.
My preference would be to give user space all the data and process it
there. If the increased size of the array is an issue, we can reduce the
maximum number of readings.

Does that make sense?

-- 
Miroslav Lichvar