* Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
       [not found]   ` <151074872901.26264.3145709294590477412@mail.alporthouse.com>
@ 2017-11-23  7:34     ` Sagar Arun Kamble
  2017-11-23 18:59       ` Thomas Gleixner
  0 siblings, 1 reply; 7+ messages in thread
From: Sagar Arun Kamble @ 2017-11-23  7:34 UTC (permalink / raw)
  To: John Stultz, Thomas Gleixner, Stephen Boyd
  Cc: Chris Wilson, linux-kernel, Sagar A Kamble

Hi,

We need inputs on a possible optimization that can be done to the timecounter/cyclecounter structures and their usage.
This mail is in response to the review of patch https://patchwork.freedesktop.org/patch/188448/.

As per Chris's observation below, about a dozen timecounter users in the kernel define the following
structures individually:

spinlock_t lock;
struct cyclecounter cc;
struct timecounter tc;

Can we move lock and cc into tc? That would be more convenient.
It would also allow unifying the locking/overflow watchdog handling across all drivers.
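
For illustration, a rough sketch of what the consolidated structure could look like (purely indicative,
reusing the existing include/linux/timecounter.h fields; not an actual patch):

        struct timecounter {
                struct cyclecounter cc;         /* embedded instead of a per-driver copy */
                spinlock_t lock;                /* would serialize reads/updates of the fields below */
                u64 cycle_last;
                u64 nsec;
                u64 mask;
                u64 frac;
        };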

Please suggest.

Thanks
Sagar

On 11/15/2017 5:55 PM, Chris Wilson wrote:
> Quoting Sagar Arun Kamble (2017-11-15 12:13:51)
>>   #include <drm/drmP.h>
>>   #include <drm/intel-gtt.h>
>> @@ -2149,6 +2150,14 @@ struct i915_perf_stream {
>>           * @oa_config: The OA configuration used by the stream.
>>           */
>>          struct i915_oa_config *oa_config;
>> +
>> +       /**
>> +        * System time correlation variables.
>> +        */
>> +       struct cyclecounter cc;
>> +       spinlock_t systime_lock;
>> +       struct timespec64 start_systime;
>> +       struct timecounter tc;
> This pattern is repeated a lot by struct timecounter users. (I'm still
> trying to understand why the common case is not catered for by a
> convenience timecounter api.)
>
>>   };
>>   
>>   /**
>> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
>> index 00be015..72ddc34 100644
>> --- a/drivers/gpu/drm/i915/i915_perf.c
>> +++ b/drivers/gpu/drm/i915/i915_perf.c
>> @@ -192,6 +192,7 @@
>>    */
>>   
>>   #include <linux/anon_inodes.h>
>> +#include <linux/clocksource.h>
>>   #include <linux/sizes.h>
>>   #include <linux/uuid.h>
>>   
>> @@ -2391,6 +2392,56 @@ static unsigned int i915_perf_poll(struct file *file, poll_table *wait)
>>   }
>>   
>>   /**
>> + * i915_cyclecounter_read - read raw cycle/timestamp counter
>> + * @cc: cyclecounter structure
>> + */
>> +static u64 i915_cyclecounter_read(const struct cyclecounter *cc)
>> +{
>> +       struct i915_perf_stream *stream = container_of(cc, typeof(*stream), cc);
>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>> +       u64 ts_count;
>> +
>> +       intel_runtime_pm_get(dev_priv);
>> +       ts_count = I915_READ64_2x32(GEN4_TIMESTAMP,
>> +                                   GEN7_TIMESTAMP_UDW);
>> +       intel_runtime_pm_put(dev_priv);
>> +
>> +       return ts_count;
>> +}
>> +
>> +static void i915_perf_init_cyclecounter(struct i915_perf_stream *stream)
>> +{
>> +       struct drm_i915_private *dev_priv = stream->dev_priv;
>> +       int cs_ts_freq = dev_priv->perf.oa.timestamp_frequency;
>> +       struct cyclecounter *cc = &stream->cc;
>> +       u32 maxsec;
>> +
>> +       cc->read = i915_cyclecounter_read;
>> +       cc->mask = CYCLECOUNTER_MASK(CS_TIMESTAMP_WIDTH(dev_priv));
>> +       maxsec = cc->mask / cs_ts_freq;
>> +
>> +       clocks_calc_mult_shift(&cc->mult, &cc->shift, cs_ts_freq,
>> +                              NSEC_PER_SEC, maxsec);
>> +}
>> +
>> +static void i915_perf_init_timecounter(struct i915_perf_stream *stream)
>> +{
>> +#define SYSTIME_START_OFFSET   350000 /* Counter read takes about 350us */
>> +       unsigned long flags;
>> +       u64 ns;
>> +
>> +       i915_perf_init_cyclecounter(stream);
>> +       spin_lock_init(&stream->systime_lock);
>> +
>> +       getnstimeofday64(&stream->start_systime);
>> +       ns = timespec64_to_ns(&stream->start_systime) + SYSTIME_START_OFFSET;
> Use ktime directly. Or else Arnd will be back with a patch to fix it.
> (All non-ktime interfaces are effectively deprecated; obsolete for
> drivers.)
>
>> +       spin_lock_irqsave(&stream->systime_lock, flags);
>> +       timecounter_init(&stream->tc, &stream->cc, ns);
>> +       spin_unlock_irqrestore(&stream->systime_lock, flags);
>> +}
>> +
>> +/**
>>    * i915_perf_enable_locked - handle `I915_PERF_IOCTL_ENABLE` ioctl
>>    * @stream: A disabled i915 perf stream
>>    *
>> @@ -2408,6 +2459,8 @@ static void i915_perf_enable_locked(struct i915_perf_stream *stream)
>>          /* Allow stream->ops->enable() to refer to this */
>>          stream->enabled = true;
>>   
>> +       i915_perf_init_timecounter(stream);
>> +
>>          if (stream->ops->enable)
>>                  stream->ops->enable(stream);
>>   }
>> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
>> index cfdf4f8..e7e6966 100644
>> --- a/drivers/gpu/drm/i915/i915_reg.h
>> +++ b/drivers/gpu/drm/i915/i915_reg.h
>> @@ -8882,6 +8882,12 @@ enum skl_power_gate {
>>   
>>   /* Gen4+ Timestamp and Pipe Frame time stamp registers */
>>   #define GEN4_TIMESTAMP         _MMIO(0x2358)
>> +#define GEN7_TIMESTAMP_UDW     _MMIO(0x235C)
>> +#define PRE_GEN7_TIMESTAMP_WIDTH       32
>> +#define GEN7_TIMESTAMP_WIDTH           36
>> +#define CS_TIMESTAMP_WIDTH(dev_priv) \
>> +       (INTEL_GEN(dev_priv) < 7 ? PRE_GEN7_TIMESTAMP_WIDTH : \
>> +                                  GEN7_TIMESTAMP_WIDTH)
> s/PRE_GEN7/GEN4/ would be consistent.
> If you really want to add support for earlier, I9XX_.
>
> Ok. I can accept the justification, and we are not the only ones who do
> the cyclecounter -> timecounter correction like this.
> -Chris

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-23  7:34     ` Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time] Sagar Arun Kamble
@ 2017-11-23 18:59       ` Thomas Gleixner
  2017-11-24  9:06         ` Sagar Arun Kamble
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Gleixner @ 2017-11-23 18:59 UTC (permalink / raw)
  To: Sagar Arun Kamble; +Cc: John Stultz, Stephen Boyd, Chris Wilson, linux-kernel

On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
> We needed inputs on possible optimization that can be done to
> timecounter/cyclecounter structures/usage.
> This mail is in response to review of patch
> https://patchwork.freedesktop.org/patch/188448/.
> 
> As Chris's observation below, about dozen of timecounter users in the kernel
> have below structures
> defined individually:
> 
> spinlock_t lock;
> struct cyclecounter cc;
> struct timecounter tc;
> 
> Can we move lock and cc to tc? That way it will be convenient.
> Also it will allow unifying the locking/overflow watchdog handling across all
> drivers.

Looks like none of the timecounter usage sites has a real need to separate
timecounter and cyclecounter.

The lock is a different question. The locking of the various drivers
differs and I have no idea how you want to handle that. Just sticking the
lock into the data structure and then not making use of it in the
timecounter code, leaving it to the call sites, does not make sense.

Thanks,

	tglx

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-23 18:59       ` Thomas Gleixner
@ 2017-11-24  9:06         ` Sagar Arun Kamble
  2017-11-24 13:31           ` Thomas Gleixner
  0 siblings, 1 reply; 7+ messages in thread
From: Sagar Arun Kamble @ 2017-11-24  9:06 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: John Stultz, Stephen Boyd, Chris Wilson, linux-kernel



On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>> We needed inputs on possible optimization that can be done to
>> timecounter/cyclecounter structures/usage.
>> This mail is in response to review of patch
>> https://patchwork.freedesktop.org/patch/188448/.
>>
>> As Chris's observation below, about dozen of timecounter users in the kernel
>> have below structures
>> defined individually:
>>
>> spinlock_t lock;
>> struct cyclecounter cc;
>> struct timecounter tc;
>>
>> Can we move lock and cc to tc? That way it will be convenient.
>> Also it will allow unifying the locking/overflow watchdog handling across all
>> drivers.
> Looks like none of the timecounter usage sites has a real need to separate
> timecounter and cyclecounter.

Yes. I will share a patch for this change.

> The lock is a different question. The locking of the various drivers
> differs and I have no idea how you want to handle that. Just sticking the
> lock into the datastructure and then not making use of it in the
> timercounter code and leave it to the callsites does not make sense.

Most of the locks are held around timecounter_read(). In some instances the lock is held while the
cyclecounter is updated standalone, or while it is updated along with the timecounter calls.
I was thinking that if we move the lock into the timecounter functions, drivers would only have to do
their own locking around their operations on the cyclecounter. But another problem I see is that there
are variations of the locking calls, such as spin_lock_irqsave, spin_lock_bh and write_lock_irqsave
(some drivers use rwlock_t). Should all of this locking be left to the drivers, then?
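
As a purely hypothetical sketch of the "lock inside the timecounter functions" idea (the lock member and
the __timecounter_read() helper below are made up, not an existing API):

        u64 timecounter_read(struct timecounter *tc)
        {
                unsigned long flags;
                u64 nsec;

                spin_lock_irqsave(&tc->lock, flags);
                nsec = __timecounter_read(tc);  /* today's unlocked implementation, renamed */
                spin_unlock_irqrestore(&tc->lock, flags);

                return nsec;
        }

Drivers with a fixed cyclecounter would then need no lock of their own, but the irqsave variant may be
heavier than what some callers need (e.g. those that only take the bh variant or share an rwlock_t with
other state), which is the variation problem above.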

> Thanks,
>
> 	tglx

Thanks
Sagar

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-24  9:06         ` Sagar Arun Kamble
@ 2017-11-24 13:31           ` Thomas Gleixner
  2017-11-27 10:05             ` Sagar Arun Kamble
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Gleixner @ 2017-11-24 13:31 UTC (permalink / raw)
  To: Sagar Arun Kamble; +Cc: John Stultz, Stephen Boyd, Chris Wilson, linux-kernel

On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
> > On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
> > > We needed inputs on possible optimization that can be done to
> > > timecounter/cyclecounter structures/usage.
> > > This mail is in response to review of patch
> > > https://patchwork.freedesktop.org/patch/188448/.
> > > 
> > > As Chris's observation below, about dozen of timecounter users in the
> > > kernel
> > > have below structures
> > > defined individually:
> > > 
> > > spinlock_t lock;
> > > struct cyclecounter cc;
> > > struct timecounter tc;
> > > 
> > > Can we move lock and cc to tc? That way it will be convenient.
> > > Also it will allow unifying the locking/overflow watchdog handling across
> > > all
> > > drivers.
> > Looks like none of the timecounter usage sites has a real need to separate
> > timecounter and cyclecounter.
> 
> Yes. Will share patch for this change.
> 
> > The lock is a different question. The locking of the various drivers
> > differs and I have no idea how you want to handle that. Just sticking the
> > lock into the datastructure and then not making use of it in the
> > timercounter code and leave it to the callsites does not make sense.
> 
> Most of the locks are held around timecounter_read. In some instances it
> is held when cyclecounter is updated standalone or is updated along with
> timecounter calls.  Was thinking if we move the lock in timecounter
> functions, drivers just have to do locking around its operations on
> cyclecounter. But then another problem I see is there are variation of
> locking calls like lock_irqsave, lock_bh, write_lock_irqsave (some using
> rwlock_t). Should this all locking be left to driver only then?

You could have the lock in the struct and protect the inner workings in the
related core functions.

That might remove locking requirements from some of the callers and the
others still have their own thing around it.

Thanks,

	tglx

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-24 13:31           ` Thomas Gleixner
@ 2017-11-27 10:05             ` Sagar Arun Kamble
  2017-11-30 21:03               ` Saeed Mahameed
  0 siblings, 1 reply; 7+ messages in thread
From: Sagar Arun Kamble @ 2017-11-27 10:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: John Stultz, Stephen Boyd, Chris Wilson, linux-kernel, netdev,
	linux-rdma



On 11/24/2017 7:01 PM, Thomas Gleixner wrote:
> On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
>> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
>>> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>>>> We needed inputs on possible optimization that can be done to
>>>> timecounter/cyclecounter structures/usage.
>>>> This mail is in response to review of patch
>>>> https://patchwork.freedesktop.org/patch/188448/.
>>>>
>>>> As Chris's observation below, about dozen of timecounter users in the
>>>> kernel
>>>> have below structures
>>>> defined individually:
>>>>
>>>> spinlock_t lock;
>>>> struct cyclecounter cc;
>>>> struct timecounter tc;
>>>>
>>>> Can we move lock and cc to tc? That way it will be convenient.
>>>> Also it will allow unifying the locking/overflow watchdog handling across
>>>> all
>>>> drivers.
>>> Looks like none of the timecounter usage sites has a real need to separate
>>> timecounter and cyclecounter.
>> Yes. Will share patch for this change.
>>
>>> The lock is a different question. The locking of the various drivers
>>> differs and I have no idea how you want to handle that. Just sticking the
>>> lock into the datastructure and then not making use of it in the
>>> timercounter code and leave it to the callsites does not make sense.
>> Most of the locks are held around timecounter_read. In some instances it
>> is held when cyclecounter is updated standalone or is updated along with
>> timecounter calls.  Was thinking if we move the lock in timecounter
>> functions, drivers just have to do locking around its operations on
>> cyclecounter. But then another problem I see is there are variation of
>> locking calls like lock_irqsave, lock_bh, write_lock_irqsave (some using
>> rwlock_t). Should this all locking be left to driver only then?
> You could have the lock in the struct and protect the inner workings in the
> related core functions.
>
> That might remove locking requirements from some of the callers and the
> others still have their own thing around it.

For drivers having a static/fixed cyclecounter, we could rely only on the lock inside the timecounter.
Most of the network drivers update the cyclecounter at runtime, though, and they would have to rely on two locks if
we add one to the timecounter; that may not be efficient for them. Also, the lock in the timecounter would have to be less restrictive (maybe a seqlock), I guess.
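
If the lock inside the timecounter were a seqlock, the reader side (whether in the core or in a driver)
could look roughly like this (illustrative only; the drv_clock structure and its members are made up here):

        u64 drv_read_ns(struct drv_clock *clock)
        {
                unsigned int seq;
                u64 cycles, ns;

                do {
                        seq = read_seqbegin(&clock->seqlock);
                        cycles = clock->cc.read(&clock->cc);
                        ns = timecounter_cyc2time(&clock->tc, cycles);
                } while (read_seqretry(&clock->seqlock, seq));

                return ns;
        }

with the writers (frequency adjustment, overflow update) still serialized by write_seqlock()/write_sequnlock().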

Cc'd Mellanox list for inputs on this.

I have started feeling that the current approach, where the drivers manage the locks, is the right one, so it is
better to leave the lock out of the timecounter.

> Thanks,
>
> 	tglx

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-27 10:05             ` Sagar Arun Kamble
@ 2017-11-30 21:03               ` Saeed Mahameed
  2017-12-01  7:42                 ` Sagar Arun Kamble
  0 siblings, 1 reply; 7+ messages in thread
From: Saeed Mahameed @ 2017-11-30 21:03 UTC (permalink / raw)
  To: Sagar Arun Kamble
  Cc: Thomas Gleixner, John Stultz, Stephen Boyd, Chris Wilson,
	linux-kernel, Linux Netdev List, linux-rdma

On Mon, Nov 27, 2017 at 2:05 AM, Sagar Arun Kamble
<sagar.a.kamble@intel.com> wrote:
>
>
> On 11/24/2017 7:01 PM, Thomas Gleixner wrote:
>>
>> On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
>>>
>>> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
>>>>
>>>> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>>>>>
>>>>> We needed inputs on possible optimization that can be done to
>>>>> timecounter/cyclecounter structures/usage.
>>>>> This mail is in response to review of patch
>>>>> https://patchwork.freedesktop.org/patch/188448/.
>>>>>
>>>>> As Chris's observation below, about dozen of timecounter users in the
>>>>> kernel
>>>>> have below structures
>>>>> defined individually:
>>>>>
>>>>> spinlock_t lock;
>>>>> struct cyclecounter cc;
>>>>> struct timecounter tc;
>>>>>
>>>>> Can we move lock and cc to tc? That way it will be convenient.
>>>>> Also it will allow unifying the locking/overflow watchdog handling
>>>>> across
>>>>> all
>>>>> drivers.
>>>>
>>>> Looks like none of the timecounter usage sites has a real need to
>>>> separate
>>>> timecounter and cyclecounter.
>>>
>>> Yes. Will share patch for this change.
>>>
>>>> The lock is a different question. The locking of the various drivers
>>>> differs and I have no idea how you want to handle that. Just sticking
>>>> the
>>>> lock into the datastructure and then not making use of it in the
>>>> timercounter code and leave it to the callsites does not make sense.
>>>
>>> Most of the locks are held around timecounter_read. In some instances it
>>> is held when cyclecounter is updated standalone or is updated along with
>>> timecounter calls.  Was thinking if we move the lock in timecounter
>>> functions, drivers just have to do locking around its operations on
>>> cyclecounter. But then another problem I see is there are variation of
>>> locking calls like lock_irqsave, lock_bh, write_lock_irqsave (some using
>>> rwlock_t). Should this all locking be left to driver only then?
>>
>> You could have the lock in the struct and protect the inner workings in
>> the
>> related core functions.
>>
>> That might remove locking requirements from some of the callers and the
>> others still have their own thing around it.
>
>
> For drivers having static/fixed cyclecounter, we can rely only on lock
> inside timecounter.
> Most of the network drivers update cyclecounter at runtime and they will
> have to rely on two locks if
> we add one to timecounter. This may not be efficient for them. Also the lock
> in timecounter has to be less restrictive (may be seqlock) I guess.
>
> Cc'd Mellanox list for inputs on this.
>
> I have started feeling that the current approach of drivers managing the
> locks is the right one so better leave the
> lock out of timecounter.
>

I agree here,

In mlx5 we rely on our own read/write lock to serialize access to the
mlx5_clock struct (the mlx5 timecounter and cyclecounter).
The access is not as simple as:

    lock()
    call timecounter API
    unlock()

Sometimes we also explicitly update/adjust the cycle counter with
mlx5-specific calculations after we read the timecounter, all inside
our lock, e.g. in mlx5_ptp_adjfreq():

    write_lock_irqsave(&clock->lock, flags);
    timecounter_read(&clock->tc);
    clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
                       clock->nominal_c_mult + diff;
    write_unlock_irqrestore(&clock->lock, flags);

So I don't think it will be a simple task to have a generic thread-safe
timecounter API without the need to specifically adjust it for
all driver use cases.
Also, as said above, at runtime it is not obvious in which context the
timecounter will be accessed: irq, soft irq or user.

Let's keep it as is, and let each driver decide which locking scheme is
most suitable for it.

Thanks,
Saeed.

>> Thanks,
>>
>>         tglx
>
>

* Re: Creating cyclecounter and lock member in timecounter structure [ Was Re: [RFC 1/4] drm/i915/perf: Add support to correlate GPU timestamp with system time]
  2017-11-30 21:03               ` Saeed Mahameed
@ 2017-12-01  7:42                 ` Sagar Arun Kamble
  0 siblings, 0 replies; 7+ messages in thread
From: Sagar Arun Kamble @ 2017-12-01  7:42 UTC (permalink / raw)
  To: Saeed Mahameed
  Cc: Thomas Gleixner, John Stultz, Stephen Boyd, Chris Wilson,
	linux-kernel, Linux Netdev List, linux-rdma



On 12/1/2017 2:33 AM, Saeed Mahameed wrote:
> On Mon, Nov 27, 2017 at 2:05 AM, Sagar Arun Kamble
> <sagar.a.kamble@intel.com> wrote:
>>
>> On 11/24/2017 7:01 PM, Thomas Gleixner wrote:
>>> On Fri, 24 Nov 2017, Sagar Arun Kamble wrote:
>>>> On 11/24/2017 12:29 AM, Thomas Gleixner wrote:
>>>>> On Thu, 23 Nov 2017, Sagar Arun Kamble wrote:
>>>>>> We needed inputs on possible optimization that can be done to
>>>>>> timecounter/cyclecounter structures/usage.
>>>>>> This mail is in response to review of patch
>>>>>> https://patchwork.freedesktop.org/patch/188448/.
>>>>>>
>>>>>> As Chris's observation below, about dozen of timecounter users in the
>>>>>> kernel
>>>>>> have below structures
>>>>>> defined individually:
>>>>>>
>>>>>> spinlock_t lock;
>>>>>> struct cyclecounter cc;
>>>>>> struct timecounter tc;
>>>>>>
>>>>>> Can we move lock and cc to tc? That way it will be convenient.
>>>>>> Also it will allow unifying the locking/overflow watchdog handling
>>>>>> across
>>>>>> all
>>>>>> drivers.
>>>>> Looks like none of the timecounter usage sites has a real need to
>>>>> separate
>>>>> timecounter and cyclecounter.
>>>> Yes. Will share patch for this change.
>>>>
>>>>> The lock is a different question. The locking of the various drivers
>>>>> differs and I have no idea how you want to handle that. Just sticking
>>>>> the
>>>>> lock into the datastructure and then not making use of it in the
>>>>> timercounter code and leave it to the callsites does not make sense.
>>>> Most of the locks are held around timecounter_read. In some instances it
>>>> is held when cyclecounter is updated standalone or is updated along with
>>>> timecounter calls.  Was thinking if we move the lock in timecounter
>>>> functions, drivers just have to do locking around its operations on
>>>> cyclecounter. But then another problem I see is there are variation of
>>>> locking calls like lock_irqsave, lock_bh, write_lock_irqsave (some using
>>>> rwlock_t). Should this all locking be left to driver only then?
>>> You could have the lock in the struct and protect the inner workings in
>>> the
>>> related core functions.
>>>
>>> That might remove locking requirements from some of the callers and the
>>> others still have their own thing around it.
>>
>> For drivers having static/fixed cyclecounter, we can rely only on lock
>> inside timecounter.
>> Most of the network drivers update cyclecounter at runtime and they will
>> have to rely on two locks if
>> we add one to timecounter. This may not be efficient for them. Also the lock
>> in timecounter has to be less restrictive (may be seqlock) I guess.
>>
>> Cc'd Mellanox list for inputs on this.
>>
>> I have started feeling that the current approach of drivers managing the
>> locks is the right one so better leave the
>> lock out of timecounter.
>>
> I agree here,
>
> In mlx5 we rely on our own read/write lock to serialize access to
> mlx5_clock struct (mlx5 timecounter and cyclecounter).
> the access is not as simple as
> lock()
> call time_counter_API
> unlock()
>
> Sometimes we also explicitly update/adjust timecycles counters with
> mlx5 specific calculations after we read the timecounter all inside
> our lock.
> e.g.
> @mlx5_ptp_adjfreq()
>
>      write_lock_irqsave(&clock->lock, flags);
>      timecounter_read(&clock->tc);
>      clock->cycles.mult = neg_adj ? clock->nominal_c_mult - diff :
>                         clock->nominal_c_mult + diff;
>      write_unlock_irqrestore(&clock->lock, flags);
>
> So i don't think it will be a simple task to have a generic thread
> safe timecounter API, without the need to specifically adjust it for
> all driver use-cases.
> Also as said above, in runtime it is not obvious in which context the
> timecounter will be accessed irq/soft irq/user.
>
> let's keep it as is, and let the driver decide which locking scheme is
> most suitable for it.

Yes. Thanks for your inputs, Saeed.

Regards
Sagar

>
> Thanks,
> Saeed.
>
>>> Thanks,
>>>
>>>          tglx
>>
