From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752292AbcAFSGT (ORCPT ); Wed, 6 Jan 2016 13:06:19 -0500
Received: from mx1.redhat.com ([209.132.183.28]:59553 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751605AbcAFSGR (ORCPT ); Wed, 6 Jan 2016 13:06:17 -0500
Message-ID: <568D5797.8000904@redhat.com>
Date: Wed, 06 Jan 2016 13:06:15 -0500
From: Prarit Bhargava
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
MIME-Version: 1.0
To: John Stultz
CC: lkml, Thomas Gleixner, Ingo Molnar, Xunlei Pang, Peter Zijlstra,
	Baolin Wang, Arnd Bergmann
Subject: Re: [PATCH 1/2] kernel, timekeeping, add trylock option to ktime_get_with_offset()
References: <1452085234-10667-1-git-send-email-prarit@redhat.com>
	<1452085234-10667-2-git-send-email-prarit@redhat.com>
In-Reply-To:
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 01/06/2016 12:33 PM, John Stultz wrote:
> On Wed, Jan 6, 2016 at 9:28 AM, John Stultz wrote:
>> On Wed, Jan 6, 2016 at 5:00 AM, Prarit Bhargava wrote:
>>> -ktime_t ktime_get_with_offset(enum tk_offsets offs)
>>> +ktime_t ktime_get_with_offset(enum tk_offsets offs, int trylock)
>>>  {
>>>         struct timekeeper *tk = &tk_core.timekeeper;
>>>         unsigned int seq;
>>>         ktime_t base, *offset = offsets[offs];
>>>         s64 nsecs;
>>> +       unsigned long flags = 0;
>>> +
>>> +       if (unlikely(!timekeeping_initialized))
>>> +               return ktime_set(0, 0);
>>>
>>>         WARN_ON(timekeeping_suspended);
>>>
>>> +       if (trylock && !raw_spin_trylock_irqsave(&timekeeper_lock, flags))
>>> +               return ktime_set(KTIME_MAX, 0);
>>
>> Wait.. this doesn't make sense. The timekeeper lock is only for reading.
>
> Only for writing.. sorry.. still drinking my coffee.
>
>> What I was suggesting to you offline is to have something that avoids
>> spinning on the seqcounter, so that if a bug occurs and we IPI all the
>> cpus, we don't deadlock or block any printk messages.
>
> And more clearly here, if a cpu takes a write on the seqcounter in
> update_wall_time() and at that point another cpu hits a bug and IPIs
> the cpus, the system would deadlock. That's really what I want to
> avoid.

Right -- but the only time that the seq_lock is taken for writing is when
the timekeeper_lock is acquired (including update_wall_time()). This means
that

	if (!raw_spin_trylock_irqsave(&timekeeper_lock, flags))

is equivalent to

	if (tk_core.seq & 1) /* seqcount is odd while a write is in progress */

The problem with the latter is that nothing protects against a writer
setting tk_core.seq odd AFTER I've read it; that protection, AFAICT, comes
from the timekeeper_lock. That means I need to check whether the
timekeeper_lock is held. And the patch does exactly that -- it checks
whether the lock is available, and if not, avoids spinning on the seq_lock.

P.

>
> thanks
> -john
>
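
For readers following the thread, here is a minimal, self-contained user-space
sketch of the distinction being argued above: a seqcount-style reader that spins
while a write is in progress, versus a "trylock" reader that tests the writer
lock and bails out instead of spinning. The names (demo_lock, demo_seq,
demo_value) and the pthread/GCC-builtin plumbing are illustrative stand-ins, not
the kernel's timekeeper code; the actual kernel path uses timekeeper_lock,
tk_core.seq and raw_spin_trylock_irqsave() as shown in the quoted patch.

/* Build with: gcc -pthread demo.c -o demo */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for timekeeper_lock */
static volatile unsigned int demo_seq;                        /* stand-in for tk_core.seq */
static volatile int64_t demo_value;                           /* stand-in for the timekeeper data */

/* Writer: take the lock, bump seq to odd, update, bump seq back to even. */
static void demo_update(int64_t v)
{
	pthread_mutex_lock(&demo_lock);
	demo_seq++;                     /* odd: write in progress */
	__sync_synchronize();
	demo_value = v;
	__sync_synchronize();
	demo_seq++;                     /* even: write finished */
	pthread_mutex_unlock(&demo_lock);
}

/* Normal reader: spins until it sees a stable, even sequence count. */
static int64_t demo_read(void)
{
	unsigned int seq;
	int64_t v;

	do {
		while ((seq = demo_seq) & 1)
			;               /* spin while a write is in progress */
		__sync_synchronize();
		v = demo_value;
		__sync_synchronize();
	} while (seq != demo_seq);      /* retry if a write raced with us */
	return v;
}

/*
 * Trylock reader: never spins.  If the writer lock cannot be taken, a write
 * may be in progress (or about to start), so return failure instead --
 * the analogue of returning ktime_set(KTIME_MAX, 0) in the patch.
 */
static bool demo_read_trylock(int64_t *out)
{
	if (pthread_mutex_trylock(&demo_lock) != 0)
		return false;           /* writer active: don't block, don't spin */
	*out = demo_value;              /* safe: holding the lock excludes writers */
	pthread_mutex_unlock(&demo_lock);
	return true;
}

int main(void)
{
	int64_t v;

	demo_update(42);
	printf("spinning read: %lld\n", (long long)demo_read());
	if (demo_read_trylock(&v))
		printf("trylock read:  %lld\n", (long long)v);
	else
		printf("trylock read:  writer active, bailed out\n");
	return 0;
}

The point of the trylock variant is the same as in the patch: in an NMI or
backtrace path the caller would rather get an obviously bogus value (KTIME_MAX
in the patch, a false return here) than spin on a sequence counter whose writer
may never get to complete.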