From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1756041AbeASPor (ORCPT );
	Fri, 19 Jan 2018 10:44:47 -0500
Received: from smtp.codeaurora.org ([198.145.29.96]:40552 "EHLO smtp.codeaurora.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755749AbeASPoi (ORCPT );
	Fri, 19 Jan 2018 10:44:38 -0500
DMARC-Filter: OpenDMARC Filter v1.3.2 smtp.codeaurora.org BBACA605A4
Authentication-Results: pdx-caf-mail.web.codeaurora.org; dmarc=none (p=none dis=none) header.from=codeaurora.org
Authentication-Results: pdx-caf-mail.web.codeaurora.org; spf=none smtp.mailfrom=pkondeti@codeaurora.org
Date: Fri, 19 Jan 2018 21:14:30 +0530
From: Pavan Kondeti
To: Steven Rostedt
Cc: williams@redhat.com, Ingo Molnar, LKML, Peter Zijlstra,
	Thomas Gleixner, bristot@redhat.com, jkacur@redhat.com,
	efault@gmx.de, hpa@zytor.com, torvalds@linux-foundation.org,
	swood@redhat.com, linux-tip-commits@vger.kernel.org
Subject: Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic
Message-ID: <20180119154430.GC14011@codeaurora.org>
References: <20170424114732.1aac6dc4@gandalf.local.home>
	<20180119100353.7f9f5154@gandalf.local.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180119100353.7f9f5154@gandalf.local.home>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Steven,

On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 14:53:05 +0530
> Pavan Kondeti wrote:
>
> > I am seeing a "spinlock already unlocked" BUG for rd->rto_lock on a 4.9
> > stable kernel based system. This issue is observed only after
> > inclusion of this patch. It appears to me that rq->rd can change
> > between the spinlock being acquired and released in the
> > rto_push_irq_work_func() IRQ work if hotplug is in progress.
> > It was only reported a couple of times during long stress testing. The
> > issue can be easily reproduced if an artificial delay is introduced
> > between the lock and unlock of rto_lock. Since rq->rd is changed under
> > rq->lock, we can protect against this race with rq->lock. The below
> > patch solved the problem. We are already taking rq->lock in
> > pull_rt_task()->tell_cpu_to_push(), so I extended the same here.
> > Please let me know your thoughts on this.
>
> Ah, so rq->rd can change. Interesting.
>
> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index d863d39..478192b 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -2284,6 +2284,7 @@ void rto_push_irq_work_func(struct irq_work *work)
> >  		raw_spin_unlock(&rq->lock);
> >  	}
> >
> > +	raw_spin_lock(&rq->lock);
>
> What about just saving the rd then?
>
> 	struct root_domain *rd;
>
> 	rd = READ_ONCE(rq->rd);
>
> then use that. Then we don't need to worry about it changing.
>

Yeah, it should work. I will give it a try and send the patch for review.

--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project.