Date: Fri, 19 Jan 2018 23:16:17 +0530
From: Pavan Kondeti <pkondeti@codeaurora.org>
To: Steven Rostedt
Cc: williams@redhat.com, Ingo Molnar, LKML, Peter Zijlstra,
	Thomas Gleixner, bristot@redhat.com, jkacur@redhat.com,
	efault@gmx.de, hpa@zytor.com, torvalds@linux-foundation.org,
	swood@redhat.com, linux-tip-commits@vger.kernel.org
Subject: Re: [tip:sched/core] sched/rt: Simplify the IPI based RT balancing logic
Message-ID: <20180119174617.GA6563@codeaurora.org>
References: <20170424114732.1aac6dc4@gandalf.local.home>
 <20180119100353.7f9f5154@gandalf.local.home>
In-Reply-To: <20180119100353.7f9f5154@gandalf.local.home>

On Fri, Jan 19, 2018 at 10:03:53AM -0500, Steven Rostedt wrote:
> On Fri, 19 Jan 2018 14:53:05 +0530
> Pavan Kondeti <pkondeti@codeaurora.org> wrote:
>
> > I am seeing a "spinlock already unlocked" BUG for rd->rto_lock on a
> > 4.9 stable kernel based system. This issue is observed only after the
> > inclusion of this patch. It appears to me that rq->rd can change
> > between the acquire and the release of the spinlock in the
> > rto_push_irq_work_func() IRQ work if hotplug is in progress. It was
> > reported only a couple of times during long stress testing, but the
> > issue can be reproduced easily if an artificial delay is introduced
> > between the lock and unlock of rto_lock. Since rq->rd is changed
> > under rq->lock, we can protect against this race with rq->lock. The
> > patch below solved the problem. We already take rq->lock in
> > pull_rt_task()->tell_cpu_to_push(), so I extended the same approach
> > here. Please let me know your thoughts on this.
>
> Ah, so rq->rd can change. Interesting.
>
> >
> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index d863d39..478192b 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -2284,6 +2284,7 @@ void rto_push_irq_work_func(struct irq_work *work)
> >  		raw_spin_unlock(&rq->lock);
> >  	}
> >
> > +	raw_spin_lock(&rq->lock);
>
> What about just saving the rd then?
>
> 	struct root_domain *rd;
>
> 	rd = READ_ONCE(rq->rd);
>
> then use that. Then we don't need to worry about it changing.

I am thinking of another problem, caused by the race between
rto_push_irq_work_func() and rq_attach_root(), where rq->rd is
modified. Say we cache rq->rd here and queue the IRQ work on a remote
CPU. In the meantime, rq_attach_root() might drop all references to
this cached (old) rd and want to free it; the rd is freed in an
RCU-sched callback. If that remote CPU is in an RCU quiescent state,
the rd can get freed before the IRQ work is executed. This corrupts
the remote CPU's IRQ work list. Right? Taking rq->lock in
rto_push_irq_work_func() does not help here either.

We probably have to wait for the IRQ work to finish before freeing the
old root domain in the RCU-sched callback.
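To make both the window and the idea concrete, below is an untested
sketch against our 4.9-based tree (existing function bodies trimmed;
note that rto_next_cpu() would also have to be switched to the cached
rd, since it dereferences rq->rd internally):

	void rto_push_irq_work_func(struct irq_work *work)
	{
		struct root_domain *rd;
		struct rq *rq = this_rq();
		int cpu;

		if (has_pushable_tasks(rq)) {
			raw_spin_lock(&rq->lock);
			push_rt_tasks(rq);
			raw_spin_unlock(&rq->lock);
		}

		/* Snapshot rd once, so a concurrent rq_attach_root() cannot swap it */
		rd = READ_ONCE(rq->rd);

		raw_spin_lock(&rd->rto_lock);

		/* Pass the IPI to the next rt overloaded queue */
		cpu = rto_next_cpu(rq);

		raw_spin_unlock(&rd->rto_lock);

		if (cpu < 0)
			return;

		/*
		 * We are in hard IRQ context, so RCU-sched cannot complete a
		 * grace period while we are here and the queueing itself is
		 * safe. But once we return, rq_attach_root()'s
		 * call_rcu_sched() callback may free rd while
		 * &rd->rto_push_work still sits on the remote CPU's irq_work
		 * list, corrupting that list when it is later walked.
		 */
		irq_work_queue_on(&rd->rto_push_work, cpu);
	}

	static void free_rootdomain(struct rcu_head *rcu)
	{
		struct root_domain *rd = container_of(rcu, struct root_domain, rcu);

		/*
		 * Proposed: wait for a queued or running rto_push_work to
		 * finish before tearing the rd down.
		 */
		irq_work_sync(&rd->rto_push_work);

		cpupri_cleanup(&rd->cpupri);
		cpudl_cleanup(&rd->cpudl);
		free_cpumask_var(rd->dlo_mask);
		free_cpumask_var(rd->rto_mask);
		free_cpumask_var(rd->online);
		free_cpumask_var(rd->span);
		kfree(rd);
	}

The part I am least sure about is calling irq_work_sync() from the
RCU-sched callback: it only busy-waits on the work's busy flag, so it
should be legal in the softirq context where these callbacks run, but
I have not tested this.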
Thanks,
Pavan

--
Qualcomm India Private Limited, on behalf of Qualcomm Innovation
Center, Inc. Qualcomm Innovation Center, Inc. is a member of Code
Aurora Forum, a Linux Foundation Collaborative Project.