Date: Mon, 24 Aug 2020 10:47:50 -0400
From: Alan Stern
To: Bart Van Assche
Cc: Jens Axboe, linux-block@vger.kernel.org, Christoph Hellwig, Stanley Chu,
	Ming Lei, stable, Can Guo
Subject: Re: [PATCH] block: Fix a race in the runtime power management code
Message-ID: <20200824144750.GC329866@rowland.harvard.edu>
References: <20200824030607.19357-1-bvanassche@acm.org>
In-Reply-To: <20200824030607.19357-1-bvanassche@acm.org>

On Sun, Aug 23, 2020 at 08:06:07PM -0700, Bart Van Assche wrote:
> With the current implementation the following race can happen:
> * blk_pre_runtime_suspend() calls blk_freeze_queue_start() and
>   blk_mq_unfreeze_queue().
> * blk_queue_enter() calls blk_queue_pm_only() and that function returns
>   true.
> * blk_queue_enter() calls blk_pm_request_resume() and that function does
>   not call pm_request_resume() because the queue runtime status is
>   RPM_ACTIVE.
> * blk_pre_runtime_suspend() changes the queue status into RPM_SUSPENDING.
>
> Fix this race by changing the queue runtime status into RPM_SUSPENDING
> before switching q_usage_counter to atomic mode.
>
> Cc: Alan Stern
> Cc: Stanley Chu
> Cc: Ming Lei
> Cc: stable
> Fixes: 986d413b7c15 ("blk-mq: Enable support for runtime power management")
> Signed-off-by: Can Guo
> Signed-off-by: Bart Van Assche
> ---
>  block/blk-pm.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/block/blk-pm.c b/block/blk-pm.c
> index b85234d758f7..17bd020268d4 100644
> --- a/block/blk-pm.c
> +++ b/block/blk-pm.c
> @@ -67,6 +67,10 @@ int blk_pre_runtime_suspend(struct request_queue *q)
>  
>  	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
>  
> +	spin_lock_irq(&q->queue_lock);
> +	q->rpm_status = RPM_SUSPENDING;
> +	spin_unlock_irq(&q->queue_lock);
> +
>  	/*
>  	 * Increase the pm_only counter before checking whether any
>  	 * non-PM blk_queue_enter() calls are in progress to avoid that any
> @@ -89,15 +93,14 @@ int blk_pre_runtime_suspend(struct request_queue *q)
>  	/* Switch q_usage_counter back to per-cpu mode. */
>  	blk_mq_unfreeze_queue(q);
>  
> -	spin_lock_irq(&q->queue_lock);
> -	if (ret < 0)
> +	if (ret < 0) {
> +		spin_lock_irq(&q->queue_lock);
> +		q->rpm_status = RPM_ACTIVE;
>  		pm_runtime_mark_last_busy(q->dev);
> -	else
> -		q->rpm_status = RPM_SUSPENDING;
> -	spin_unlock_irq(&q->queue_lock);
> +		spin_unlock_irq(&q->queue_lock);
>  
> -	if (ret)
>  		blk_clear_pm_only(q);
> +	}
>  
>  	return ret;
> }

Acked-by: Alan Stern
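
For readers following the race description: it hinges on blk_pm_request_resume()
declining to kick a resume while rpm_status is still RPM_ACTIVE, even though the
submitter has already seen pm_only set. The stand-alone C model below is a sketch,
not kernel code; the names mirror the block layer but the logic is deliberately
simplified and single-threaded, with the racing submitter's steps called at the
point where they would interleave. It replays the sequence once with the old
ordering and once with the patched ordering so the effect of setting
RPM_SUSPENDING earlier can be seen directly.

/*
 * Stand-alone model of the race described above -- NOT kernel code.
 * The names mirror the block layer, but everything here is simplified.
 */
#include <stdbool.h>
#include <stdio.h>

enum rpm_state { RPM_ACTIVE, RPM_SUSPENDING, RPM_SUSPENDED };

static enum rpm_state rpm_status = RPM_ACTIVE;
static int pm_only;            /* models blk_set_pm_only()/blk_clear_pm_only() */
static bool resume_requested;  /* set where pm_request_resume() would be called */

/* Models blk_pm_request_resume(): only requests a resume when the queue
 * is suspended or on its way to being suspended. */
static void pm_request_resume_if_needed(void)
{
	if (rpm_status == RPM_SUSPENDED || rpm_status == RPM_SUSPENDING)
		resume_requested = true;
}

/* Models the blk_queue_enter() slow path for a non-PM request. */
static void queue_enter(void)
{
	if (pm_only)                     /* blk_queue_pm_only() returned true */
		pm_request_resume_if_needed();
}

int main(void)
{
	/*
	 * Old ordering: blk_pre_runtime_suspend() raises pm_only (and
	 * freezes/unfreezes the queue) while rpm_status is still RPM_ACTIVE,
	 * so a racing submitter sees pm_only but does not request a resume;
	 * RPM_SUSPENDING is only set afterwards.
	 */
	pm_only = 1;
	queue_enter();                   /* racing blk_queue_enter() */
	rpm_status = RPM_SUSPENDING;     /* too late */
	printf("old ordering: resume requested? %s\n",
	       resume_requested ? "yes" : "no");

	/* Reset the model. */
	resume_requested = false;
	rpm_status = RPM_ACTIVE;
	pm_only = 0;

	/*
	 * Patched ordering: RPM_SUSPENDING is set first, so any submitter
	 * that sees pm_only also sees a suspending queue and requests a
	 * resume.
	 */
	rpm_status = RPM_SUSPENDING;
	pm_only = 1;
	queue_enter();                   /* racing blk_queue_enter() */
	printf("new ordering: resume requested? %s\n",
	       resume_requested ? "yes" : "no");

	return 0;
}

Compiled and run, the first line reports that no resume was requested and the
second that one was, matching the before/after behaviour described in the
commit message.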