Message-ID: <8d3a0f55-d861-ba93-0d25-b1172eaa8343@I-love.SAKURA.ne.jp>
Date: Fri, 10 Jun 2022 19:48:36 +0900
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: Alexandre Belloni, Alessandro Zummo
Cc: linux-rtc@vger.kernel.org
Subject: [PATCH v4] rtc: Replace flush_scheduled_work() with flush_work().

Since "struct rtc_device" is a per-device structure, I assume that
clear_uie() needs to wait for only the one work item (uie_task)
associated with that device. Therefore, wait for only that work item
using flush_work().

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
Please see commit c4f135d643823a86 ("workqueue: Wrap flush_workqueue()
using a macro") for background.

Changes in v4: use flush_work() instead of introducing a dedicated
workqueue.

 drivers/rtc/dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/rtc/dev.c b/drivers/rtc/dev.c
index 69325aeede1a..5cf90daf975c 100644
--- a/drivers/rtc/dev.c
+++ b/drivers/rtc/dev.c
@@ -96,7 +96,7 @@ static int clear_uie(struct rtc_device *rtc)
 	}
 	if (rtc->uie_task_active) {
 		spin_unlock_irq(&rtc->irq_lock);
-		flush_scheduled_work();
+		flush_work(&rtc->uie_task);
 		spin_lock_irq(&rtc->irq_lock);
 	}
 	rtc->uie_irq_active = 0;
-- 
2.18.4
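
For reference, flush_scheduled_work() blocks until every work item
queued on system_wq (via schedule_work()) has completed, whereas
flush_work() waits for the one specified item only. Below is a minimal
sketch of the same pattern using a hypothetical "foo" driver; the
struct and function names are illustrative, not taken from
drivers/rtc/dev.c. Note that the lock is dropped around the flush,
presumably because the work handler itself takes it.

/*
 * Minimal sketch of the pattern adopted by this patch, with a
 * hypothetical "foo" driver (names are illustrative only).
 */
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct foo_device {
	spinlock_t lock;
	struct work_struct task;	/* queued via schedule_work() */
	int task_active;
};

static void foo_cancel(struct foo_device *foo)
{
	spin_lock_irq(&foo->lock);
	if (foo->task_active) {
		/* Drop the lock: the work handler may need to take it. */
		spin_unlock_irq(&foo->lock);
		/*
		 * flush_scheduled_work() would block until *every* item
		 * on system_wq completes, including unrelated ones queued
		 * by other drivers, and risks deadlock if any of those
		 * items needs a lock the caller holds. flush_work()
		 * waits for this one item only.
		 */
		flush_work(&foo->task);
		spin_lock_irq(&foo->lock);
	}
	foo->task_active = 0;
	spin_unlock_irq(&foo->lock);
}

Besides being narrower in scope, waiting on the single item avoids the
system-wide flush that commit c4f135d643823a86 ("workqueue: Wrap
flush_workqueue() using a macro") discourages.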