From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933070AbbKRN0z (ORCPT ); Wed, 18 Nov 2015 08:26:55 -0500
Received: from mx2.suse.de ([195.135.220.15]:36404 "EHLO mx2.suse.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933035AbbKRN0w (ORCPT ); Wed, 18 Nov 2015 08:26:52 -0500
From: Petr Mladek
To: Andrew Morton, Oleg Nesterov, Tejun Heo, Ingo Molnar, Peter Zijlstra
Cc: Steven Rostedt, "Paul E. McKenney", Josh Triplett, Thomas Gleixner,
	Linus Torvalds, Jiri Kosina, Borislav Petkov, Michal Hocko,
	linux-mm@kvack.org, Vlastimil Babka, linux-api@vger.kernel.org,
	linux-kernel@vger.kernel.org, Petr Mladek
Subject: [PATCH v3 10/22] kthread: Allow to modify delayed kthread work
Date: Wed, 18 Nov 2015 14:25:15 +0100
Message-Id: <1447853127-3461-11-git-send-email-pmladek@suse.com>
X-Mailer: git-send-email 1.8.5.6
In-Reply-To: <1447853127-3461-1-git-send-email-pmladek@suse.com>
References: <1447853127-3461-1-git-send-email-pmladek@suse.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

There are situations when we need to modify the delay of a delayed
kthread work. For example, when the work depends on an event and the
initial delay means a timeout. Then we want to queue the work
immediately when the event happens.

This patch implements mod_delayed_kthread_work(), inspired by
workqueues. It tries to cancel the pending work and queue it again
with the given delay.

A very special case is when the work is being canceled at the same
time. The cancel_*kthread_work_sync() operation blocks queuing until
the running work finishes. Therefore we do nothing and let cancel()
win. This should not normally happen because the caller is supposed
to synchronize these operations in a reasonable way.

Signed-off-by: Petr Mladek
---
 include/linux/kthread.h |  4 ++++
 kernel/kthread.c        | 50 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index dd2a587a2bd7..f501dfeaa0e3 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -168,6 +168,10 @@ bool queue_delayed_kthread_work(struct kthread_worker *worker,
 				struct delayed_kthread_work *dwork,
 				unsigned long delay);
 
+bool mod_delayed_kthread_work(struct kthread_worker *worker,
+			      struct delayed_kthread_work *dwork,
+			      unsigned long delay);
+
 void flush_kthread_work(struct kthread_work *work);
 void flush_kthread_worker(struct kthread_worker *worker);
 
diff --git a/kernel/kthread.c b/kernel/kthread.c
index d12aa91cc44d..4c3b845c719e 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1003,6 +1003,56 @@ out:
 	return ret;
 }
 
+/**
+ * mod_delayed_kthread_work - modify delay of or queue a delayed kthread work
+ * @worker: kthread worker to use
+ * @dwork: delayed kthread work to queue
+ * @delay: number of jiffies to wait before queuing
+ *
+ * If @dwork is idle, equivalent to queue_delayed_kthread_work(). Otherwise,
+ * modify @dwork's timer so that it expires after @delay. If @delay is zero,
+ * @work is guaranteed to be queued immediately.
+ *
+ * Return: %false if @dwork was idle and queued. Return %true if @dwork was
+ * pending and its timer was modified.
+ *
+ * A special case is when cancel_*kthread_work_sync() is running in parallel.
+ * It blocks further queuing. We let the cancel() win and return %false.
+ * The caller is supposed to synchronize these operations in a reasonable way.
+ *
+ * This function is safe to call from any context including IRQ handler.
+ * See try_to_grab_pending_kthread_work() for details.
+ */
+bool mod_delayed_kthread_work(struct kthread_worker *worker,
+			      struct delayed_kthread_work *dwork,
+			      unsigned long delay)
+{
+	struct kthread_work *work = &dwork->work;
+	unsigned long flags;
+	int ret = 0;
+
+try_again:
+	spin_lock_irqsave(&worker->lock, flags);
+	WARN_ON_ONCE(work->worker && work->worker != worker);
+
+	if (work->canceling)
+		goto out;
+
+	ret = try_to_cancel_kthread_work(work, &worker->lock, &flags);
+	if (ret == -EAGAIN)
+		goto try_again;
+
+	if (work->canceling)
+		ret = 0;
+	else
+		__queue_delayed_kthread_work(worker, dwork, delay);
+
+out:
+	spin_unlock_irqrestore(&worker->lock, flags);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mod_delayed_kthread_work);
+
 static bool __cancel_kthread_work_sync(struct kthread_work *work)
 {
 	struct kthread_worker *worker;
-- 
1.8.5.6
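
[Editor's note, not part of the patch] A minimal usage sketch of the
event-or-timeout pattern the commit message describes: arm a delayed
kthread work as a timeout, then pull it in immediately when the awaited
event arrives. Only queue_delayed_kthread_work(), mod_delayed_kthread_work(),
struct kthread_worker and struct delayed_kthread_work come from this
series; the my_* names, the 5 second timeout, and the assumption that the
worker and work items were already set up and the worker thread started
(with init helpers from earlier patches, not shown) are hypothetical.

	/*
	 * Illustrative sketch only, not part of this patch.
	 * my_worker and my_dwork are assumed to be initialized and the
	 * worker thread started elsewhere (helpers from earlier patches
	 * in this series, not shown here).
	 */
	#include <linux/jiffies.h>
	#include <linux/kthread.h>

	static struct kthread_worker my_worker;
	static struct delayed_kthread_work my_dwork;

	/* Arm the fallback: run the work in 5 seconds unless the event arrives. */
	static void my_arm_timeout(void)
	{
		queue_delayed_kthread_work(&my_worker, &my_dwork, 5 * HZ);
	}

	/* The awaited event arrived: queue the same work immediately. */
	static void my_event_arrived(void)
	{
		/*
		 * Returns true when a pending timer was modified, false when
		 * the work was idle and got queued, or when a parallel
		 * cancel_*kthread_work_sync() won the race.
		 */
		mod_delayed_kthread_work(&my_worker, &my_dwork, 0);
	}

Per the kernel-doc above, a zero delay guarantees that the work is queued
immediately, which is what makes the "event happened" path prompt instead
of waiting for the original timeout to expire.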