From: James Simmons <jsimmons@infradead.org>
To: NeilBrown <neilb@suse.com>
Cc: Oleg Drokin <oleg.drokin@intel.com>,
Andreas Dilger <andreas.dilger@intel.com>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
lkml <linux-kernel@vger.kernel.org>,
lustre <lustre-devel@lists.lustre.org>
Subject: Re: [PATCH 18/19] staging: lustre: replace l_wait_event_exclusive_head() with wait_event_idle_exclusive
Date: Wed, 17 Jan 2018 15:36:28 +0000 (GMT)
Message-ID: <alpine.LFD.2.21.1801171536160.11282@casper.infradead.org> (raw)
In-Reply-To: <151538209398.23920.3167444012358212836.stgit@noble>
> This l_wait_event_exclusive_head() will wait indefinitely
> if the timeout is zero. If it does wait with a timeout
> and times out, the timeout for next time is set to zero.
>
> This can be mapped to a call to either
> wait_event_idle_exclusive()
> or
> wait_event_idle_exclusive_timeout()
> depending on the timeout setting.
>
> The current code arranges for LIFO queuing of waiters,
> but include/linux/wait.h doesn't support that yet.
> Until it does, fall back on FIFO with
> wait_event_idle_exclusive{,_timeout}().
Reviewed-by: James Simmons <jsimmons@infradead.org>
> Signed-off-by: NeilBrown <neilb@suse.com>
> ---
> drivers/staging/lustre/lustre/ptlrpc/service.c | 43 ++++++++++++++----------
> 1 file changed, 25 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/staging/lustre/lustre/ptlrpc/service.c b/drivers/staging/lustre/lustre/ptlrpc/service.c
> index 6e3403417434..29fdb54f16ca 100644
> --- a/drivers/staging/lustre/lustre/ptlrpc/service.c
> +++ b/drivers/staging/lustre/lustre/ptlrpc/service.c
> @@ -1897,15 +1897,6 @@ ptlrpc_check_rqbd_pool(struct ptlrpc_service_part *svcpt)
> }
> }
>
> -static int
> -ptlrpc_retry_rqbds(void *arg)
> -{
> - struct ptlrpc_service_part *svcpt = arg;
> -
> - svcpt->scp_rqbd_timeout = 0;
> - return -ETIMEDOUT;
> -}
> -
> static inline int
> ptlrpc_threads_enough(struct ptlrpc_service_part *svcpt)
> {
> @@ -1968,13 +1959,17 @@ ptlrpc_server_request_incoming(struct ptlrpc_service_part *svcpt)
> return !list_empty(&svcpt->scp_req_incoming);
> }
>
> +/* We prefer LIFO queuing, but the kernel doesn't provide that yet. */
> +#ifndef wait_event_idle_exclusive_lifo
> +#define wait_event_idle_exclusive_lifo wait_event_idle_exclusive
> +#define wait_event_idle_exclusive_lifo_timeout wait_event_idle_exclusive_timeout
> +#endif
> +
> static __attribute__((__noinline__)) int
> ptlrpc_wait_event(struct ptlrpc_service_part *svcpt,
> struct ptlrpc_thread *thread)
> {
> /* Don't exit while there are replies to be handled */
> - struct l_wait_info lwi = LWI_TIMEOUT(svcpt->scp_rqbd_timeout,
> - ptlrpc_retry_rqbds, svcpt);
>
> /* XXX: Add this back when libcfs watchdog is merged upstream
> lc_watchdog_disable(thread->t_watchdog);
> @@ -1982,13 +1977,25 @@ ptlrpc_wait_event(struct ptlrpc_service_part *svcpt,
>
> cond_resched();
>
> - l_wait_event_exclusive_head(svcpt->scp_waitq,
> - ptlrpc_thread_stopping(thread) ||
> - ptlrpc_server_request_incoming(svcpt) ||
> - ptlrpc_server_request_pending(svcpt,
> - false) ||
> - ptlrpc_rqbd_pending(svcpt) ||
> - ptlrpc_at_check(svcpt), &lwi);
> + if (svcpt->scp_rqbd_timeout == 0)
> + wait_event_idle_exclusive_lifo(
> + svcpt->scp_waitq,
> + ptlrpc_thread_stopping(thread) ||
> + ptlrpc_server_request_incoming(svcpt) ||
> + ptlrpc_server_request_pending(svcpt,
> + false) ||
> + ptlrpc_rqbd_pending(svcpt) ||
> + ptlrpc_at_check(svcpt));
> + else if (0 == wait_event_idle_exclusive_lifo_timeout(
> + svcpt->scp_waitq,
> + ptlrpc_thread_stopping(thread) ||
> + ptlrpc_server_request_incoming(svcpt) ||
> + ptlrpc_server_request_pending(svcpt,
> + false) ||
> + ptlrpc_rqbd_pending(svcpt) ||
> + ptlrpc_at_check(svcpt),
> + svcpt->scp_rqbd_timeout))
> + svcpt->scp_rqbd_timeout = 0;
>
> if (ptlrpc_thread_stopping(thread))
> return -EINTR;
>
>
>