From: Joseph Qi <joseph.qi@linux.alibaba.com>
To: Jakob Koschel <jakobkoschel@gmail.com>,
Mark Fasheh <mark@fasheh.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Joel Becker <jlbec@evilplan.org>,
Geert Uytterhoeven <geert@linux-m68k.org>,
Miguel Ojeda <ojeda@kernel.org>,
Masahiro Yamada <masahiroy@kernel.org>,
ocfs2-devel@oss.oracle.com, linux-kernel@vger.kernel.org,
Mike Rapoport <rppt@kernel.org>,
Brian Johannesmeyer <bjohannesmeyer@gmail.com>,
Cristiano Giuffrida <c.giuffrida@vu.nl>,
"Bos, H.J." <h.j.bos@vu.nl>
Subject: Re: [PATCH] ocfs2: replace usage of found with dedicated list iterator variable
Date: Fri, 25 Mar 2022 11:15:24 +0800
Message-ID: <9de48770-7dfa-1dd8-5eab-5fc0ac0499d5@linux.alibaba.com>
In-Reply-To: <20220324071650.61168-1-jakobkoschel@gmail.com>
On 3/24/22 3:16 PM, Jakob Koschel wrote:
> To allow moving the list iterator variable into the
> list_for_each_entry_*() macro in the future, use of the list iterator
> variable after the loop body should be avoided.
>
> To ensure the list iterator variable is *never* used after the loop,
> it was concluded [1] that a separate iterator variable should be used
> instead of a found boolean.
>
> This removes the need for a found variable: simply checking whether
> the dedicated variable was set determines whether the break/goto was
> hit.
>
> Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWXJ6w@mail.gmail.com/
> Signed-off-by: Jakob Koschel <jakobkoschel@gmail.com>
Looks good.
Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
> ---
> fs/ocfs2/dlm/dlmunlock.c | 21 ++++++++++-----------
> fs/ocfs2/quota_local.c | 10 +++++-----
> 2 files changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c
> index 61103b2d69fb..7318e4794ef9 100644
> --- a/fs/ocfs2/dlm/dlmunlock.c
> +++ b/fs/ocfs2/dlm/dlmunlock.c
> @@ -392,9 +392,9 @@ int dlm_unlock_lock_handler(struct o2net_msg *msg, u32 len, void *data,
> struct dlm_ctxt *dlm = data;
> struct dlm_unlock_lock *unlock = (struct dlm_unlock_lock *)msg->buf;
> struct dlm_lock_resource *res = NULL;
> - struct dlm_lock *lock = NULL;
> + struct dlm_lock *lock = NULL, *iter;
> enum dlm_status status = DLM_NORMAL;
> - int found = 0, i;
> + int i;
> struct dlm_lockstatus *lksb = NULL;
> int ignore;
> u32 flags;
> @@ -437,7 +437,6 @@ int dlm_unlock_lock_handler(struct o2net_msg *msg, u32 len, void *data,
> }
>
> queue=&res->granted;
> - found = 0;
> spin_lock(&res->spinlock);
> if (res->state & DLM_LOCK_RES_RECOVERING) {
> spin_unlock(&res->spinlock);
> @@ -461,21 +460,21 @@ int dlm_unlock_lock_handler(struct o2net_msg *msg, u32 len, void *data,
> }
>
> for (i=0; i<3; i++) {
> - list_for_each_entry(lock, queue, list) {
> - if (lock->ml.cookie == unlock->cookie &&
> - lock->ml.node == unlock->node_idx) {
> - dlm_lock_get(lock);
> - found = 1;
> + list_for_each_entry(iter, queue, list) {
> + if (iter->ml.cookie == unlock->cookie &&
> + iter->ml.node == unlock->node_idx) {
> + dlm_lock_get(iter);
> + lock = iter;
> break;
> }
> }
> - if (found)
> + if (lock)
> break;
> /* scan granted -> converting -> blocked queues */
> queue++;
> }
> spin_unlock(&res->spinlock);
> - if (!found) {
> + if (!lock) {
> status = DLM_IVLOCKID;
> goto not_found;
> }
> @@ -505,7 +504,7 @@ int dlm_unlock_lock_handler(struct o2net_msg *msg, u32 len, void *data,
> dlm_kick_thread(dlm, res);
>
> not_found:
> - if (!found)
> + if (!lock)
> mlog(ML_ERROR, "failed to find lock to unlock! "
> "cookie=%u:%llu\n",
> dlm_get_lock_cookie_node(be64_to_cpu(unlock->cookie)),
> diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
> index 0e4b16d4c037..38cc75bd3127 100644
> --- a/fs/ocfs2/quota_local.c
> +++ b/fs/ocfs2/quota_local.c
> @@ -923,19 +923,19 @@ static struct ocfs2_quota_chunk *ocfs2_find_free_entry(struct super_block *sb,
> {
> struct mem_dqinfo *info = sb_dqinfo(sb, type);
> struct ocfs2_mem_dqinfo *oinfo = info->dqi_priv;
> - struct ocfs2_quota_chunk *chunk;
> + struct ocfs2_quota_chunk *chunk = NULL, *iter;
> struct ocfs2_local_disk_chunk *dchunk;
> int found = 0, len;
>
> - list_for_each_entry(chunk, &oinfo->dqi_chunk, qc_chunk) {
> + list_for_each_entry(iter, &oinfo->dqi_chunk, qc_chunk) {
> dchunk = (struct ocfs2_local_disk_chunk *)
> - chunk->qc_headerbh->b_data;
> + iter->qc_headerbh->b_data;
> if (le32_to_cpu(dchunk->dqc_free) > 0) {
> - found = 1;
> + chunk = iter;
> break;
> }
> }
> - if (!found)
> + if (!chunk)
> return NULL;
>
> if (chunk->qc_num < oinfo->dqi_chunks - 1) {
>
> base-commit: f443e374ae131c168a065ea1748feac6b2e76613
2022-03-24 7:16 [PATCH] ocfs2: replace usage of found with dedicated list iterator variable Jakob Koschel
2022-03-25 3:15 ` Joseph Qi [this message]