From: Manish Rangankar <mrangankar@marvell.com>
To: "cgel.zte@gmail.com" <cgel.zte@gmail.com>
Cc: GR-QLogic-Storage-Upstream
<GR-QLogic-Storage-Upstream@marvell.com>,
"jejb@linux.ibm.com" <jejb@linux.ibm.com>,
"martin.petersen@oracle.com" <martin.petersen@oracle.com>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Minghao Chi (CGEL ZTE)" <chi.minghao@zte.com.cn>,
Zeal Robot <zealci@zte.com.cn>
Subject: RE: [EXT] [PATCH] qedi: Remove redundant 'flush_workqueue()' calls
Date: Mon, 7 Feb 2022 04:47:44 +0000 [thread overview]
Message-ID: <CO6PR18MB44195156F85CBE435DA1D5A3D82C9@CO6PR18MB4419.namprd18.prod.outlook.com> (raw)
In-Reply-To: <20220127013934.1184923-1-chi.minghao@zte.com.cn>
> -----Original Message-----
> From: cgel.zte@gmail.com <cgel.zte@gmail.com>
> Sent: Thursday, January 27, 2022 7:10 AM
> To: Nilesh Javali <njavali@marvell.com>
> Cc: Manish Rangankar <mrangankar@marvell.com>;
> GR-QLogic-Storage-Upstream <GR-QLogic-Storage-Upstream@marvell.com>;
> jejb@linux.ibm.com; martin.petersen@oracle.com;
> linux-scsi@vger.kernel.org; linux-kernel@vger.kernel.org;
> Minghao Chi (CGEL ZTE) <chi.minghao@zte.com.cn>;
> Zeal Robot <zealci@zte.com.cn>; CGEL ZTE <cgel.zte@gmail.com>
> Subject: [EXT] [PATCH] qedi: Remove redundant 'flush_workqueue()' calls
>
> External Email
>
> ----------------------------------------------------------------------
> From: "Minghao Chi (CGEL ZTE)" <chi.minghao@zte.com.cn>
>
> 'destroy_workqueue()' already drains the queue before destroying it, so there is
> no need to flush it explicitly.
>
> Remove the redundant 'flush_workqueue()' calls.
>
> Reported-by: Zeal Robot <zealci@zte.com.cn>
> Signed-off-by: Minghao Chi (CGEL ZTE) <chi.minghao@zte.com.cn>
> Signed-off-by: CGEL ZTE <cgel.zte@gmail.com>
> ---
> drivers/scsi/qedi/qedi_main.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
> index 832a856dd367..83ffba7f51da 100644
> --- a/drivers/scsi/qedi/qedi_main.c
> +++ b/drivers/scsi/qedi/qedi_main.c
> @@ -2418,13 +2418,11 @@ static void __qedi_remove(struct pci_dev *pdev, int mode)
> iscsi_host_remove(qedi->shost);
>
> if (qedi->tmf_thread) {
> - flush_workqueue(qedi->tmf_thread);
> destroy_workqueue(qedi->tmf_thread);
> qedi->tmf_thread = NULL;
> }
>
> if (qedi->offload_thread) {
> - flush_workqueue(qedi->offload_thread);
> destroy_workqueue(qedi->offload_thread);
> qedi->offload_thread = NULL;
> }
Thanks,
Acked-by: Manish Rangankar <mrangankar@marvell.com>
Thread overview: 3+ messages
2022-01-27 1:39 [PATCH] qedi: Remove redundant 'flush_workqueue()' calls cgel.zte
2022-02-07 4:47 ` Manish Rangankar [this message]
2022-02-08 4:52 ` Martin K. Petersen