From: Steve French <smfrench@gmail.com>
To: Long Li <longli@microsoft.com>
Cc: Steve French <sfrench@samba.org>,
	CIFS <linux-cifs@vger.kernel.org>,
	samba-technical <samba-technical@lists.samba.org>,
	LKML <linux-kernel@vger.kernel.org>,
	longli@linuxonhyperv.com, Stable <stable@vger.kernel.org>
Subject: Re: [PATCH 6/7] cifs: smbd: Only queue work for error recovery on memory registration
Date: Sun, 27 Oct 2019 14:59:16 -0500
Message-ID: <CAH2r5mto-Jbp1_yoLsFuiCWiFd-HA8TFVFB91CjDaBABq9PiuQ@mail.gmail.com>
In-Reply-To: <1571259116-102015-7-git-send-email-longli@linuxonhyperv.com>

I cleaned up the minor cosmetic nits spotted by checkpatch:

$ scripts/checkpatch.pl 0001-cifs-smbd-Only-queue-work-for-error-recovery-on-memo.patch
WARNING: Possible unwrapped commit description (prefer a maximum 75
chars per line)
#7:
It's not necessary to queue invalidated memory registration to work queue, as

WARNING: Block comments use a trailing */ on a separate line
#58: FILE: fs/cifs/smbdirect.c:2614:
+ * current I/O */

total: 0 errors, 2 warnings, 38 lines checked
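
For reference, the kernel coding style
(Documentation/process/coding-style.rst) wants the trailing */ of a
multi-line block comment on its own line, so the cleaned-up version of
the flagged comment presumably ends up as:

	/*
	 * Schedule the work to do MR recovery for future I/Os
	 * MR recovery is slow and we don't want it to block the
	 * current I/O
	 */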

On Wed, Oct 16, 2019 at 4:11 PM longli--- via samba-technical
<samba-technical@lists.samba.org> wrote:
>
> From: Long Li <longli@microsoft.com>
>
> It's not necessary to queue invalidated memory registration to work queue, as
> all we need to do is to unmap the SG and make it usable again. This can save
> CPU cycles in normal data paths as memory registration errors are rare and
> normally only happen during reconnection.
>
> Signed-off-by: Long Li <longli@microsoft.com>
> Cc: stable@vger.kernel.org
> ---
>  fs/cifs/smbdirect.c | 26 +++++++++++++++-----------
>  1 file changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c
> index cf001f10d555..c00629a41d81 100644
> --- a/fs/cifs/smbdirect.c
> +++ b/fs/cifs/smbdirect.c
> @@ -2269,12 +2269,7 @@ static void smbd_mr_recovery_work(struct work_struct *work)
>         int rc;
>
>         list_for_each_entry(smbdirect_mr, &info->mr_list, list) {
> -               if (smbdirect_mr->state == MR_INVALIDATED)
> -                       ib_dma_unmap_sg(
> -                               info->id->device, smbdirect_mr->sgl,
> -                               smbdirect_mr->sgl_count,
> -                               smbdirect_mr->dir);
> -               else if (smbdirect_mr->state == MR_ERROR) {
> +               if (smbdirect_mr->state == MR_ERROR) {
>
>                         /* recover this MR entry */
>                         rc = ib_dereg_mr(smbdirect_mr->mr);
> @@ -2602,11 +2597,20 @@ int smbd_deregister_mr(struct smbd_mr *smbdirect_mr)
>                  */
>                 smbdirect_mr->state = MR_INVALIDATED;
>
> -       /*
> -        * Schedule the work to do MR recovery for future I/Os
> -        * MR recovery is slow and we don't want it to block the current I/O
> -        */
> -       queue_work(info->workqueue, &info->mr_recovery_work);
> +       if (smbdirect_mr->state == MR_INVALIDATED) {
> +               ib_dma_unmap_sg(
> +                       info->id->device, smbdirect_mr->sgl,
> +                       smbdirect_mr->sgl_count,
> +                       smbdirect_mr->dir);
> +               smbdirect_mr->state = MR_READY;
> +               if (atomic_inc_return(&info->mr_ready_count) == 1)
> +                       wake_up_interruptible(&info->wait_mr);
> +       } else
> +               /*
> +                * Schedule the work to do MR recovery for future I/Os
> +                * MR recovery is slow and we don't want it to block the
> +                * current I/O */
> +               queue_work(info->workqueue, &info->mr_recovery_work);
>
>  done:
>         if (atomic_dec_and_test(&info->mr_used_count))
> --
> 2.17.1
>
>
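
For context for anyone reading along: the wake_up_interruptible() in
the new fast path pairs with the MR allocation side in the same file,
which sleeps until a registration slot is ready. Roughly (a condensed
sketch from memory, not the verbatim upstream code):

	/* consumer side: wait until at least one MR is MR_READY */
	rc = wait_event_interruptible(info->wait_mr,
		atomic_read(&info->mr_ready_count) ||
		info->transport_status != SMBD_CONNECTED);
	if (rc || info->transport_status != SMBD_CONNECTED)
		return NULL;
	/*
	 * then walk info->mr_list for an MR_READY entry, mark it
	 * MR_REGISTERED and atomic_dec(&info->mr_ready_count)
	 */

If I'm reading the surrounding code right, waking only on the 0 -> 1
transition (the atomic_inc_return() == 1 check) follows the same
convention the existing smbd_mr_recovery_work() path already uses.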


-- 
Thanks,

Steve

Thread overview: 10+ messages
2019-10-16 20:51 [PATCH 0/7] cifs: smbd: Improve reliability on transport reconnect longli
2019-10-16 20:51 ` [PATCH 1/7] cifs: Don't display RDMA transport on reconnect longli
2019-10-16 20:51 ` [PATCH 2/7] cifs: smbd: Invalidate and deregister memory registration on re-send longli
2019-10-16 20:51 ` [PATCH 3/7] cifs: smbd: Return -EINVAL when the number of iovs exceeds SMBDIRECT_MAX_SGE longli
2019-10-16 20:51 ` [PATCH 4/7] cifs: smbd: Add messages on RDMA session destroy and reconnection longli
2019-10-16 20:51 ` [PATCH 5/7] cifs: smbd: Return -ECONNABORTED when transport is not in connected state longli
2019-10-16 20:51 ` [PATCH 6/7] cifs: smbd: Only queue work for error recovery on memory registration longli
2019-10-27 19:59   ` Steve French [this message]
2019-10-16 20:51 ` [PATCH 7/7] cifs: smbd: Return -EAGAIN when transport is reconnecting longli
2019-10-27 20:02   ` Steve French
