From: Pavel Shilovskiy <pshilov@microsoft.com>
To: David Wysochanski <dwysocha@redhat.com>,
	Ronnie Sahlberg <lsahlber@redhat.com>
Cc: linux-cifs <linux-cifs@vger.kernel.org>,
	Frank Sorenson <sorenson@redhat.com>
Subject: RE: list_del corruption while iterating retry_list in cifs_reconnect still seen on 5.4-rc3
Date: Fri, 18 Oct 2019 20:59:40 +0000
Message-ID: <DM5PR21MB0185857761946DBE12AEB3ADB66C0@DM5PR21MB0185.namprd21.prod.outlook.com>
In-Reply-To: <CALF+zOkmFMtxsnrUR-anXOdMzUFxtAWG+VYLAQuq3DJuH=eDMw@mail.gmail.com>

Thanks for the good news that the patch is stable in your workload!

The extra flag may not be necessary; we could rely on the MID state instead, but then we would actually need to handle two states: MID_RETRY_NEEDED and MID_SHUTDOWN - see clean_demultiplex_info(), which does the same things with the mid as cifs_reconnect(). Please add ref counting to both functions, since both of them can race with system call threads.
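
For clean_demultiplex_info() it would be something along these lines - just a sketch from memory, so the dispose_list name and the surrounding code may not match the tree exactly:

        spin_lock(&GlobalMid_Lock);
        list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
                mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
                kref_get(&mid_entry->refcount);   /* hold the mid across the callback */
                mid_entry->mid_state = MID_SHUTDOWN;
                list_move(&mid_entry->qhead, &dispose_list);
        }
        spin_unlock(&GlobalMid_Lock);

        list_for_each_safe(tmp, tmp2, &dispose_list) {
                mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
                list_del_init(&mid_entry->qhead);
                mid_entry->callback(mid_entry);
                cifs_mid_q_entry_release(mid_entry);   /* drop the extra reference */
        }

The dequeue paths would then have to skip list_del_init() when the state is either MID_RETRY_NEEDED or MID_SHUTDOWN.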

I also think that we need to keep the patch as small as possible to avoid hidden regressions. That's why I don't think we should change the if() to a WARN_ON() in the same patch - let's keep that as a separate patch without the stable tag.

Another general thought is that folding extra logic into the MID state may complicate the code. Having a flag like MID_QUEUED would reflect the meaning more directly: if the mid is queued then de-queue it (i.e. remove it from the list), else skip this step. This can be changed later if you think it would complicate the small stable patch.
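
To illustrate what I mean (purely hypothetical - MID_QUEUED does not exist in the tree today):

/* Flags */
#define   MID_QUEUED             4 /* Mid is linked on a mid queue */

        /* in the dequeue paths, under GlobalMid_Lock: */
        if (mid->mid_flags & MID_QUEUED) {
                mid->mid_flags &= ~MID_QUEUED;
                list_del_init(&mid->qhead);
        }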

--
Best regards,
Pavel Shilovsky

-----Original Message-----
From: David Wysochanski <dwysocha@redhat.com> 
Sent: Friday, October 18, 2019 3:12 AM
To: Ronnie Sahlberg <lsahlber@redhat.com>
Cc: Pavel Shilovskiy <pshilov@microsoft.com>; linux-cifs <linux-cifs@vger.kernel.org>; Frank Sorenson <sorenson@redhat.com>
Subject: Re: list_del corruption while iterating retry_list in cifs_reconnect still seen on 5.4-rc3

On Fri, Oct 18, 2019 at 5:27 AM Ronnie Sahlberg <lsahlber@redhat.com> wrote:
>
>
> ----- Original Message -----
> > From: "David Wysochanski" <dwysocha@redhat.com>
> > To: "Ronnie Sahlberg" <lsahlber@redhat.com>
> > Cc: "Pavel Shilovskiy" <pshilov@microsoft.com>, "linux-cifs" <linux-cifs@vger.kernel.org>, "Frank Sorenson"
> > <sorenson@redhat.com>
> > Sent: Friday, 18 October, 2019 6:16:45 PM
> > Subject: Re: list_del corruption while iterating retry_list in 
> > cifs_reconnect still seen on 5.4-rc3
> >
> > On Thu, Oct 17, 2019 at 6:53 PM Ronnie Sahlberg <lsahlber@redhat.com> wrote:
> > >
> > >
> > >
> > >
>
> Good comments.
> New version of the patch, please test and see comments inline below
>
> diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
> index bdea4b3e8005..8a78358693a5 100644
> --- a/fs/cifs/connect.c
> +++ b/fs/cifs/connect.c
> @@ -564,8 +564,13 @@ cifs_reconnect(struct TCP_Server_Info *server)
>         spin_lock(&GlobalMid_Lock);
>         list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
>                 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> -               if (mid_entry->mid_state == MID_REQUEST_SUBMITTED)
> -                       mid_entry->mid_state = MID_RETRY_NEEDED;
> +               kref_get(&mid_entry->refcount);
> +               WARN_ON(mid_entry->mid_state != MID_REQUEST_SUBMITTED);
> +               /*
> +                * Set MID_RETRY_NEEDED to prevent the demultiplex loop from
> +                * removing us, or our neighbours, from the linked list.
> +                */
> +               mid_entry->mid_state = MID_RETRY_NEEDED;
>                 list_move(&mid_entry->qhead, &retry_list);
>         }
>         spin_unlock(&GlobalMid_Lock);
> @@ -575,7 +580,9 @@ cifs_reconnect(struct TCP_Server_Info *server)
>         list_for_each_safe(tmp, tmp2, &retry_list) {
>                 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
>                 list_del_init(&mid_entry->qhead);
> +
>                 mid_entry->callback(mid_entry);
> +               cifs_mid_q_entry_release(mid_entry);
>         }
>
>         if (cifs_rdma_enabled(server)) {
> @@ -895,7 +902,7 @@ dequeue_mid(struct mid_q_entry *mid, bool malformed)
>         if (mid->mid_flags & MID_DELETED)
>                 printk_once(KERN_WARNING
>                             "trying to dequeue a deleted mid\n");
> -       else
> +       else if (mid->mid_state != MID_RETRY_NEEDED)

I'm just using an 'if' here, not an 'else if'. Do you see any issue with that?

Actually this section needed a little reorganizing due to the setting of the mid_state. So I have this now for this hunk:

        mid->when_received = jiffies;
 #endif
        spin_lock(&GlobalMid_Lock);
-       if (!malformed)
-               mid->mid_state = MID_RESPONSE_RECEIVED;
-       else
-               mid->mid_state = MID_RESPONSE_MALFORMED;
        /*
         * Trying to handle/dequeue a mid after the send_recv()
         * function has finished processing it is a bug.
@@ -895,8 +893,14 @@ static inline int reconn_setup_dfs_targets(struct cifs_sb_info *cifs_sb,
        if (mid->mid_flags & MID_DELETED)
                printk_once(KERN_WARNING
                            "trying to dequeue a deleted mid\n");
-       else
+       if (mid->mid_state != MID_RETRY_NEEDED)
                list_del_init(&mid->qhead);
+
+       if (!malformed)
+               mid->mid_state = MID_RESPONSE_RECEIVED;
+       else
+               mid->mid_state = MID_RESPONSE_MALFORMED;
+
        spin_unlock(&GlobalMid_Lock);
 }
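
For reference, after this hunk the whole function should look roughly like this (reconstructed from the diff context; the CONFIG_CIFS_STATS2 guard around when_received is my assumption):

static void
dequeue_mid(struct mid_q_entry *mid, bool malformed)
{
#ifdef CONFIG_CIFS_STATS2
        mid->when_received = jiffies;
#endif
        spin_lock(&GlobalMid_Lock);
        /*
         * Trying to handle/dequeue a mid after the send_recv()
         * function has finished processing it is a bug.
         */
        if (mid->mid_flags & MID_DELETED)
                printk_once(KERN_WARNING
                            "trying to dequeue a deleted mid\n");
        /*
         * A mid in MID_RETRY_NEEDED state has been moved to cifs_reconnect()'s
         * retry_list, so it must not be unlinked here.
         */
        if (mid->mid_state != MID_RETRY_NEEDED)
                list_del_init(&mid->qhead);

        if (!malformed)
                mid->mid_state = MID_RESPONSE_RECEIVED;
        else
                mid->mid_state = MID_RESPONSE_MALFORMED;

        spin_unlock(&GlobalMid_Lock);
}

The state assignments have to move below the MID_RETRY_NEEDED check, otherwise setting MID_RESPONSE_RECEIVED/MALFORMED first would clobber MID_RETRY_NEEDED and the check could never trigger.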



>                 list_del_init(&mid->qhead);
>         spin_unlock(&GlobalMid_Lock);
>  }
> diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
> index 308ad0f495e1..17a430b58673 100644
> --- a/fs/cifs/transport.c
> +++ b/fs/cifs/transport.c
> @@ -173,7 +173,8 @@ void
>  cifs_delete_mid(struct mid_q_entry *mid)
>  {
>         spin_lock(&GlobalMid_Lock);
> -       list_del_init(&mid->qhead);
> +       if (mid->mid_state != MID_RETRY_NEEDED)
> +               list_del_init(&mid->qhead);
>         mid->mid_flags |= MID_DELETED;
>         spin_unlock(&GlobalMid_Lock);
>
> @@ -872,7 +873,8 @@ cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server)
>                 rc = -EHOSTDOWN;
>                 break;
>         default:
> -               list_del_init(&mid->qhead);
> +               if (mid->mid_state != MID_RETRY_NEEDED)
> +                       list_del_init(&mid->qhead);
>                 cifs_server_dbg(VFS, "%s: invalid mid state mid=%llu state=%d\n",
>                          __func__, mid->mid, mid->mid_state);
>                 rc = -EIO;
>
>
>
>
>
>
> > >
> > > ----- Original Message -----
> > > > From: "Pavel Shilovskiy" <pshilov@microsoft.com>
> > > > To: "Ronnie Sahlberg" <lsahlber@redhat.com>, "David Wysochanski"
> > > > <dwysocha@redhat.com>
> > > > Cc: "linux-cifs" <linux-cifs@vger.kernel.org>, "Frank Sorenson"
> > > > <sorenson@redhat.com>
> > > > Sent: Friday, 18 October, 2019 8:02:23 AM
> > > > Subject: RE: list_del corruption while iterating retry_list in 
> > > > cifs_reconnect still seen on 5.4-rc3
> > > >
> > > > Ok, looking at cifs_delete_mid():
> > > >
> > > >  172 void
> > > >  173 cifs_delete_mid(struct mid_q_entry *mid)
> > > >  174 {
> > > >  175 >-------spin_lock(&GlobalMid_Lock);
> > > >  176 >-------list_del_init(&mid->qhead);
> > > >  177 >-------mid->mid_flags |= MID_DELETED;
> > > >  178 >-------spin_unlock(&GlobalMid_Lock);
> > > >  179
> > > >  180 >-------DeleteMidQEntry(mid);
> > > >  181 }
> > > >
> > > > So, regardless of us taking references on the mid itself or not, 
> > > > the mid might be removed from the list. I also don't think 
> > > > taking GlobalMid_Lock would help much because the next mid in 
> > > > the list might be deleted from the list by another process while 
> > > > cifs_reconnect is calling callback for the current mid.
> > > >
> >
> > Yes the above is consistent with my tracing the crash after the first
> > initial refcount patch was applied.
> > After the simple refcount patch, when iterating the retry_loop, it was
> > processing an orphaned list with a single item over and over and
> > eventually ran itself down to refcount == 0 and crashed like before.
> >
> >
> > > > Instead, shouldn't we try marking the mid as being reconnected? Once we
> > > > took
> > > > a reference, let's mark mid->mid_flags with a new flag MID_RECONNECT
> > > > under
> > > > the GlobalMid_Lock. Then modify cifs_delete_mid() to check for this flag
> > > > and
> > > > do not remove the mid from the list if the flag exists.
> > >
> > > That could work. But then we should also use that flag to suppress the
> > > other places where we do a list_del*, so something like this ?
> > >
> > > diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
> > > index 50dfd9049370..b324fff33e53 100644
> > > --- a/fs/cifs/cifsglob.h
> > > +++ b/fs/cifs/cifsglob.h
> > > @@ -1702,6 +1702,7 @@ static inline bool is_retryable_error(int error)
> > >  /* Flags */
> > >  #define   MID_WAIT_CANCELLED    1 /* Cancelled while waiting for response
> > >  */
> > >  #define   MID_DELETED            2 /* Mid has been dequeued/deleted */
> > > +#define   MID_RECONNECT          4 /* Mid is being used during reconnect
> > > */
> > >
> > Do we need this extra flag?  Can just use  mid_state ==
> > MID_RETRY_NEEDED in the necessary places?
>
> That is a good point.
> It saves us a redundant flag.
>
> >
> >
> > >  /* Types of response buffer returned from SendReceive2 */
> > >  #define   CIFS_NO_BUFFER        0    /* Response buffer not returned */
> > > diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
> > > index bdea4b3e8005..b142bd2a3ef5 100644
> > > --- a/fs/cifs/connect.c
> > > +++ b/fs/cifs/connect.c
> > > @@ -564,6 +564,8 @@ cifs_reconnect(struct TCP_Server_Info *server)
> > >         spin_lock(&GlobalMid_Lock);
> > >         list_for_each_safe(tmp, tmp2, &server->pending_mid_q) {
> > >                 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> > > +               kref_get(&mid_entry->refcount);
> > > +               mid_entry->mid_flags |= MID_RECONNECT;
> > >                 if (mid_entry->mid_state == MID_REQUEST_SUBMITTED)
> > >                         mid_entry->mid_state = MID_RETRY_NEEDED;
> >
> > What happens if the state is wrong going in there, and it is not set
> > to MID_RETRY_NEEDED, but yet we queue up the retry_list and run it
> > below?
> > Should the above 'if' check for MID_REQUEST_SUBMITTED be a WARN_ON
> > followed by unconditionally setting the state?
> >
> > WARN_ON(mid_entry->mid_state != MID_REQUEST_SUBMITTED);
> > /* Unconditionally set MID_RETRY_NEEDED */
> > mid_etnry->mid_state = MID_RETRY_NEEDED;
>
> Yepp.
>
> >
> >
> > >                 list_move(&mid_entry->qhead, &retry_list);
> > > @@ -575,7 +577,9 @@ cifs_reconnect(struct TCP_Server_Info *server)
> > >         list_for_each_safe(tmp, tmp2, &retry_list) {
> > >                 mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
> > >                 list_del_init(&mid_entry->qhead);
> > > +
> > >                 mid_entry->callback(mid_entry);
> > > +               cifs_mid_q_entry_release(mid_entry);
> > >         }
> > >
> > >         if (cifs_rdma_enabled(server)) {
> > > @@ -895,7 +899,7 @@ dequeue_mid(struct mid_q_entry *mid, bool malformed)
> > >         if (mid->mid_flags & MID_DELETED)
> > >                 printk_once(KERN_WARNING
> > >                             "trying to dequeue a deleted mid\n");
> > > -       else
> > > +       else if (!(mid->mid_flags & MID_RECONNECT))
> >
> > Instead of the above,
> >
> >  -       else
> > +          else if (mid_entry->mid_state == MID_RETRY_NEEDED)
>
> Yes, but mid_state != MID_RETRY_NEEDED
>

Yeah, good catch on that - somehow I reversed the logic, and when I
tested the former it blew up spectacularly almost instantaneously!
Doh!

So far the latest patch has been running for about 25 minutes, which
I think is the longest this test has survived.
I need a bit more runtime to be sure it's good, but if it keeps going
I'll plan to create a patch header and submit to the list by end of today.
Thanks Ronnie and Pavel for the help tracking this down.

Thread overview: 31+ messages
2019-10-16 19:27 list_del corruption while iterating retry_list in cifs_reconnect still seen on 5.4-rc3 David Wysochanski
2019-10-17  0:17 ` Ronnie Sahlberg
2019-10-17  9:05   ` Ronnie Sahlberg
2019-10-17 11:42     ` David Wysochanski
2019-10-17 14:08       ` Ronnie Sahlberg
2019-10-17 15:29         ` David Wysochanski
2019-10-17 18:29           ` Pavel Shilovskiy
2019-10-17 19:23             ` David Wysochanski
2019-10-17 19:58               ` Pavel Shilovskiy
2019-10-17 20:34                 ` David Wysochanski
2019-10-17 21:44                   ` Ronnie Sahlberg
2019-10-17 22:02                     ` Pavel Shilovskiy
2019-10-17 22:53                       ` Ronnie Sahlberg
2019-10-17 23:20                         ` Pavel Shilovskiy
2019-10-17 23:41                           ` Ronnie Sahlberg
2019-10-18  8:16                         ` David Wysochanski
2019-10-18  9:27                           ` Ronnie Sahlberg
2019-10-18 10:12                             ` David Wysochanski
2019-10-18 20:59                               ` Pavel Shilovskiy [this message]
2019-10-18 21:21                                 ` David Wysochanski
2019-10-18 21:44                                   ` David Wysochanski
2019-10-18 22:45                                     ` Pavel Shilovskiy
2019-10-19 11:09                                       ` David Wysochanski
2019-10-21 21:54                                         ` Pavel Shilovsky
2019-10-22 18:39                                           ` David Wysochanski
2019-10-22 21:20                                             ` ronnie sahlberg
2019-10-22 21:25                                               ` Pavel Shilovsky
2019-10-22 21:32                                                 ` ronnie sahlberg
2019-10-19 23:35                                 ` [RFC PATCH v2] cifs: Fix list_del corruption of retry_list in cifs_reconnect Dave Wysochanski
2019-10-21 22:34                                   ` Pavel Shilovsky
2019-10-19  9:44                           ` list_del corruption while iterating retry_list in cifs_reconnect still seen on 5.4-rc3 Ronnie Sahlberg
