* Easily reproducible deadlock on 5.4-rc4 with cifsInodeInfo.lock_sem semaphore after network disruption or smb restart
From: David Wysochanski @ 2019-10-21 17:22 UTC
  To: linux-cifs

[-- Attachment #1: Type: text/plain, Size: 10296 bytes --]

Hi all,

We recently found another problem when running a simple test case
where multiple processes open/read/close a single file, and a network
disruption or smb restart occurs.

After just a few minutes of restarts, a few of the processes error
out, but the rest remain.  If you stop the disruption, the share
becomes accessible again, but the original processes stay blocked
indefinitely waiting on lock_sem.

Once this occurs, you'll see two threads deadlocked as shown below.  Basically:
thread1: down_read(lock_sem) inside cifs_strict_readv(); does not
block on the sem, but then blocks temporarily on the network read
thread2: down_write(lock_sem) inside _cifsFileInfo_put; blocks because
thread1 holds the sem for read
thread1: down_read(lock_sem) a second time, inside cifs_reopen_file
(after the reconnect); blocks because thread2's writer is now queued

At this point there is a deadlock: each thread is waiting for the
other to release lock_sem.
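
To make the ordering concrete, here is a minimal userspace sketch of
the same pattern (my own illustration, not from the kernel sources).
It uses glibc's writer-preferring rwlock kind to approximate kernel
rwsem fairness, where a reader that re-acquires behind a queued
writer blocks; the program intentionally deadlocks:

/* lock_sem_demo.c - userspace analogue of the lock_sem deadlock.
 * Build: gcc -pthread lock_sem_demo.c -o lock_sem_demo
 * WARNING: this program hangs by design.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t lock_sem;

static void *reader(void *arg)          /* plays cifs_strict_readv */
{
	(void)arg;
	pthread_rwlock_rdlock(&lock_sem);   /* T+0: first down_read */
	sleep(1);                           /* "network read" in flight */
	printf("reader: re-acquiring (cifs_reopen_file)\n");
	pthread_rwlock_rdlock(&lock_sem);   /* T+2: blocks behind writer */
	printf("reader: never reached\n");
	return NULL;
}

static void *closer(void *arg)          /* plays _cifsFileInfo_put */
{
	(void)arg;
	usleep(100 * 1000);                 /* let the reader go first */
	printf("closer: down_write\n");
	pthread_rwlock_wrlock(&lock_sem);   /* T+1: blocks on the reader */
	printf("closer: never reached\n");
	return NULL;
}

int main(void)
{
	pthread_rwlockattr_t attr;
	pthread_t t1, t2;

	pthread_rwlockattr_init(&attr);
	/* writer-preferring kind: new readers queue behind a waiting
	 * writer, similar to a kernel rwsem */
	pthread_rwlockattr_setkind_np(&attr,
			PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
	pthread_rwlock_init(&lock_sem, &attr);

	pthread_create(&t1, NULL, reader, NULL);
	pthread_create(&t2, NULL, closer, NULL);
	pthread_join(t1, NULL);             /* never returns */
	pthread_join(t2, NULL);
	return 0;
}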

The root of the problem seems to be that we hold the read sem across
a network disruption, and there is no check when the connection comes
back before taking down_read a second time.
Inside cifs_strict_readv there's a comment that explains why we need
to hold the sem over the read: so no one modifies the lock list with
a brlock that prevents reading.
However, in this case at least, we're only opening files for reading,
and inside _cifsFileInfo_put we need to down_write to delete the
outstanding lock records.
So we're not really hitting the brlock-conflict case, but we are
still modifying the list.

One approach to avoid the deadlock is for the reading process to drop
lock_sem before re-acquiring it inside cifs_reopen_file.
We would drop the sem just before the call, and then inside we'd need
to re-check for a conflicting brlock and possibly error out somehow.
But that is probably not the preferred approach.

Instead, can the reader down_read only one time?  cifs_relock_file is
called from only one place (cifs_reopen_file), and only when we're
reconnecting, but cifs_reopen_file is called from many places.  I'm
not sure if there is something already in existence we can use that
says we don't need to re-acquire the sem, or if we need to add a flag
or something?  A rough sketch of the flag idea follows.





Here is the live crash session:

crash> ps -m | grep UN
[0 01:54:10.166] [UN]  PID: 13271  TASK: ffff93b8a7349e40  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.167] [UN]  PID: 13273  TASK: ffff93b8b3310000  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.169] [UN]  PID: 13269  TASK: ffff93b8a734dac0  CPU: 0  COMMAND: "python3-openloo"
[0 01:54:10.208] [UN]  PID: 13272  TASK: ffff93b8a7348000  CPU: 7  COMMAND: "python3-openloo"
[0 01:54:10.216] [UN]  PID: 13265  TASK: ffff93b8a7358000  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.172] [UN]  PID: 13263  TASK: ffff93b8b3313c80  CPU: 2  COMMAND: "python3-openloo"
[0 01:54:10.229] [UN]  PID: 13267  TASK: ffff93b8a7359e40  CPU: 5  COMMAND: "python3-openloo"
[0 01:54:10.234] [UN]  PID: 13277  TASK: ffff93b8a7dddac0  CPU: 3  COMMAND: "python3-openloo"
[0 01:54:10.235] [UN]  PID: 13270  TASK: ffff93b8a734bc80  CPU: 1  COMMAND: "python3-openloo"
[0 01:54:10.244] [UN]  PID: 13276  TASK: ffff93b8a7dd8000  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.260] [UN]  PID: 13264  TASK: ffff93b8b3315ac0  CPU: 6  COMMAND: "python3-openloo"
[0 01:54:10.257] [UN]  PID: 13274  TASK: ffff93b8a7ddbc80  CPU: 1  COMMAND: "python3-openloo"
[0 01:54:31.262] [UN]  PID: 13279  TASK: ffff93b8a7d90000  CPU: 1  COMMAND: "python3-openloo" <--- block same instant
[0 01:54:31.265] [UN]  PID: 13275  TASK: ffff93b8a7dd9e40  CPU: 4  COMMAND: "python3-openloo" <--- block same instant
crash> bt 13279
PID: 13279  TASK: ffff93b8a7d90000  CPU: 1   COMMAND: "python3-openloo"
 #0 [ffffaae1c0a77c20] __schedule at ffffffffa19d4239
 #1 [ffffaae1c0a77cb0] schedule at ffffffffa19d46b9
 #2 [ffffaae1c0a77cc8] rwsem_down_write_slowpath at ffffffffa1131ada
<------------- T+1: pid 13279 down_write(... lock_sem); blocks due to pid 13275
 #3 [ffffaae1c0a77d98] _cifsFileInfo_put at ffffffffc0759818 [cifs]
 #4 [ffffaae1c0a77e78] cifs_close at ffffffffc0759b4f [cifs]
 #5 [ffffaae1c0a77e88] __fput at ffffffffa12f6e51
 #6 [ffffaae1c0a77ed0] task_work_run at ffffffffa10fc827
 #7 [ffffaae1c0a77f08] exit_to_usermode_loop at ffffffffa1003f83
 #8 [ffffaae1c0a77f38] do_syscall_64 at ffffffffa10043c2
 #9 [ffffaae1c0a77f50] entry_SYSCALL_64_after_hwframe at ffffffffa1a0008c
    RIP: 00007fc6caca1778  RSP: 00007ffed0c43de8  RFLAGS: 00000246
    RAX: 0000000000000000  RBX: 00007fc6bd5a5f30  RCX: 00007fc6caca1778
    RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000000000003
    RBP: 0000558ebb9ca0d0   R8: 00007fc6cab941a8   R9: 0000000000000000
    R10: 0000558ebb9ca0d0  R11: 0000000000000246  R12: 0000000000000000
    R13: 0000000000000000  R14: 00007fc6cab4df0c  R15: 00007ffed0c43f10
    ORIG_RAX: 0000000000000003  CS: 0033  SS: 002b
crash> bt 13275
PID: 13275  TASK: ffff93b8a7dd9e40  CPU: 4   COMMAND: "python3-openloo"
 #0 [ffffaae1c0a57a00] __schedule at ffffffffa19d4239
 #1 [ffffaae1c0a57a90] schedule at ffffffffa19d46b9
 #2 [ffffaae1c0a57aa8] rwsem_down_read_slowpath at ffffffffa19d72b4
 #3 [ffffaae1c0a57b60] cifs_reopen_file at ffffffffc0755352 [cifs]
<------------- T+2: pid 13275 down_read(... lock_sem); blocks due to pid 13279
 #4 [ffffaae1c0a57c08] cifs_readpage_worker at ffffffffc07575f0 [cifs]
 #5 [ffffaae1c0a57cd0] cifs_readpage at ffffffffc0758096 [cifs]
 #6 [ffffaae1c0a57d08] generic_file_read_iter at ffffffffa123acd2
 #7 [ffffaae1c0a57e10] cifs_strict_readv at ffffffffc07606a5 [cifs]
<------------- T+0: pid 13275 down_read(... lock_sem); does not block
 #8 [ffffaae1c0a57e40] new_sync_read at ffffffffa12f336a
 #9 [ffffaae1c0a57ec8] vfs_read at ffffffffa12f5fad
#10 [ffffaae1c0a57f00] ksys_read at ffffffffa12f62df
#11 [ffffaae1c0a57f38] do_syscall_64 at ffffffffa10042bb
#12 [ffffaae1c0a57f50] entry_SYSCALL_64_after_hwframe at ffffffffa1a0008c
    RIP: 00007f6ed0b7c055  RSP: 00007ffeea637e68  RFLAGS: 00000246
    RAX: ffffffffffffffda  RBX: 00007f6ed05cd600  RCX: 00007f6ed0b7c055
    RDX: 000000000000001e  RSI: 00007f6ec3404c10  RDI: 0000000000000003
    RBP: 000000000000001e   R8: 00007f6ed0a6f1a8   R9: 0000000000000000
    R10: 00007f6ed090faf0  R11: 0000000000000246  R12: 00007f6ec3480f40
    R13: 0000000000000003  R14: 00007f6ec3404c10  R15: 000055c449ea60d0
    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b


Here is my tracing from the attached debug patch, which shows the
conditions and points at the bugzilla we have open:
[  233.588248] CIFS VFS: file=fs/cifs/file.c line=480 pid=13279 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.589462] CIFS VFS: file=fs/cifs/file.c line=311 pid=13279 cifs_sem_op op=down_write sem=ffff93b8a7424bd8
[  233.592333] CIFS VFS: file=fs/cifs/file.c line=314 pid=13275 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.612324] CIFS VFS: file=fs/cifs/file.c line=314 pid=13279 cifs_sem_op op=up_write sem=ffff93b8a7424bd8
[  233.612510] CIFS VFS: file=fs/cifs/file.c line=3920 pid=13275 cifs_sem_op op=down_read sem=ffff93b8a7424bd8
[  233.616487] CIFS VFS: file=fs/cifs/file.c line=3920 pid=13279 cifs_sem_op op=down_read sem=ffff93b8a7424bd8  T0: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9
[  254.389458] CIFS VFS: Send error in read = -9
[  254.391943] CIFS VFS: file=fs/cifs/file.c line=3926 pid=13279 cifs_sem_op op=up_read sem=ffff93b8a7424bd8
[  254.397453] CIFS VFS: file=fs/cifs/file.c line=471 pid=13279 cifs_sem_op op=down_write sem=ffff93b8a7424bd8  T1: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9
[  254.399798] CIFS VFS: file=fs/cifs/file.c line=630 pid=13275 cifs_sem_op op=down_read sem=ffff93b8a7424bd8   T2: https://bugzilla.redhat.com/show_bug.cgi?id=1760835#c9



There was a patch that removed a lockdep warning, but it looks like it
removed a semi-valid warning, because clearly we can deadlock here:

commit 560d388950ceda5e7c7cdef7f3d9a8ff297bbf9d
Author: Rabin Vincent <rabinv@axis.com>
Date:   Wed May 3 17:17:21 2017 +0200

    CIFS: silence lockdep splat in cifs_relock_file()

    cifs_relock_file() can perform a down_write() on the inode's lock_sem even
    though it was already performed in cifs_strict_readv().  Lockdep complains
    about this.  AFAICS, there is no problem here, and lockdep just needs to be
    told that this nesting is OK.

     =============================================
     [ INFO: possible recursive locking detected ]
     4.11.0+ #20 Not tainted
     ---------------------------------------------
     cat/701 is trying to acquire lock:
      (&cifsi->lock_sem){++++.+}, at: cifs_reopen_file+0x7a7/0xc00

     but task is already holding lock:
      (&cifsi->lock_sem){++++.+}, at: cifs_strict_readv+0x177/0x310

     other info that might help us debug this:
      Possible unsafe locking scenario:

            CPU0
            ----
       lock(&cifsi->lock_sem);
       lock(&cifsi->lock_sem);

      *** DEADLOCK ***

      May be due to missing lock nesting notation

     1 lock held by cat/701:
      #0:  (&cifsi->lock_sem){++++.+}, at: cifs_strict_readv+0x177/0x310

     stack backtrace:
     CPU: 0 PID: 701 Comm: cat Not tainted 4.11.0+ #20
     Call Trace:
      dump_stack+0x85/0xc2
      __lock_acquire+0x17dd/0x2260
      ? trace_hardirqs_on_thunk+0x1a/0x1c
      ? preempt_schedule_irq+0x6b/0x80
      lock_acquire+0xcc/0x260
      ? lock_acquire+0xcc/0x260
      ? cifs_reopen_file+0x7a7/0xc00
      down_read+0x2d/0x70
      ? cifs_reopen_file+0x7a7/0xc00
      cifs_reopen_file+0x7a7/0xc00
      ? printk+0x43/0x4b
      cifs_readpage_worker+0x327/0x8a0
      cifs_readpage+0x8c/0x2a0
      generic_file_read_iter+0x692/0xd00
      cifs_strict_readv+0x29f/0x310
      generic_file_splice_read+0x11c/0x1c0
      do_splice_to+0xa5/0xc0
      splice_direct_to_actor+0xfa/0x350
      ? generic_pipe_buf_nosteal+0x10/0x10
      do_splice_direct+0xb5/0xe0
      do_sendfile+0x278/0x3a0
      SyS_sendfile64+0xc4/0xe0
      entry_SYSCALL_64_fastpath+0x1f/0xbe

    Signed-off-by: Rabin Vincent <rabinv@axis.com>
    Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
    Signed-off-by: Steve French <smfrench@gmail.com>
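
For reference, the silencing change in that commit was essentially the
one-liner below (quoted from memory, so treat as approximate); note
that the nesting annotation only quiets lockdep, it does not change
the blocking behavior:

-	down_read(&cinode->lock_sem);
+	down_read_nested(&cinode->lock_sem, SINGLE_DEPTH_NESTING);

in cifs_relock_file(), which is why the deadlock above no longer
produces any lockdep report.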

[-- Attachment #2: 0001-debug-cifs-lock_sem-deadlock-redhat-bug-1760835.patch --]
[-- Type: text/x-patch, Size: 10878 bytes --]

From 68fe8004456298070247fd23c3af91b332d98d35 Mon Sep 17 00:00:00 2001
From: Dave Wysochanski <dwysocha@redhat.com>
Date: Mon, 21 Oct 2019 07:28:04 -0400
Subject: [PATCH] debug cifs lock_sem deadlock - redhat bug 1760835

debug of https://bugzilla.redhat.com/show_bug.cgi?id=1760835

cifs_dbg trace of lock_sem
turn off ratelimiting
---
 Makefile            |  2 +-
 fs/cifs/cifsfs.c    |  1 +
 fs/cifs/cifsproto.h | 34 ++++++++++++++++++++++++++++++++++
 fs/cifs/file.c      | 32 ++++++++++++++++++++++++++++++++
 fs/cifs/smb2file.c  |  2 ++
 lib/ratelimit.c     |  3 +--
 6 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/Makefile b/Makefile
index 5475cdb6d57d..47b0d42f0418 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 4
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc4-dbg3
 NAME = Nesting Opossum
 
 # *DOCUMENTATION*
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index c049c7b3aa87..431a9d46b409 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1335,6 +1335,7 @@ static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off,
 	struct cifsInodeInfo *cifsi = inode;
 
 	inode_init_once(&cifsi->vfs_inode);
+	cifs_sem_op_trace_init_rwsem(&cifsi->lock_sem);
 	init_rwsem(&cifsi->lock_sem);
 }
 
diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
index e53e9f62b87b..5488e5e11e40 100644
--- a/fs/cifs/cifsproto.h
+++ b/fs/cifs/cifsproto.h
@@ -65,6 +65,40 @@ extern int smb_send(struct TCP_Server_Info *, struct smb_hdr *,
 	else							\
 		trace_smb3_exit_done(curr_xid, __func__);	\
 } while (0)
+
+
+/* Debug lock_sem */
+#define cifs_sem_op_trace(SEM, OP) \
+do { \
+	cifs_dbg(VFS, "file=%s line=%d pid=%d cifs_sem_op op=%s sem=%px\n", \
+		 __FILE__, __LINE__, current->pid, #OP, SEM); \
+} while(0)
+
+#define cifs_sem_op_trace_init_rwsem(X) \
+do { \
+	cifs_sem_op_trace(X, init_rwsem); \
+} while (0)
+
+#define cifs_sem_op_trace_down_read(X) \
+do { \
+	cifs_sem_op_trace(X, down_read); \
+} while (0)
+
+#define cifs_sem_op_trace_up_read(X) \
+do { \
+	cifs_sem_op_trace(X, up_read); \
+} while (0)
+
+#define cifs_sem_op_trace_down_write(X) \
+do { \
+	cifs_sem_op_trace(X, down_write); \
+} while (0)
+
+#define cifs_sem_op_trace_up_write(X) \
+do { \
+	cifs_sem_op_trace(X, up_write); \
+} while (0)
+
 extern int init_cifs_idmap(void);
 extern void exit_cifs_idmap(void);
 extern int init_cifs_spnego(void);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 5ad15de2bb4f..58e554cc397a 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -270,6 +270,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
 	struct cifs_fid_locks *cur;
 	bool has_locks = false;
 
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read(&cinode->lock_sem);
 	list_for_each_entry(cur, &cinode->llist, llist) {
 		if (!list_empty(&cur->locks)) {
@@ -277,6 +278,7 @@ int cifs_posix_open(char *full_path, struct inode **pinode,
 			break;
 		}
 	}
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	return has_locks;
 }
@@ -306,8 +308,10 @@ struct cifsFileInfo *
 	INIT_LIST_HEAD(&fdlocks->locks);
 	fdlocks->cfile = cfile;
 	cfile->llist = fdlocks;
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	list_add(&fdlocks->llist, &cinode->llist);
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 
 	cfile->count = 1;
@@ -464,6 +468,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
 	 * Delete any outstanding lock records. We'll lose them when the file
 	 * is closed anyway.
 	 */
+	cifs_sem_op_trace_down_write(&cifsi->lock_sem);
 	down_write(&cifsi->lock_sem);
 	list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) {
 		list_del(&li->llist);
@@ -472,6 +477,7 @@ void _cifsFileInfo_put(struct cifsFileInfo *cifs_file, bool wait_oplock_handler)
 	}
 	list_del(&cifs_file->llist->llist);
 	kfree(cifs_file->llist);
+	cifs_sem_op_trace_up_write(&cifsi->lock_sem);
 	up_write(&cifsi->lock_sem);
 
 	cifs_put_tlink(cifs_file->tlink);
@@ -621,9 +627,11 @@ int cifs_open(struct inode *inode, struct file *file)
 	struct cifs_tcon *tcon = tlink_tcon(cfile->tlink);
 	int rc = 0;
 
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read_nested(&cinode->lock_sem, SINGLE_DEPTH_NESTING);
 	if (cinode->can_cache_brlcks) {
 		/* can cache locks - no need to relock */
+		cifs_sem_op_trace_up_read(&cinode->lock_sem);
 		up_read(&cinode->lock_sem);
 		return rc;
 	}
@@ -635,6 +643,7 @@ int cifs_open(struct inode *inode, struct file *file)
 	else
 		rc = tcon->ses->server->ops->push_mand_locks(cfile);
 
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	return rc;
 }
@@ -1001,6 +1010,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 	struct TCP_Server_Info *server = tlink_tcon(cfile->tlink)->ses->server;
 	bool exist;
 
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read(&cinode->lock_sem);
 
 	exist = cifs_find_lock_conflict(cfile, offset, length, type,
@@ -1019,6 +1029,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 	else
 		flock->fl_type = F_UNLCK;
 
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	return rc;
 }
@@ -1027,8 +1038,10 @@ int cifs_closedir(struct inode *inode, struct file *file)
 cifs_lock_add(struct cifsFileInfo *cfile, struct cifsLockInfo *lock)
 {
 	struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry));
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	list_add_tail(&lock->llist, &cfile->llist->locks);
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 }
 
@@ -1049,6 +1062,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 
 try_again:
 	exist = false;
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 
 	exist = cifs_find_lock_conflict(cfile, lock->offset, lock->length,
@@ -1056,6 +1070,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 					CIFS_LOCK_OP);
 	if (!exist && cinode->can_cache_brlcks) {
 		list_add_tail(&lock->llist, &cfile->llist->locks);
+		cifs_sem_op_trace_up_write(&cinode->lock_sem);
 		up_write(&cinode->lock_sem);
 		return rc;
 	}
@@ -1066,16 +1081,19 @@ int cifs_closedir(struct inode *inode, struct file *file)
 		rc = -EACCES;
 	else {
 		list_add_tail(&lock->blist, &conf_lock->blist);
+		cifs_sem_op_trace_up_write(&cinode->lock_sem);
 		up_write(&cinode->lock_sem);
 		rc = wait_event_interruptible(lock->block_q,
 					(lock->blist.prev == &lock->blist) &&
 					(lock->blist.next == &lock->blist));
 		if (!rc)
 			goto try_again;
+		cifs_sem_op_trace_down_write(&cinode->lock_sem);
 		down_write(&cinode->lock_sem);
 		list_del_init(&lock->blist);
 	}
 
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 	return rc;
 }
@@ -1097,6 +1115,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 	if ((flock->fl_flags & FL_POSIX) == 0)
 		return 1;
 
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read(&cinode->lock_sem);
 	posix_test_lock(file, flock);
 
@@ -1105,6 +1124,7 @@ int cifs_closedir(struct inode *inode, struct file *file)
 		rc = 1;
 	}
 
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	return rc;
 }
@@ -1125,13 +1145,16 @@ int cifs_closedir(struct inode *inode, struct file *file)
 		return rc;
 
 try_again:
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	if (!cinode->can_cache_brlcks) {
+		cifs_sem_op_trace_up_write(&cinode->lock_sem);
 		up_write(&cinode->lock_sem);
 		return rc;
 	}
 
 	rc = posix_lock_file(file, flock, NULL);
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 	if (rc == FILE_LOCK_DEFERRED) {
 		rc = wait_event_interruptible(flock->fl_wait, !flock->fl_blocker);
@@ -1331,8 +1354,10 @@ struct lock_to_push {
 	int rc = 0;
 
 	/* we are going to update can_cache_brlcks here - need a write access */
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	if (!cinode->can_cache_brlcks) {
+		cifs_sem_op_trace_up_write(&cinode->lock_sem);
 		up_write(&cinode->lock_sem);
 		return rc;
 	}
@@ -1345,6 +1370,7 @@ struct lock_to_push {
 		rc = tcon->ses->server->ops->push_mand_locks(cfile);
 
 	cinode->can_cache_brlcks = false;
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 	return rc;
 }
@@ -1522,6 +1548,7 @@ struct lock_to_push {
 	if (!buf)
 		return -ENOMEM;
 
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	for (i = 0; i < 2; i++) {
 		cur = buf;
@@ -1592,6 +1619,7 @@ struct lock_to_push {
 		}
 	}
 
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 	kfree(buf);
 	return rc;
@@ -3148,6 +3176,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from)
 	 * We need to hold the sem to be sure nobody modifies lock list
 	 * with a brlock that prevents writing.
 	 */
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read(&cinode->lock_sem);
 
 	rc = generic_write_checks(iocb, from);
@@ -3161,6 +3190,7 @@ ssize_t cifs_user_writev(struct kiocb *iocb, struct iov_iter *from)
 	else
 		rc = -EACCES;
 out:
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	inode_unlock(inode);
 
@@ -3887,11 +3917,13 @@ ssize_t cifs_user_readv(struct kiocb *iocb, struct iov_iter *to)
 	 * We need to hold the sem to be sure nobody modifies lock list
 	 * with a brlock that prevents reading.
 	 */
+	cifs_sem_op_trace_down_read(&cinode->lock_sem);
 	down_read(&cinode->lock_sem);
 	if (!cifs_find_lock_conflict(cfile, iocb->ki_pos, iov_iter_count(to),
 				     tcon->ses->server->vals->shared_lock_type,
 				     0, NULL, CIFS_READ_OP))
 		rc = generic_file_read_iter(iocb, to);
+	cifs_sem_op_trace_up_read(&cinode->lock_sem);
 	up_read(&cinode->lock_sem);
 	return rc;
 }
diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c
index e6a1fc72018f..9f89a765d977 100644
--- a/fs/cifs/smb2file.c
+++ b/fs/cifs/smb2file.c
@@ -145,6 +145,7 @@
 
 	cur = buf;
 
+	cifs_sem_op_trace_down_write(&cinode->lock_sem);
 	down_write(&cinode->lock_sem);
 	list_for_each_entry_safe(li, tmp, &cfile->llist->locks, llist) {
 		if (flock->fl_start > li->offset ||
@@ -206,6 +207,7 @@
 		} else
 			cifs_free_llist(&tmp_llist);
 	}
+	cifs_sem_op_trace_up_write(&cinode->lock_sem);
 	up_write(&cinode->lock_sem);
 
 	kfree(buf);
diff --git a/lib/ratelimit.c b/lib/ratelimit.c
index e01a93f46f83..bc87f790304c 100644
--- a/lib/ratelimit.c
+++ b/lib/ratelimit.c
@@ -29,8 +29,7 @@ int ___ratelimit(struct ratelimit_state *rs, const char *func)
 	unsigned long flags;
 	int ret;
 
-	if (!rs->interval)
-		return 1;
+	return 1;
 
 	/*
 	 * If we contend on this state's lock then almost
-- 
1.8.3.1

