linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c
@ 2017-10-23  0:53 NeilBrown
  2017-10-23  0:53 ` [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation' NeilBrown
                   ` (9 more replies)
  0 siblings, 10 replies; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

This file contains quite a bit of dead code and unused variables.
This patch series cleans it up in various ways.
It should not change behaviour at all, just code
readability/maintainability.

I sent these back in July but got no response, possibly because they
were included with other patches which caused a distraction.
So here they are by themselves.

Thanks,
NeilBrown


---

NeilBrown (9):
      staging: lustre: ldlm: remove 'first_enq' arg from ldlm_process_flock_lock()
      staging: lustre: ldlm: remove unused 'work_list' arg from ldlm_process_flock_lock()
      staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock()
      staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock()
      staging: lustre: ldlm: remove unused 'overlaps' variable
      staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy()
      staging: lustre: ldlm: tidy list walking in ldlm_flock()
      staging: lustre: ldlm: remove unnecessary 'ownlocks' variable.
      staging: lustre: ldlm: remove unused field 'fwd_generation'


 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |  208 +++++------------------
 1 file changed, 42 insertions(+), 166 deletions(-)

--
Signature

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (3 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' " NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:08   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable NeilBrown
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

It is only ever set to '1', so we can just assume that and remove the code.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index cb826e9e840e..f719dc05e1ea 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -121,15 +121,9 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
  * It is also responsible for splitting a lock if a portion of the lock
  * is released.
  *
- * If \a first_enq is 0 (ie, called from ldlm_reprocess_queue):
- *   - blocking ASTs have already been sent
- *
- * If \a first_enq is 1 (ie, called from ldlm_lock_enqueue):
- *   - blocking ASTs have not been sent yet, so list of conflicting locks
- *     would be collected and ASTs sent.
  */
 static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
-				   int first_enq, enum ldlm_error *err,
+				   enum ldlm_error *err,
 				   struct list_head *work_list)
 {
 	struct ldlm_resource *res = req->l_resource;
@@ -197,11 +191,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
 			if (!ldlm_flocks_overlap(lock, req))
 				continue;
 
-			if (!first_enq) {
-				reprocess_failed = 1;
-				continue;
-			}
-
 			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
 				ldlm_flock_destroy(req, mode, *flags);
 				*err = -EAGAIN;
@@ -605,7 +594,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		/* We need to reprocess the lock to do merges or splits
 		 * with existing locks owned by this process.
 		 */
-		ldlm_process_flock_lock(lock, &noreproc, 1, &err, NULL);
+		ldlm_process_flock_lock(lock, &noreproc, &err, NULL);
 	}
 	unlock_res_and_lock(lock);
 	return rc;

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (5 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:11   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock() NeilBrown
                   ` (2 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

This arg is used to return an error code, but the returned code is never
looked at.  So there is no point returning it.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 766653b4d8a5..05e6b67b0e72 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -122,8 +122,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
  * is released.
  *
  */
-static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
-				   enum ldlm_error *err)
+static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 {
 	struct ldlm_resource *res = req->l_resource;
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
@@ -145,8 +144,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
 	       req->l_policy_data.l_flock.start,
 	       req->l_policy_data.l_flock.end);
 
-	*err = ELDLM_OK;
-
 	/* No blocking ASTs are sent to the clients for
 	 * Posix file & record locks
 	 */
@@ -192,7 +189,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
 
 			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
 				ldlm_flock_destroy(req, mode, *flags);
-				*err = -EAGAIN;
 				return LDLM_ITER_STOP;
 			}
 
@@ -330,7 +326,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
 			if (IS_ERR(new2)) {
 				ldlm_flock_destroy(req, lock->l_granted_mode,
 						   *flags);
-				*err = PTR_ERR(new2);
 				return LDLM_ITER_STOP;
 			}
 			goto reprocess;
@@ -440,7 +435,6 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 	struct obd_import	      *imp = NULL;
 	struct ldlm_flock_wait_data     fwd;
 	struct l_wait_info	      lwi;
-	enum ldlm_error		    err;
 	int			     rc = 0;
 
 	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT2, 4);
@@ -593,7 +587,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		/* We need to reprocess the lock to do merges or splits
 		 * with existing locks owned by this process.
 		 */
-		ldlm_process_flock_lock(lock, &noreproc, &err);
+		ldlm_process_flock_lock(lock, &noreproc);
 	}
 	unlock_res_and_lock(lock);
 	return rc;

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (2 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock() NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:09   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' " NeilBrown
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

'work_list' is only set to NULL, and is never used.
So discard it.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index f719dc05e1ea..766653b4d8a5 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -123,8 +123,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
  *
  */
 static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
-				   enum ldlm_error *err,
-				   struct list_head *work_list)
+				   enum ldlm_error *err)
 {
 	struct ldlm_resource *res = req->l_resource;
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
@@ -594,7 +593,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		/* We need to reprocess the lock to do merges or splits
 		 * with existing locks owned by this process.
 		 */
-		ldlm_process_flock_lock(lock, &noreproc, &err, NULL);
+		ldlm_process_flock_lock(lock, &noreproc, &err);
 	}
 	unlock_res_and_lock(lock);
 	return rc;

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
  2017-10-23  0:53 ` [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation' NeilBrown
  2017-10-23  0:53 ` [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:19   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' " NeilBrown
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

This is only ever set to LDLM_FL_WAIT_NOREPROC, so we can remove the arg
and discard any code that is only run when it doesn't have that value.
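
As a minimal, self-contained sketch of the transformation (hypothetical
names, not the lustre code): once every caller passes the same constant,
the parameter can be folded away and any branch guarded by a different
value deleted as dead code.

#include <stdio.h>

#define WAIT_NOREPROC 0x1ULL	/* stand-in for LDLM_FL_WAIT_NOREPROC */

/* Before: both branches must be kept while 'flags' can vary. */
static int process_before(unsigned long long *flags)
{
	if (*flags == WAIT_NOREPROC)
		return 0;	/* the only path callers ever exercise */
	return -1;		/* dead once every caller passes WAIT_NOREPROC */
}

/* After: the parameter is gone and only the live path remains. */
static int process_after(void)
{
	return 0;
}

int main(void)
{
	unsigned long long flags = WAIT_NOREPROC;

	printf("before=%d after=%d\n",
	       process_before(&flags), process_after());
	return 0;
}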

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |  110 ++++-------------------
 1 file changed, 21 insertions(+), 89 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 05e6b67b0e72..2d1fa2b33129 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -122,7 +122,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
  * is released.
  *
  */
-static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
+static int ldlm_process_flock_lock(struct ldlm_lock *req)
 {
 	struct ldlm_resource *res = req->l_resource;
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
@@ -138,8 +138,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 	const struct ldlm_callback_suite null_cbs = { };
 
 	CDEBUG(D_DLMTRACE,
-	       "flags %#llx owner %llu pid %u mode %u start %llu end %llu\n",
-	       *flags, new->l_policy_data.l_flock.owner,
+	       "owner %llu pid %u mode %u start %llu end %llu\n",
+	       new->l_policy_data.l_flock.owner,
 	       new->l_policy_data.l_flock.pid, mode,
 	       req->l_policy_data.l_flock.start,
 	       req->l_policy_data.l_flock.end);
@@ -150,74 +150,16 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 	req->l_blocking_ast = NULL;
 
 reprocess:
-	if ((*flags == LDLM_FL_WAIT_NOREPROC) || (mode == LCK_NL)) {
-		/* This loop determines where this processes locks start
-		 * in the resource lr_granted list.
-		 */
-		list_for_each(tmp, &res->lr_granted) {
-			lock = list_entry(tmp, struct ldlm_lock,
-					  l_res_link);
-			if (ldlm_same_flock_owner(lock, req)) {
-				ownlocks = tmp;
-				break;
-			}
-		}
-	} else {
-		int reprocess_failed = 0;
-
-		lockmode_verify(mode);
-
-		/* This loop determines if there are existing locks
-		 * that conflict with the new lock request.
-		 */
-		list_for_each(tmp, &res->lr_granted) {
-			lock = list_entry(tmp, struct ldlm_lock,
-					  l_res_link);
-
-			if (ldlm_same_flock_owner(lock, req)) {
-				if (!ownlocks)
-					ownlocks = tmp;
-				continue;
-			}
-
-			/* locks are compatible, overlap doesn't matter */
-			if (lockmode_compat(lock->l_granted_mode, mode))
-				continue;
-
-			if (!ldlm_flocks_overlap(lock, req))
-				continue;
-
-			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
-				ldlm_flock_destroy(req, mode, *flags);
-				return LDLM_ITER_STOP;
-			}
-
-			if (*flags & LDLM_FL_TEST_LOCK) {
-				ldlm_flock_destroy(req, mode, *flags);
-				req->l_req_mode = lock->l_granted_mode;
-				req->l_policy_data.l_flock.pid =
-					lock->l_policy_data.l_flock.pid;
-				req->l_policy_data.l_flock.start =
-					lock->l_policy_data.l_flock.start;
-				req->l_policy_data.l_flock.end =
-					lock->l_policy_data.l_flock.end;
-				*flags |= LDLM_FL_LOCK_CHANGED;
-				return LDLM_ITER_STOP;
-			}
-
-			ldlm_resource_add_lock(res, &res->lr_waiting, req);
-			*flags |= LDLM_FL_BLOCK_GRANTED;
-			return LDLM_ITER_STOP;
+	/* This loop determines where this processes locks start
+	 * in the resource lr_granted list.
+	 */
+	list_for_each(tmp, &res->lr_granted) {
+		lock = list_entry(tmp, struct ldlm_lock,
+				  l_res_link);
+		if (ldlm_same_flock_owner(lock, req)) {
+			ownlocks = tmp;
+			break;
 		}
-		if (reprocess_failed)
-			return LDLM_ITER_CONTINUE;
-	}
-
-	if (*flags & LDLM_FL_TEST_LOCK) {
-		ldlm_flock_destroy(req, mode, *flags);
-		req->l_req_mode = LCK_NL;
-		*flags |= LDLM_FL_LOCK_CHANGED;
-		return LDLM_ITER_STOP;
 	}
 
 	/* Scan the locks owned by this process that overlap this request.
@@ -267,7 +209,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 			}
 
 			if (added) {
-				ldlm_flock_destroy(lock, mode, *flags);
+				ldlm_flock_destroy(lock, mode,
+						   LDLM_FL_WAIT_NOREPROC);
 			} else {
 				new = lock;
 				added = 1;
@@ -293,7 +236,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 					new->l_policy_data.l_flock.end + 1;
 				break;
 			}
-			ldlm_flock_destroy(lock, lock->l_req_mode, *flags);
+			ldlm_flock_destroy(lock, lock->l_req_mode,
+					   LDLM_FL_WAIT_NOREPROC);
 			continue;
 		}
 		if (new->l_policy_data.l_flock.end >=
@@ -325,7 +269,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 			lock_res_and_lock(req);
 			if (IS_ERR(new2)) {
 				ldlm_flock_destroy(req, lock->l_granted_mode,
-						   *flags);
+						   LDLM_FL_WAIT_NOREPROC);
 				return LDLM_ITER_STOP;
 			}
 			goto reprocess;
@@ -354,9 +298,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 					     &new2->l_remote_handle,
 					     &new2->l_exp_hash);
 		}
-		if (*flags == LDLM_FL_WAIT_NOREPROC)
-			ldlm_lock_addref_internal_nolock(new2,
-							 lock->l_granted_mode);
+		ldlm_lock_addref_internal_nolock(new2,
+						 lock->l_granted_mode);
 
 		/* insert new2 at lock */
 		ldlm_resource_add_lock(res, ownlocks, new2);
@@ -377,22 +320,13 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
 		ldlm_resource_add_lock(res, ownlocks, req);
 	}
 
-	if (*flags != LDLM_FL_WAIT_NOREPROC) {
-		/* The only one possible case for client-side calls flock
-		 * policy function is ldlm_flock_completion_ast inside which
-		 * carries LDLM_FL_WAIT_NOREPROC flag.
-		 */
-		CERROR("Illegal parameter for client-side-only module.\n");
-		LBUG();
-	}
-
 	/* In case we're reprocessing the requested lock we can't destroy
 	 * it until after calling ldlm_add_ast_work_item() above so that laawi()
 	 * can bump the reference count on \a req. Otherwise \a req
 	 * could be freed before the completion AST can be sent.
 	 */
 	if (added)
-		ldlm_flock_destroy(req, mode, *flags);
+		ldlm_flock_destroy(req, mode, LDLM_FL_WAIT_NOREPROC);
 
 	ldlm_resource_dump(D_INFO, res);
 	return LDLM_ITER_CONTINUE;
@@ -582,12 +516,10 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		getlk->fl_start = (loff_t)lock->l_policy_data.l_flock.start;
 		getlk->fl_end = (loff_t)lock->l_policy_data.l_flock.end;
 	} else {
-		__u64 noreproc = LDLM_FL_WAIT_NOREPROC;
-
 		/* We need to reprocess the lock to do merges or splits
 		 * with existing locks owned by this process.
 		 */
-		ldlm_process_flock_lock(lock, &noreproc);
+		ldlm_process_flock_lock(lock);
 	}
 	unlock_res_and_lock(lock);
 	return rc;

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
  2017-10-23  0:53 ` [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation' NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:20   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock() NeilBrown
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

'overlaps' is never used, only incremented.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |    3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 2d1fa2b33129..d5a5742a1171 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -133,7 +133,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 	struct ldlm_lock *new2 = NULL;
 	enum ldlm_mode mode = req->l_req_mode;
 	int added = (mode == LCK_NL);
-	int overlaps = 0;
 	int splitted = 0;
 	const struct ldlm_callback_suite null_cbs = { };
 
@@ -226,8 +225,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 		    lock->l_policy_data.l_flock.start)
 			break;
 
-		++overlaps;
-
 		if (new->l_policy_data.l_flock.start <=
 		    lock->l_policy_data.l_flock.start) {
 			if (new->l_policy_data.l_flock.end <

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (7 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock() NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:23   ` Dilger, Andreas
  2017-10-24 22:20 ` [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c James Simmons
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

The only value ever passed is LDLM_FL_WAIT_NOREPROC, so assume that
instead of passing it.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   36 ++++++++++-------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index d5a5742a1171..1bf56892fcf5 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -88,24 +88,23 @@ ldlm_flocks_overlap(struct ldlm_lock *lock, struct ldlm_lock *new)
 }
 
 static inline void
-ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
+ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode)
 {
-	LDLM_DEBUG(lock, "%s(mode: %d, flags: 0x%llx)",
-		   __func__, mode, flags);
+	LDLM_DEBUG(lock, "%s(mode: %d)",
+		   __func__, mode);
 
 	/* Safe to not lock here, since it should be empty anyway */
 	LASSERT(hlist_unhashed(&lock->l_exp_flock_hash));
 
 	list_del_init(&lock->l_res_link);
-	if (flags == LDLM_FL_WAIT_NOREPROC) {
-		/* client side - set a flag to prevent sending a CANCEL */
-		lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_CBPENDING;
 
-		/* when reaching here, it is under lock_res_and_lock(). Thus,
-		 * need call the nolock version of ldlm_lock_decref_internal
-		 */
-		ldlm_lock_decref_internal_nolock(lock, mode);
-	}
+	/* client side - set a flag to prevent sending a CANCEL */
+	lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_CBPENDING;
+
+	/* when reaching here, it is under lock_res_and_lock(). Thus,
+	 * need call the nolock version of ldlm_lock_decref_internal
+	 */
+	ldlm_lock_decref_internal_nolock(lock, mode);
 
 	ldlm_lock_destroy_nolock(lock);
 }
@@ -208,8 +207,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 			}
 
 			if (added) {
-				ldlm_flock_destroy(lock, mode,
-						   LDLM_FL_WAIT_NOREPROC);
+				ldlm_flock_destroy(lock, mode);
 			} else {
 				new = lock;
 				added = 1;
@@ -233,8 +231,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 					new->l_policy_data.l_flock.end + 1;
 				break;
 			}
-			ldlm_flock_destroy(lock, lock->l_req_mode,
-					   LDLM_FL_WAIT_NOREPROC);
+			ldlm_flock_destroy(lock, lock->l_req_mode);
 			continue;
 		}
 		if (new->l_policy_data.l_flock.end >=
@@ -265,8 +262,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 						NULL, 0, LVB_T_NONE);
 			lock_res_and_lock(req);
 			if (IS_ERR(new2)) {
-				ldlm_flock_destroy(req, lock->l_granted_mode,
-						   LDLM_FL_WAIT_NOREPROC);
+				ldlm_flock_destroy(req, lock->l_granted_mode);
 				return LDLM_ITER_STOP;
 			}
 			goto reprocess;
@@ -323,7 +319,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 	 * could be freed before the completion AST can be sent.
 	 */
 	if (added)
-		ldlm_flock_destroy(req, mode, LDLM_FL_WAIT_NOREPROC);
+		ldlm_flock_destroy(req, mode);
 
 	ldlm_resource_dump(D_INFO, res);
 	return LDLM_ITER_CONTINUE;
@@ -477,7 +473,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 			LDLM_DEBUG(lock, "client-side enqueue deadlock received");
 			rc = -EDEADLK;
 		}
-		ldlm_flock_destroy(lock, mode, LDLM_FL_WAIT_NOREPROC);
+		ldlm_flock_destroy(lock, mode);
 		unlock_res_and_lock(lock);
 
 		/* Need to wake up the waiter if we were evicted */
@@ -498,7 +494,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 		 * in the lock changes we can decref the appropriate refcount.
 		 */
 		LASSERT(ldlm_is_test_lock(lock));
-		ldlm_flock_destroy(lock, getlk->fl_type, LDLM_FL_WAIT_NOREPROC);
+		ldlm_flock_destroy(lock, getlk->fl_type);
 		switch (lock->l_granted_mode) {
 		case LCK_PR:
 			getlk->fl_type = F_RDLCK;

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock()
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (6 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock() NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:32   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy() NeilBrown
  2017-10-24 22:20 ` [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c James Simmons
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

Use list_for_each_entry variants to
avoid the explicit list_entry() calls.
This allows us to use list_for_each_entry_safe_from()
instead of adding a local list-walking macro.

Also improve some comments so that it is more obvious
that the locks are sorted per-owner and that we need
to find the insertion point.
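
For reference, a rough kernel-style sketch of the resulting idiom, using
toy types rather than the lustre structures:

#include <linux/list.h>

struct toy_lock {			/* stands in for struct ldlm_lock */
	int owner;
	struct list_head link;		/* stands in for l_res_link */
};

static void walk_owner_run(struct list_head *head, struct toy_lock *start)
{
	struct toy_lock *lock, *tmp;

	if (start) {
		lock = start;
	} else {
		/* No lock for this owner yet: point the cursor at the
		 * fictitious entry wrapping the list head.  The loop below
		 * only touches the embedded list_head (the real head), so
		 * it terminates immediately.
		 */
		lock = list_entry(head, struct toy_lock, link);
	}

	/* Iterate from 'lock' inclusive; 'tmp' is the lookahead that makes
	 * deleting the current entry safe - this is what the removed
	 * list_for_remaining_safe() macro open-coded.
	 */
	list_for_each_entry_safe_from(lock, tmp, head, link) {
		/* ... merge, split, or list_del_init(&lock->link) ... */
	}
}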

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   45 ++++++++++-------------
 1 file changed, 19 insertions(+), 26 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 1bf56892fcf5..0bf6dce1c5b1 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -59,17 +59,6 @@
 #include <linux/list.h>
 #include "ldlm_internal.h"
 
-/**
- * list_for_remaining_safe - iterate over the remaining entries in a list
- *	      and safeguard against removal of a list entry.
- * \param pos   the &struct list_head to use as a loop counter. pos MUST
- *	      have been initialized prior to using it in this macro.
- * \param n     another &struct list_head to use as temporary storage
- * \param head  the head for your list.
- */
-#define list_for_remaining_safe(pos, n, head) \
-	for (n = pos->next; pos != (head); pos = n, n = pos->next)
-
 static inline int
 ldlm_same_flock_owner(struct ldlm_lock *lock, struct ldlm_lock *new)
 {
@@ -125,8 +114,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 {
 	struct ldlm_resource *res = req->l_resource;
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
-	struct list_head *tmp;
-	struct list_head *ownlocks = NULL;
+	struct ldlm_lock *tmp;
+	struct ldlm_lock *ownlocks = NULL;
 	struct ldlm_lock *lock = NULL;
 	struct ldlm_lock *new = req;
 	struct ldlm_lock *new2 = NULL;
@@ -151,23 +140,23 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 	/* This loop determines where this processes locks start
 	 * in the resource lr_granted list.
 	 */
-	list_for_each(tmp, &res->lr_granted) {
-		lock = list_entry(tmp, struct ldlm_lock,
-				  l_res_link);
+	list_for_each_entry(lock, &res->lr_granted, l_res_link) {
 		if (ldlm_same_flock_owner(lock, req)) {
-			ownlocks = tmp;
+			ownlocks = lock;
 			break;
 		}
 	}
 
-	/* Scan the locks owned by this process that overlap this request.
+	/* Scan the locks owned by this process to find the insertion point
+	 * (as locks are ordered), and to handle overlaps.
 	 * We may have to merge or split existing locks.
 	 */
-	if (!ownlocks)
-		ownlocks = &res->lr_granted;
-
-	list_for_remaining_safe(ownlocks, tmp, &res->lr_granted) {
-		lock = list_entry(ownlocks, struct ldlm_lock, l_res_link);
+	if (ownlocks)
+		lock = ownlocks;
+	else
+		lock = list_entry(&res->lr_granted,
+				  struct ldlm_lock, l_res_link);
+	list_for_each_entry_safe_from(lock, tmp, &res->lr_granted, l_res_link) {
 
 		if (!ldlm_same_flock_owner(lock, new))
 			break;
@@ -295,7 +284,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 						 lock->l_granted_mode);
 
 		/* insert new2 at lock */
-		ldlm_resource_add_lock(res, ownlocks, new2);
+		ldlm_resource_add_lock(res, &lock->l_res_link, new2);
 		LDLM_LOCK_RELEASE(new2);
 		break;
 	}
@@ -309,8 +298,12 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 
 	if (!added) {
 		list_del_init(&req->l_res_link);
-		/* insert new lock before ownlocks in list. */
-		ldlm_resource_add_lock(res, ownlocks, req);
+		/* insert new lock before "lock", which might be the
+		 * next lock for this owner, or might be the first
+		 * lock for the next owner, or might not be a lock at
+		 * all, but instead points at the head of the list
+		 */
+		ldlm_resource_add_lock(res, &lock->l_res_link, req);
 	}
 
 	/* In case we're reprocessing the requested lock we can't destroy

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation'
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:39   ` [lustre-devel] " Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable NeilBrown
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

With this field gone, we don't need local variables 'imp' or 'obd'
any more.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   21 +++------------------
 1 file changed, 3 insertions(+), 18 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 774d8667769a..9c0e9cd00000 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -311,7 +311,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 
 struct ldlm_flock_wait_data {
 	struct ldlm_lock *fwd_lock;
-	int	       fwd_generation;
 };
 
 static void
@@ -342,11 +341,9 @@ int
 ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 {
 	struct file_lock		*getlk = lock->l_ast_data;
-	struct obd_device	      *obd;
-	struct obd_import	      *imp = NULL;
-	struct ldlm_flock_wait_data     fwd;
-	struct l_wait_info	      lwi;
-	int			     rc = 0;
+	struct ldlm_flock_wait_data	fwd;
+	struct l_wait_info		lwi;
+	int				rc = 0;
 
 	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT2, 4);
 	if (OBD_FAIL_PRECHECK(OBD_FAIL_LDLM_CP_CB_WAIT3)) {
@@ -374,18 +371,6 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
 
 	LDLM_DEBUG(lock, "client-side enqueue returned a blocked lock, sleeping");
 	fwd.fwd_lock = lock;
-	obd = class_exp2obd(lock->l_conn_export);
-
-	/* if this is a local lock, there is no import */
-	if (obd)
-		imp = obd->u.cli.cl_import;
-
-	if (imp) {
-		spin_lock(&imp->imp_lock);
-		fwd.fwd_generation = imp->imp_generation;
-		spin_unlock(&imp->imp_lock);
-	}
-
 	lwi = LWI_TIMEOUT_INTR(0, NULL, ldlm_flock_interrupted_wait, &fwd);
 
 	/* Go to sleep until the lock is granted. */

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable.
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (4 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' " NeilBrown
@ 2017-10-23  0:53 ` NeilBrown
  2017-10-27  9:33   ` Dilger, Andreas
  2017-10-23  0:53 ` [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock() NeilBrown
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 22+ messages in thread
From: NeilBrown @ 2017-10-23  0:53 UTC (permalink / raw)
  To: Oleg Drokin, James Simmons, Andreas Dilger, Greg Kroah-Hartman
  Cc: lustre-devel, linux-kernel

Now that the code has been simplified, 'ownlocks' is not
necessary.

The loop which sets it exits with 'lock' having the same value as
'ownlocks', or pointing to the head of the list if ownlocks is NULL.

The current code then tests ownlocks and sets 'lock' to exactly the
value that it currently has.

So discard 'ownlocks'.

Also remove unnecessary initialization of 'lock'.
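
A small sketch of the guarantee this relies on (toy types, not the
lustre code): when list_for_each_entry() runs to completion without a
break, its cursor is left as list_entry(head, type, member), which is
exactly the value the deleted branch computed by hand.

#include <linux/list.h>

struct toy_lock {
	int owner;
	struct list_head link;
};

static struct toy_lock *owner_run_start(struct list_head *head, int owner)
{
	struct toy_lock *lock;

	list_for_each_entry(lock, head, link)
		if (lock->owner == owner)
			break;		/* first lock of this owner */

	/* If nothing matched, 'lock' is list_entry(head, ...): it must not
	 * be dereferenced, but &lock->link == head, so a subsequent
	 * list_for_each_entry_safe_from(lock, ...) terminates immediately.
	 * No NULL check or explicit head cursor is needed.
	 */
	return lock;
}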

Signed-off-by: NeilBrown <neilb@suse.com>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index 0bf6dce1c5b1..774d8667769a 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -115,8 +115,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 	struct ldlm_resource *res = req->l_resource;
 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
 	struct ldlm_lock *tmp;
-	struct ldlm_lock *ownlocks = NULL;
-	struct ldlm_lock *lock = NULL;
+	struct ldlm_lock *lock;
 	struct ldlm_lock *new = req;
 	struct ldlm_lock *new2 = NULL;
 	enum ldlm_mode mode = req->l_req_mode;
@@ -140,22 +139,14 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
 	/* This loop determines where this processes locks start
 	 * in the resource lr_granted list.
 	 */
-	list_for_each_entry(lock, &res->lr_granted, l_res_link) {
-		if (ldlm_same_flock_owner(lock, req)) {
-			ownlocks = lock;
+	list_for_each_entry(lock, &res->lr_granted, l_res_link)
+		if (ldlm_same_flock_owner(lock, req))
 			break;
-		}
-	}
 
 	/* Scan the locks owned by this process to find the insertion point
 	 * (as locks are ordered), and to handle overlaps.
 	 * We may have to merge or split existing locks.
 	 */
-	if (ownlocks)
-		lock = ownlocks;
-	else
-		lock = list_entry(&res->lr_granted,
-				  struct ldlm_lock, l_res_link);
 	list_for_each_entry_safe_from(lock, tmp, &res->lr_granted, l_res_link) {
 
 		if (!ldlm_same_flock_owner(lock, new))

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c
  2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
                   ` (8 preceding siblings ...)
  2017-10-23  0:53 ` [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy() NeilBrown
@ 2017-10-24 22:20 ` James Simmons
  2017-10-25 21:55   ` NeilBrown
  9 siblings, 1 reply; 22+ messages in thread
From: James Simmons @ 2017-10-24 22:20 UTC (permalink / raw)
  To: NeilBrown
  Cc: Oleg Drokin, Andreas Dilger, Greg Kroah-Hartman, lustre-devel,
	linux-kernel


> This file contains quite a bit of dead code and unused variables.
> This patch series cleans it up in various ways.
> It should not change behaviour at all, just code
> readability/maintainability.
> 
> I sent these back in July but got no response, possibly because they
> were included with other patches which caused a distraction.
> So here they are by themselves.

Thanks for separating them out. I will give them a spin.

> Thanks,
> NeilBrown
> 
> 
> ---
> 
> NeilBrown (9):
>       staging: lustre: ldlm: remove 'first_enq' arg from ldlm_process_flock_lock()
>       staging: lustre: ldlm: remove unused 'work_list' arg from ldlm_process_flock_lock()
>       staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock()
>       staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock()
>       staging: lustre: ldlm: remove unused 'overlaps' variable
>       staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy()
>       staging: lustre: ldlm: tidy list walking in ldlm_flock()
>       staging: lustre: ldlm: remove unnecessary 'ownlocks' variable.
>       staging: lustre: ldlm: remove unused field 'fwd_generation'
> 
> 
>  drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |  208 +++++------------------
>  1 file changed, 42 insertions(+), 166 deletions(-)
> 
> --
> Signature
> 
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c
  2017-10-24 22:20 ` [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c James Simmons
@ 2017-10-25 21:55   ` NeilBrown
  0 siblings, 0 replies; 22+ messages in thread
From: NeilBrown @ 2017-10-25 21:55 UTC (permalink / raw)
  To: James Simmons
  Cc: Oleg Drokin, Andreas Dilger, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Tue, Oct 24 2017, James Simmons wrote:

>> This file contains quite a bit of dead code and unused variables.
>> This patch series cleans it up in various ways.
>> It should not change behaviour at all, just code
>> readability/maintainability.
>> 
>> I sent these back in July but got no response, possibly because they
>> were included with other patches which caused a distraction.
>> So here they are by themselves.
>
> Thanks for separating them out. I will give them a spin.
>

Thanks.  I have a growing series of cleanups (43 at present, including
these and the namei/dcache patches).  Is posting them here the best
approach?  I assume I should keep them in well-defined groups.  Should I
wait for one set to be merged before sending more, or should I just send
them when I think they are ready and you (or others) will look at them
as time permits?

Thanks,
NeilBrown

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 ` [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' " NeilBrown
@ 2017-10-27  9:08   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:08 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> It is only ever set to '1', so we can just assume that and remove the code.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   15 ++-------------
> 1 file changed, 2 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index cb826e9e840e..f719dc05e1ea 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -121,15 +121,9 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
>  * It is also responsible for splitting a lock if a portion of the lock
>  * is released.
>  *
> - * If \a first_enq is 0 (ie, called from ldlm_reprocess_queue):
> - *   - blocking ASTs have already been sent
> - *
> - * If \a first_enq is 1 (ie, called from ldlm_lock_enqueue):
> - *   - blocking ASTs have not been sent yet, so list of conflicting locks
> - *     would be collected and ASTs sent.
>  */
> static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> -				   int first_enq, enum ldlm_error *err,
> +				   enum ldlm_error *err,
> 				   struct list_head *work_list)
> {
> 	struct ldlm_resource *res = req->l_resource;
> @@ -197,11 +191,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> 			if (!ldlm_flocks_overlap(lock, req))
> 				continue;
> 
> -			if (!first_enq) {
> -				reprocess_failed = 1;
> -				continue;
> -			}
> -
> 			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
> 				ldlm_flock_destroy(req, mode, *flags);
> 				*err = -EAGAIN;
> @@ -605,7 +594,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 		/* We need to reprocess the lock to do merges or splits
> 		 * with existing locks owned by this process.
> 		 */
> -		ldlm_process_flock_lock(lock, &noreproc, 1, &err, NULL);
> +		ldlm_process_flock_lock(lock, &noreproc, &err, NULL);
> 	}
> 	unlock_res_and_lock(lock);
> 	return rc;
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 ` [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' " NeilBrown
@ 2017-10-27  9:09   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:09 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> 'work_list' is only set to NULL, and is never used.
> So discard it.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |    5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index f719dc05e1ea..766653b4d8a5 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -123,8 +123,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
>  *
>  */
> static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> -				   enum ldlm_error *err,
> -				   struct list_head *work_list)
> +				   enum ldlm_error *err)
> {
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> @@ -594,7 +593,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 		/* We need to reprocess the lock to do merges or splits
> 		 * with existing locks owned by this process.
> 		 */
> -		ldlm_process_flock_lock(lock, &noreproc, &err, NULL);
> +		ldlm_process_flock_lock(lock, &noreproc, &err);
> 	}
> 	unlock_res_and_lock(lock);
> 	return rc;
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock()
  2017-10-23  0:53 ` [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock() NeilBrown
@ 2017-10-27  9:11   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:11 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> This arg is used to return an error code, but the returned code is never
> looked at.  So there is no point returning it.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   10 ++--------
> 1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 766653b4d8a5..05e6b67b0e72 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -122,8 +122,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
>  * is released.
>  *
>  */
> -static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> -				   enum ldlm_error *err)
> +static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> {
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> @@ -145,8 +144,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> 	       req->l_policy_data.l_flock.start,
> 	       req->l_policy_data.l_flock.end);
> 
> -	*err = ELDLM_OK;
> -
> 	/* No blocking ASTs are sent to the clients for
> 	 * Posix file & record locks
> 	 */
> @@ -192,7 +189,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> 
> 			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
> 				ldlm_flock_destroy(req, mode, *flags);
> -				*err = -EAGAIN;
> 				return LDLM_ITER_STOP;
> 			}
> 
> @@ -330,7 +326,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags,
> 			if (IS_ERR(new2)) {
> 				ldlm_flock_destroy(req, lock->l_granted_mode,
> 						   *flags);
> -				*err = PTR_ERR(new2);
> 				return LDLM_ITER_STOP;
> 			}
> 			goto reprocess;
> @@ -440,7 +435,6 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 	struct obd_import	      *imp = NULL;
> 	struct ldlm_flock_wait_data     fwd;
> 	struct l_wait_info	      lwi;
> -	enum ldlm_error		    err;
> 	int			     rc = 0;
> 
> 	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT2, 4);
> @@ -593,7 +587,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 		/* We need to reprocess the lock to do merges or splits
> 		 * with existing locks owned by this process.
> 		 */
> -		ldlm_process_flock_lock(lock, &noreproc, &err);
> +		ldlm_process_flock_lock(lock, &noreproc);
> 	}
> 	unlock_res_and_lock(lock);
> 	return rc;
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock()
  2017-10-23  0:53 ` [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock() NeilBrown
@ 2017-10-27  9:19   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:19 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> This is only ever set to LDLM_FL_WAIT_NOREPROC, so we can remove the arg
> and discard any code that is only run when it doesn't have that value.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |  110 ++++-------------------
> 1 file changed, 21 insertions(+), 89 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 05e6b67b0e72..2d1fa2b33129 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -122,7 +122,7 @@ ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
>  * is released.
>  *
>  */
> -static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> +static int ldlm_process_flock_lock(struct ldlm_lock *req)
> {
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> @@ -138,8 +138,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 	const struct ldlm_callback_suite null_cbs = { };
> 
> 	CDEBUG(D_DLMTRACE,
> -	       "flags %#llx owner %llu pid %u mode %u start %llu end %llu\n",
> -	       *flags, new->l_policy_data.l_flock.owner,
> +	       "owner %llu pid %u mode %u start %llu end %llu\n",
> +	       new->l_policy_data.l_flock.owner,
> 	       new->l_policy_data.l_flock.pid, mode,
> 	       req->l_policy_data.l_flock.start,
> 	       req->l_policy_data.l_flock.end);
> @@ -150,74 +150,16 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 	req->l_blocking_ast = NULL;
> 
> reprocess:
> -	if ((*flags == LDLM_FL_WAIT_NOREPROC) || (mode == LCK_NL)) {
> -		/* This loop determines where this processes locks start
> -		 * in the resource lr_granted list.
> -		 */
> -		list_for_each(tmp, &res->lr_granted) {
> -			lock = list_entry(tmp, struct ldlm_lock,
> -					  l_res_link);
> -			if (ldlm_same_flock_owner(lock, req)) {
> -				ownlocks = tmp;
> -				break;
> -			}
> -		}
> -	} else {
> -		int reprocess_failed = 0;
> -
> -		lockmode_verify(mode);
> -
> -		/* This loop determines if there are existing locks
> -		 * that conflict with the new lock request.
> -		 */
> -		list_for_each(tmp, &res->lr_granted) {
> -			lock = list_entry(tmp, struct ldlm_lock,
> -					  l_res_link);
> -
> -			if (ldlm_same_flock_owner(lock, req)) {
> -				if (!ownlocks)
> -					ownlocks = tmp;
> -				continue;
> -			}
> -
> -			/* locks are compatible, overlap doesn't matter */
> -			if (lockmode_compat(lock->l_granted_mode, mode))
> -				continue;
> -
> -			if (!ldlm_flocks_overlap(lock, req))
> -				continue;
> -
> -			if (*flags & LDLM_FL_BLOCK_NOWAIT) {
> -				ldlm_flock_destroy(req, mode, *flags);
> -				return LDLM_ITER_STOP;
> -			}
> -
> -			if (*flags & LDLM_FL_TEST_LOCK) {
> -				ldlm_flock_destroy(req, mode, *flags);
> -				req->l_req_mode = lock->l_granted_mode;
> -				req->l_policy_data.l_flock.pid =
> -					lock->l_policy_data.l_flock.pid;
> -				req->l_policy_data.l_flock.start =
> -					lock->l_policy_data.l_flock.start;
> -				req->l_policy_data.l_flock.end =
> -					lock->l_policy_data.l_flock.end;
> -				*flags |= LDLM_FL_LOCK_CHANGED;
> -				return LDLM_ITER_STOP;
> -			}
> -
> -			ldlm_resource_add_lock(res, &res->lr_waiting, req);
> -			*flags |= LDLM_FL_BLOCK_GRANTED;
> -			return LDLM_ITER_STOP;
> +	/* This loop determines where this processes locks start
> +	 * in the resource lr_granted list.
> +	 */
> +	list_for_each(tmp, &res->lr_granted) {
> +		lock = list_entry(tmp, struct ldlm_lock,
> +				  l_res_link);
> +		if (ldlm_same_flock_owner(lock, req)) {
> +			ownlocks = tmp;
> +			break;
> 		}
> -		if (reprocess_failed)
> -			return LDLM_ITER_CONTINUE;
> -	}
> -
> -	if (*flags & LDLM_FL_TEST_LOCK) {
> -		ldlm_flock_destroy(req, mode, *flags);
> -		req->l_req_mode = LCK_NL;
> -		*flags |= LDLM_FL_LOCK_CHANGED;
> -		return LDLM_ITER_STOP;
> 	}
> 
> 	/* Scan the locks owned by this process that overlap this request.
> @@ -267,7 +209,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 			}
> 
> 			if (added) {
> -				ldlm_flock_destroy(lock, mode, *flags);
> +				ldlm_flock_destroy(lock, mode,
> +						   LDLM_FL_WAIT_NOREPROC);
> 			} else {
> 				new = lock;
> 				added = 1;
> @@ -293,7 +236,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 					new->l_policy_data.l_flock.end + 1;
> 				break;
> 			}
> -			ldlm_flock_destroy(lock, lock->l_req_mode, *flags);
> +			ldlm_flock_destroy(lock, lock->l_req_mode,
> +					   LDLM_FL_WAIT_NOREPROC);
> 			continue;
> 		}
> 		if (new->l_policy_data.l_flock.end >=
> @@ -325,7 +269,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 			lock_res_and_lock(req);
> 			if (IS_ERR(new2)) {
> 				ldlm_flock_destroy(req, lock->l_granted_mode,
> -						   *flags);
> +						   LDLM_FL_WAIT_NOREPROC);
> 				return LDLM_ITER_STOP;
> 			}
> 			goto reprocess;
> @@ -354,9 +298,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 					     &new2->l_remote_handle,
> 					     &new2->l_exp_hash);
> 		}
> -		if (*flags == LDLM_FL_WAIT_NOREPROC)
> -			ldlm_lock_addref_internal_nolock(new2,
> -							 lock->l_granted_mode);
> +		ldlm_lock_addref_internal_nolock(new2,
> +						 lock->l_granted_mode);
> 
> 		/* insert new2 at lock */
> 		ldlm_resource_add_lock(res, ownlocks, new2);
> @@ -377,22 +320,13 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req, __u64 *flags)
> 		ldlm_resource_add_lock(res, ownlocks, req);
> 	}
> 
> -	if (*flags != LDLM_FL_WAIT_NOREPROC) {
> -		/* The only one possible case for client-side calls flock
> -		 * policy function is ldlm_flock_completion_ast inside which
> -		 * carries LDLM_FL_WAIT_NOREPROC flag.
> -		 */
> -		CERROR("Illegal parameter for client-side-only module.\n");
> -		LBUG();
> -	}
> -
> 	/* In case we're reprocessing the requested lock we can't destroy
> 	 * it until after calling ldlm_add_ast_work_item() above so that laawi()
> 	 * can bump the reference count on \a req. Otherwise \a req
> 	 * could be freed before the completion AST can be sent.
> 	 */
> 	if (added)
> -		ldlm_flock_destroy(req, mode, *flags);
> +		ldlm_flock_destroy(req, mode, LDLM_FL_WAIT_NOREPROC);
> 
> 	ldlm_resource_dump(D_INFO, res);
> 	return LDLM_ITER_CONTINUE;
> @@ -582,12 +516,10 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 		getlk->fl_start = (loff_t)lock->l_policy_data.l_flock.start;
> 		getlk->fl_end = (loff_t)lock->l_policy_data.l_flock.end;
> 	} else {
> -		__u64 noreproc = LDLM_FL_WAIT_NOREPROC;
> -
> 		/* We need to reprocess the lock to do merges or splits
> 		 * with existing locks owned by this process.
> 		 */
> -		ldlm_process_flock_lock(lock, &noreproc);
> +		ldlm_process_flock_lock(lock);
> 	}
> 	unlock_res_and_lock(lock);
> 	return rc;
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable
  2017-10-23  0:53 ` [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable NeilBrown
@ 2017-10-27  9:20   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:20 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> 'overlaps' is never used, only incremented.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |    3 ---
> 1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 2d1fa2b33129..d5a5742a1171 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -133,7 +133,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	struct ldlm_lock *new2 = NULL;
> 	enum ldlm_mode mode = req->l_req_mode;
> 	int added = (mode == LCK_NL);
> -	int overlaps = 0;
> 	int splitted = 0;
> 	const struct ldlm_callback_suite null_cbs = { };
> 
> @@ -226,8 +225,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 		    lock->l_policy_data.l_flock.start)
> 			break;
> 
> -		++overlaps;
> -
> 		if (new->l_policy_data.l_flock.start <=
> 		    lock->l_policy_data.l_flock.start) {
> 			if (new->l_policy_data.l_flock.end <
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy()
  2017-10-23  0:53 ` [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy() NeilBrown
@ 2017-10-27  9:23   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:23 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> The only value ever passed is LDLM_FL_WAIT_NOREPROC, so assume that
> instead of passing it.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   36 ++++++++++-------------
> 1 file changed, 16 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index d5a5742a1171..1bf56892fcf5 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -88,24 +88,23 @@ ldlm_flocks_overlap(struct ldlm_lock *lock, struct ldlm_lock *new)
> }
> 
> static inline void
> -ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode, __u64 flags)
> +ldlm_flock_destroy(struct ldlm_lock *lock, enum ldlm_mode mode)
> {
> -	LDLM_DEBUG(lock, "%s(mode: %d, flags: 0x%llx)",
> -		   __func__, mode, flags);
> +	LDLM_DEBUG(lock, "%s(mode: %d)",
> +		   __func__, mode);
> 
> 	/* Safe to not lock here, since it should be empty anyway */
> 	LASSERT(hlist_unhashed(&lock->l_exp_flock_hash));
> 
> 	list_del_init(&lock->l_res_link);
> -	if (flags == LDLM_FL_WAIT_NOREPROC) {
> -		/* client side - set a flag to prevent sending a CANCEL */
> -		lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_CBPENDING;
> 
> -		/* when reaching here, it is under lock_res_and_lock(). Thus,
> -		 * need call the nolock version of ldlm_lock_decref_internal
> -		 */
> -		ldlm_lock_decref_internal_nolock(lock, mode);
> -	}
> +	/* client side - set a flag to prevent sending a CANCEL */
> +	lock->l_flags |= LDLM_FL_LOCAL_ONLY | LDLM_FL_CBPENDING;
> +
> +	/* when reaching here, it is under lock_res_and_lock(). Thus,
> +	 * need call the nolock version of ldlm_lock_decref_internal
> +	 */
> +	ldlm_lock_decref_internal_nolock(lock, mode);
> 
> 	ldlm_lock_destroy_nolock(lock);
> }
> @@ -208,8 +207,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 			}
> 
> 			if (added) {
> -				ldlm_flock_destroy(lock, mode,
> -						   LDLM_FL_WAIT_NOREPROC);
> +				ldlm_flock_destroy(lock, mode);
> 			} else {
> 				new = lock;
> 				added = 1;
> @@ -233,8 +231,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 					new->l_policy_data.l_flock.end + 1;
> 				break;
> 			}
> -			ldlm_flock_destroy(lock, lock->l_req_mode,
> -					   LDLM_FL_WAIT_NOREPROC);
> +			ldlm_flock_destroy(lock, lock->l_req_mode);
> 			continue;
> 		}
> 		if (new->l_policy_data.l_flock.end >=
> @@ -265,8 +262,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 						NULL, 0, LVB_T_NONE);
> 			lock_res_and_lock(req);
> 			if (IS_ERR(new2)) {
> -				ldlm_flock_destroy(req, lock->l_granted_mode,
> -						   LDLM_FL_WAIT_NOREPROC);
> +				ldlm_flock_destroy(req, lock->l_granted_mode);
> 				return LDLM_ITER_STOP;
> 			}
> 			goto reprocess;
> @@ -323,7 +319,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	 * could be freed before the completion AST can be sent.
> 	 */
> 	if (added)
> -		ldlm_flock_destroy(req, mode, LDLM_FL_WAIT_NOREPROC);
> +		ldlm_flock_destroy(req, mode);
> 
> 	ldlm_resource_dump(D_INFO, res);
> 	return LDLM_ITER_CONTINUE;
> @@ -477,7 +473,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 			LDLM_DEBUG(lock, "client-side enqueue deadlock received");
> 			rc = -EDEADLK;
> 		}
> -		ldlm_flock_destroy(lock, mode, LDLM_FL_WAIT_NOREPROC);
> +		ldlm_flock_destroy(lock, mode);
> 		unlock_res_and_lock(lock);
> 
> 		/* Need to wake up the waiter if we were evicted */
> @@ -498,7 +494,7 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 		 * in the lock changes we can decref the appropriate refcount.
> 		 */
> 		LASSERT(ldlm_is_test_lock(lock));
> -		ldlm_flock_destroy(lock, getlk->fl_type, LDLM_FL_WAIT_NOREPROC);
> +		ldlm_flock_destroy(lock, getlk->fl_type);
> 		switch (lock->l_granted_mode) {
> 		case LCK_PR:
> 			getlk->fl_type = F_RDLCK;
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock()
  2017-10-23  0:53 ` [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock() NeilBrown
@ 2017-10-27  9:32   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:32 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> Use list_for_each_entry variants to
> avoid the explicit list_entry() calls.
> This allows us to use list_for_each_entry_safe_from()
> instead of adding a local list-walking macro.
> 
> Also improve some comments so that it is more obvious
> that the locks are sorted per-owner and that we need
> to find the insertion point.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

The conversion looks a bit tricky, but appears to be correct.

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
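
To spell out the tricky part for anyone following along: the removed
list_for_remaining_safe() walked raw list_heads and needed an explicit
list_entry() at the top of every pass, while the kernel's
list_for_each_entry_safe_from() keeps a typed cursor, continues from
that cursor's current value, and caches the next entry so the current
one may be unlinked mid-loop.  A minimal sketch of the two shapes;
'struct item' and friends are invented stand-ins, not the ldlm types:

    #include <linux/list.h>
    #include <linux/slab.h>

    struct item {
            int owner;
            struct list_head link;
    };

    /* Old shape: raw cursor plus a list_entry() on each pass. */
    static void tail_old(struct list_head *head, struct list_head *pos,
                         int key)
    {
            struct list_head *n;

            for (n = pos->next; pos != head; pos = n, n = pos->next) {
                    struct item *it = list_entry(pos, struct item, link);

                    if (it->owner == key) {
                            list_del(pos);  /* safe: 'n' saved up front */
                            kfree(it);
                    }
            }
    }

    /* New shape: typed cursor continuing from wherever it points now. */
    static void tail_new(struct item *it, struct list_head *head, int key)
    {
            struct item *tmp;

            list_for_each_entry_safe_from(it, tmp, head, link) {
                    if (it->owner == key) {
                            list_del(&it->link);  /* 'tmp' holds the next */
                            kfree(it);
                    }
            }
    }

The _from variant is also what sets up the next patch: it is happy to
start from whatever the cursor already points at, including the list
head itself.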

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   45 ++++++++++-------------
> 1 file changed, 19 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 1bf56892fcf5..0bf6dce1c5b1 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -59,17 +59,6 @@
> #include <linux/list.h>
> #include "ldlm_internal.h"
> 
> -/**
> - * list_for_remaining_safe - iterate over the remaining entries in a list
> - *	      and safeguard against removal of a list entry.
> - * \param pos   the &struct list_head to use as a loop counter. pos MUST
> - *	      have been initialized prior to using it in this macro.
> - * \param n     another &struct list_head to use as temporary storage
> - * \param head  the head for your list.
> - */
> -#define list_for_remaining_safe(pos, n, head) \
> -	for (n = pos->next; pos != (head); pos = n, n = pos->next)
> -
> static inline int
> ldlm_same_flock_owner(struct ldlm_lock *lock, struct ldlm_lock *new)
> {
> @@ -125,8 +114,8 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> {
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> -	struct list_head *tmp;
> -	struct list_head *ownlocks = NULL;
> +	struct ldlm_lock *tmp;
> +	struct ldlm_lock *ownlocks = NULL;
> 	struct ldlm_lock *lock = NULL;
> 	struct ldlm_lock *new = req;
> 	struct ldlm_lock *new2 = NULL;
> @@ -151,23 +140,23 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	/* This loop determines where this processes locks start
> 	 * in the resource lr_granted list.
> 	 */
> -	list_for_each(tmp, &res->lr_granted) {
> -		lock = list_entry(tmp, struct ldlm_lock,
> -				  l_res_link);
> +	list_for_each_entry(lock, &res->lr_granted, l_res_link) {
> 		if (ldlm_same_flock_owner(lock, req)) {
> -			ownlocks = tmp;
> +			ownlocks = lock;
> 			break;
> 		}
> 	}
> 
> -	/* Scan the locks owned by this process that overlap this request.
> +	/* Scan the locks owned by this process to find the insertion point
> +	 * (as locks are ordered), and to handle overlaps.
> 	 * We may have to merge or split existing locks.
> 	 */
> -	if (!ownlocks)
> -		ownlocks = &res->lr_granted;
> -
> -	list_for_remaining_safe(ownlocks, tmp, &res->lr_granted) {
> -		lock = list_entry(ownlocks, struct ldlm_lock, l_res_link);
> +	if (ownlocks)
> +		lock = ownlocks;
> +	else
> +		lock = list_entry(&res->lr_granted,
> +				  struct ldlm_lock, l_res_link);
> +	list_for_each_entry_safe_from(lock, tmp, &res->lr_granted, l_res_link) {
> 
> 		if (!ldlm_same_flock_owner(lock, new))
> 			break;
> @@ -295,7 +284,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 						 lock->l_granted_mode);
> 
> 		/* insert new2 at lock */
> -		ldlm_resource_add_lock(res, ownlocks, new2);
> +		ldlm_resource_add_lock(res, &lock->l_res_link, new2);
> 		LDLM_LOCK_RELEASE(new2);
> 		break;
> 	}
> @@ -309,8 +298,12 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 
> 	if (!added) {
> 		list_del_init(&req->l_res_link);
> -		/* insert new lock before ownlocks in list. */
> -		ldlm_resource_add_lock(res, ownlocks, req);
> +		/* insert new lock before "lock", which might be the
> +		 * next lock for this owner, or might be the first
> +		 * lock for the next owner, or might not be a lock at
> +		 * all, but instead points at the head of the list
> +		 */
> +		ldlm_resource_add_lock(res, &lock->l_res_link, req);
> 	}
> 
> 	/* In case we're reprocessing the requested lock we can't destroy
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable.
  2017-10-23  0:53 ` [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable NeilBrown
@ 2017-10-27  9:33   ` Dilger, Andreas
  0 siblings, 0 replies; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:33 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, lustre-devel,
	linux-kernel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> Now that the code has been simplified, 'ownlocks' is not
> necessary.
> 
> The loop which sets it exits with 'lock' having the same value as
> 'ownlocks', or pointing to the head of the list if ownlocks is NULL.
> 
> The current code then tests ownlocks and sets 'lock' to exactly the
> value that it currently has.
> 
> So discard 'ownlocks'.
> 
> Also remove unnecessary initialization of 'lock'.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
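
Making the property explicit, since it is what justifies the change:
when list_for_each_entry() runs to completion without a 'break', it
leaves the cursor pointing at the list head itself cast to the entry
type, which is exactly the value the removed 'else' branch computed by
hand with list_entry().  That value is in turn a legal starting point
for list_for_each_entry_safe_from(), whose loop condition compares the
cursor's list_head against 'head' before ever entering the body.  A
small sketch, with invented names as before:

    #include <linux/list.h>

    struct item {
            int owner;
            struct list_head link;
    };

    static void drop_owned(struct list_head *head, int key)
    {
            struct item *pos, *tmp;

            /* Ends with 'pos' at the first match, or at the head
             * disguised as an entry if nothing matched.
             */
            list_for_each_entry(pos, head, link)
                    if (pos->owner == key)
                            break;

            /* Either value works: if the scan found nothing, the loop
             * condition fails at once and the body never runs.
             */
            list_for_each_entry_safe_from(pos, tmp, head, link) {
                    if (pos->owner != key)
                            break;
                    list_del_init(&pos->link);
            }
    }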

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   15 +++------------
> 1 file changed, 3 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 0bf6dce1c5b1..774d8667769a 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -115,8 +115,7 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	struct ldlm_resource *res = req->l_resource;
> 	struct ldlm_namespace *ns = ldlm_res_to_ns(res);
> 	struct ldlm_lock *tmp;
> -	struct ldlm_lock *ownlocks = NULL;
> -	struct ldlm_lock *lock = NULL;
> +	struct ldlm_lock *lock;
> 	struct ldlm_lock *new = req;
> 	struct ldlm_lock *new2 = NULL;
> 	enum ldlm_mode mode = req->l_req_mode;
> @@ -140,22 +139,14 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 	/* This loop determines where this processes locks start
> 	 * in the resource lr_granted list.
> 	 */
> -	list_for_each_entry(lock, &res->lr_granted, l_res_link) {
> -		if (ldlm_same_flock_owner(lock, req)) {
> -			ownlocks = lock;
> +	list_for_each_entry(lock, &res->lr_granted, l_res_link)
> +		if (ldlm_same_flock_owner(lock, req))
> 			break;
> -		}
> -	}
> 
> 	/* Scan the locks owned by this process to find the insertion point
> 	 * (as locks are ordered), and to handle overlaps.
> 	 * We may have to merge or split existing locks.
> 	 */
> -	if (ownlocks)
> -		lock = ownlocks;
> -	else
> -		lock = list_entry(&res->lr_granted,
> -				  struct ldlm_lock, l_res_link);
> 	list_for_each_entry_safe_from(lock, tmp, &res->lr_granted, l_res_link) {
> 
> 		if (!ldlm_same_flock_owner(lock, new))
> 
> 

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [lustre-devel] [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation'
  2017-10-23  0:53 ` [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation' NeilBrown
@ 2017-10-27  9:39   ` Dilger, Andreas
  2017-10-27 11:10     ` NeilBrown
  0 siblings, 1 reply; 22+ messages in thread
From: Dilger, Andreas @ 2017-10-27  9:39 UTC (permalink / raw)
  To: NeilBrown
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, linux-kernel,
	lustre-devel

On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
> 
> With this field gone, we don't need local variables 'imp' or 'obd'
> any more.
> 
> Signed-off-by: NeilBrown <neilb@suse.com>

Thanks for the patches.

Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
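
For the record, the smell this patch removes: 'fwd_generation' was
only ever written, never read back, so the 'obd'/'imp' locals and the
imp_lock critical section existed purely to feed a dead store.  The
same shape in miniature, with invented names (struct ctx and friends
are stand-ins, not Lustre types):

    #include <linux/spinlock.h>
    #include <linux/sched.h>

    struct ctx {
            spinlock_t lock;
            int generation;
    };

    struct waiter {
            struct task_struct *task;
            int gen;                /* written below, never read anywhere */
    };

    static void waiter_init(struct waiter *w, struct ctx *c)
    {
            w->task = current;
            spin_lock(&c->lock);
            w->gen = c->generation; /* dead store: delete 'gen' and this
                                     * whole critical section goes with it
                                     */
            spin_unlock(&c->lock);
    }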

> ---
> drivers/staging/lustre/lustre/ldlm/ldlm_flock.c |   21 +++------------------
> 1 file changed, 3 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> index 774d8667769a..9c0e9cd00000 100644
> --- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> +++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
> @@ -311,7 +311,6 @@ static int ldlm_process_flock_lock(struct ldlm_lock *req)
> 
> struct ldlm_flock_wait_data {
> 	struct ldlm_lock *fwd_lock;
> -	int	       fwd_generation;
> };
> 
> static void
> @@ -342,11 +341,9 @@ int
> ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> {
> 	struct file_lock		*getlk = lock->l_ast_data;
> -	struct obd_device	      *obd;
> -	struct obd_import	      *imp = NULL;
> -	struct ldlm_flock_wait_data     fwd;
> -	struct l_wait_info	      lwi;
> -	int			     rc = 0;
> +	struct ldlm_flock_wait_data	fwd;
> +	struct l_wait_info		lwi;
> +	int				rc = 0;
> 
> 	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT2, 4);
> 	if (OBD_FAIL_PRECHECK(OBD_FAIL_LDLM_CP_CB_WAIT3)) {
> @@ -374,18 +371,6 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
> 
> 	LDLM_DEBUG(lock, "client-side enqueue returned a blocked lock, sleeping");
> 	fwd.fwd_lock = lock;
> -	obd = class_exp2obd(lock->l_conn_export);
> -
> -	/* if this is a local lock, there is no import */
> -	if (obd)
> -		imp = obd->u.cli.cl_import;
> -
> -	if (imp) {
> -		spin_lock(&imp->imp_lock);
> -		fwd.fwd_generation = imp->imp_generation;
> -		spin_unlock(&imp->imp_lock);
> -	}
> -
> 	lwi = LWI_TIMEOUT_INTR(0, NULL, ldlm_flock_interrupted_wait, &fwd);
> 
> 	/* Go to sleep until the lock is granted. */
> 
> 
> _______________________________________________
> lustre-devel mailing list
> lustre-devel@lists.lustre.org
> http://lists.lustre.org/listinfo.cgi/lustre-devel-lustre.org

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel Corporation

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [lustre-devel] [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation'
  2017-10-27  9:39   ` [lustre-devel] " Dilger, Andreas
@ 2017-10-27 11:10     ` NeilBrown
  0 siblings, 0 replies; 22+ messages in thread
From: NeilBrown @ 2017-10-27 11:10 UTC (permalink / raw)
  To: Dilger, Andreas
  Cc: Drokin, Oleg, James Simmons, Greg Kroah-Hartman, linux-kernel,
	lustre-devel

On Fri, Oct 27 2017, Dilger, Andreas wrote:

> On Oct 22, 2017, at 18:53, NeilBrown <neilb@suse.com> wrote:
>> 
>> With this field gone, we don't need local variables 'imp' or 'obd'
>> any more.
>> 
>> Signed-off-by: NeilBrown <neilb@suse.com>
>
> Thanks for the patches.

And thanks a lot for the review!

NeilBrown

>
> Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2017-10-27 11:10 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-23  0:53 [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c NeilBrown
2017-10-23  0:53 ` [PATCH 9/9] staging: lustre: ldlm: remove unused field 'fwd_generation' NeilBrown
2017-10-27  9:39   ` [lustre-devel] " Dilger, Andreas
2017-10-27 11:10     ` NeilBrown
2017-10-23  0:53 ` [PATCH 5/9] staging: lustre: ldlm: remove unused 'overlaps' variable NeilBrown
2017-10-27  9:20   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 4/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_process_flock_lock() NeilBrown
2017-10-27  9:19   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 2/9] staging: lustre: ldlm: remove unused 'work_list' " NeilBrown
2017-10-27  9:09   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 1/9] staging: lustre: ldlm: remove 'first_enq' " NeilBrown
2017-10-27  9:08   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 8/9] staging: lustre: ldlm: remove unnecessary 'ownlocks' variable NeilBrown
2017-10-27  9:33   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 3/9] staging: lustre: ldlm: remove unneeded 'err' arg to ldlm_process_flock_lock() NeilBrown
2017-10-27  9:11   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 7/9] staging: lustre: ldlm: tidy list walking in ldlm_flock() NeilBrown
2017-10-27  9:32   ` Dilger, Andreas
2017-10-23  0:53 ` [PATCH 6/9] staging: lustre: ldlm: remove 'flags' arg from ldlm_flock_destroy() NeilBrown
2017-10-27  9:23   ` Dilger, Andreas
2017-10-24 22:20 ` [PATCH 0/9] Assorted cleanups for staging/.../lustre/ldlm/ldlm_flock.c James Simmons
2017-10-25 21:55   ` NeilBrown
