* [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8
@ 2017-02-18 21:47 James Simmons
  2017-02-18 21:47 ` [PATCH 01/14] staging: lustre: llite: lower message level for ll_setattr_raw() James Simmons
                   ` (13 more replies)
  0 siblings, 14 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, James Simmons

This patch series is a batch of fixes from the lustre 2.8
release that are missing from the upstream client. The patches
depend on one another and must be applied in the order given.

Alex Zhuravlev (1):
  staging: lustre: llog: limit file size of plain logs

Andreas Dilger (1):
  staging: lustre: llite: remove extraneous export parameter

Bobi Jam (2):
  staging: lustre: llite: lower message level for ll_setattr_raw()
  staging: lustre: llite: omit to update wire data

James Simmons (1):
  staging: lustre: lprocfs: move lprocfs_stats_[un]lock to a source file

Jinshan Xiong (4):
  staging: lustre: osc: remove obsolete asserts
  staging: lustre: lov: cleanup when cl_io_iter_init() fails
  staging: lustre: ldlm: handle ldlm lock cancel race when evicting client.
  staging: lustre: osc: further LRU OSC cleanup after eviction

Niu Yawei (1):
  staging: lustre: ldlm: fix race of starting bl threads

Vitaly Fertman (2):
  staging: lustre: ldlm: reduce ldlm pool recalc window
  staging: lustre: ldlm: disconnect speedup

Yang Sheng (1):
  staging: lustre: lov: trying smaller memory allocations

wang di (1):
  staging: lustre: llog: change lgh_hdr_lock to mutex

 drivers/staging/lustre/lustre/include/cl_object.h  |  13 +-
 .../staging/lustre/lustre/include/lprocfs_status.h | 120 ++------------
 drivers/staging/lustre/lustre/include/lustre_dlm.h |  11 +-
 .../lustre/lustre/include/lustre_dlm_flags.h       |   3 +
 drivers/staging/lustre/lustre/include/lustre_log.h |   2 +-
 .../staging/lustre/lustre/include/obd_support.h    |   1 +
 drivers/staging/lustre/lustre/ldlm/ldlm_internal.h |   5 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     |  47 ++++--
 drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c    | 178 +++++++++++++--------
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     |   7 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_request.c  |  14 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c |   2 +-
 drivers/staging/lustre/lustre/llite/file.c         |  49 +++---
 drivers/staging/lustre/lustre/llite/glimpse.c      |   4 +-
 drivers/staging/lustre/lustre/llite/lcommon_cl.c   |   8 +-
 drivers/staging/lustre/lustre/llite/lcommon_misc.c |   2 +-
 .../staging/lustre/lustre/llite/llite_internal.h   |   2 +-
 drivers/staging/lustre/lustre/llite/llite_lib.c    |  14 +-
 drivers/staging/lustre/lustre/llite/llite_mmap.c   |   4 +-
 drivers/staging/lustre/lustre/llite/lproc_llite.c  |   2 +-
 drivers/staging/lustre/lustre/llite/rw.c           |   2 +-
 drivers/staging/lustre/lustre/llite/vvp_dev.c      |  10 +-
 drivers/staging/lustre/lustre/llite/vvp_io.c       |   1 +
 drivers/staging/lustre/lustre/llite/xattr.c        |   2 +-
 .../staging/lustre/lustre/lov/lov_cl_internal.h    |  41 +++--
 drivers/staging/lustre/lustre/lov/lov_io.c         |  24 +--
 drivers/staging/lustre/lustre/lov/lov_object.c     |   2 +-
 drivers/staging/lustre/lustre/obdclass/cl_object.c |   6 +-
 drivers/staging/lustre/lustre/obdclass/llog.c      |  18 ++-
 .../lustre/lustre/obdclass/lprocfs_status.c        | 111 +++++++++++++
 .../staging/lustre/lustre/obdecho/echo_client.c    |   6 +-
 drivers/staging/lustre/lustre/osc/lproc_osc.c      |   2 +-
 drivers/staging/lustre/lustre/osc/osc_cache.c      |   3 +-
 .../staging/lustre/lustre/osc/osc_cl_internal.h    |   4 +-
 drivers/staging/lustre/lustre/osc/osc_internal.h   |   3 +-
 drivers/staging/lustre/lustre/osc/osc_io.c         |  48 ++----
 drivers/staging/lustre/lustre/osc/osc_lock.c       |  60 ++++---
 drivers/staging/lustre/lustre/osc/osc_object.c     |  12 +-
 drivers/staging/lustre/lustre/osc/osc_page.c       |  77 +++++++--
 drivers/staging/lustre/lustre/osc/osc_request.c    |  12 +-
 40 files changed, 554 insertions(+), 378 deletions(-)

-- 
1.8.3.1

* [PATCH 01/14] staging: lustre: llite: lower message level for ll_setattr_raw()
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 02/14] staging: lustre: llite: omit to update wire data James Simmons
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, Bobi Jam,
	James Simmons

From: Bobi Jam <bobijam.xu@intel.com>

Truncate and write can race with each other, so a file can be
marked modified even though it has not been restored from the
released state. In that case ll_hsm_state_set() is not applicable
to the file and returns an error, which is expected, so lower the
message level for this error.
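
As a rough stand-alone illustration of the idea (not Lustre code;
the names and the verbosity knob below are made up), an expected
failure is reported at a debug level that is normally filtered out,
while real errors keep their loud reporting:

  #include <stdio.h>

  enum { LOG_ERROR, LOG_DEBUG };

  static int verbosity = LOG_ERROR;   /* debug output is filtered out */

  static void log_msg(int level, const char *msg, int rc)
  {
          if (level == LOG_DEBUG && verbosity < LOG_DEBUG)
                  return;             /* expected noise stays quiet */
          fprintf(stderr, "%s: %s rc = %d\n",
                  level == LOG_DEBUG ? "DEBUG" : "ERROR", msg, rc);
  }

  static int set_dirty_flag(int restored)
  {
          /* fails whenever the file was never restored -- expected */
          return restored ? 0 : -1;
  }

  int main(void)
  {
          int rc = set_dirty_flag(0);

          if (rc < 0)
                  /* expected under the truncate/write race: debug */
                  log_msg(LOG_DEBUG, "HSM set dirty failed", rc);
          return 0;
  }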

Signed-off-by: Bobi Jam <bobijam.xu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6817
Reviewed-on: http://review.whamcloud.com/15541
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/llite/llite_lib.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index b229cbc..34422df 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -1513,6 +1513,7 @@ int ll_setattr_raw(struct dentry *dentry, struct iattr *attr, bool hsm_import)
 		 */
 		attr->ia_valid |= MDS_OPEN_OWNEROVERRIDE;
 		op_data->op_bias |= MDS_DATA_MODIFIED;
+		clear_bit(LLIF_DATA_MODIFIED, &lli->lli_flags);
 	}
 
 	rc = ll_md_setattr(dentry, op_data);
@@ -1560,8 +1561,15 @@ int ll_setattr_raw(struct dentry *dentry, struct iattr *attr, bool hsm_import)
 		int rc2;
 
 		rc2 = ll_hsm_state_set(inode, &hss);
+		/*
+		 * truncate and write can happen at the same time, so that
+		 * the file can be set modified even though the file is not
+		 * restored from released state, and ll_hsm_state_set() is
+		 * not applicable for the file, and rc2 < 0 is normal in this
+		 * case.
+		 */
 		if (rc2 < 0)
-			CERROR(DFID "HSM set dirty failed: rc2 = %d\n",
+			CDEBUG(D_INFO, DFID "HSM set dirty failed: rc2 = %d\n",
 			       PFID(ll_inode2fid(inode)), rc2);
 	}
 
-- 
1.8.3.1

* [PATCH 02/14] staging: lustre: llite: omit to update wire data
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
  2017-02-18 21:47 ` [PATCH 01/14] staging: lustre: llite: lower message level for ll_setattr_raw() James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 03/14] staging: lustre: osc: remove obsolete asserts James Simmons
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, Bobi Jam,
	James Simmons

From: Bobi Jam <bobijam.xu@intel.com>

In ll_setattr_raw(), op_data->op_attr is copied from attr before
attr is finished being updated, and op_data->op_attr is never
refreshed afterwards, so the later updates are lost on the wire.
Take the copy after the last update of attr instead.
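
Not part of the fix itself, just a minimal stand-alone sketch of the
ordering bug (struct and field names invented): a copy taken before
the source is finalized silently drops the later updates.

  #include <stdio.h>

  struct attrs   { unsigned int valid; };
  struct op_data { struct attrs op_attr; };

  int main(void)
  {
          struct attrs attr = { .valid = 0x1 };
          struct op_data op = { { 0 } };

          op.op_attr = attr;        /* copy taken too early ...          */
          attr.valid |= 0x2;        /* ... so this update never goes out */
          printf("sent valid=0x%x, wanted 0x%x\n",
                 op.op_attr.valid, attr.valid);

          op.op_attr = attr;        /* the fix: copy after the last update */
          printf("sent valid=0x%x after copying last\n", op.op_attr.valid);
          return 0;
  }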

Signed-off-by: Bobi Jam <bobijam.xu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6813
Reviewed-on: http://review.whamcloud.com/16462
Reviewed-by: Jinshan Xiong <jinshan.xiong@intel.com>
Reviewed-by: Niu Yawei <yawei.niu@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/llite/llite_lib.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/llite_lib.c b/drivers/staging/lustre/lustre/llite/llite_lib.c
index 34422df..973eee6 100644
--- a/drivers/staging/lustre/lustre/llite/llite_lib.c
+++ b/drivers/staging/lustre/lustre/llite/llite_lib.c
@@ -1504,8 +1504,6 @@ int ll_setattr_raw(struct dentry *dentry, struct iattr *attr, bool hsm_import)
 		goto out;
 	}
 
-	op_data->op_attr = *attr;
-
 	if (!hsm_import && attr->ia_valid & ATTR_SIZE) {
 		/*
 		 * If we are changing file size, file content is
@@ -1516,6 +1514,8 @@ int ll_setattr_raw(struct dentry *dentry, struct iattr *attr, bool hsm_import)
 		clear_bit(LLIF_DATA_MODIFIED, &lli->lli_flags);
 	}
 
+	op_data->op_attr = *attr;
+
 	rc = ll_md_setattr(dentry, op_data);
 	if (rc)
 		goto out;
-- 
1.8.3.1

* [PATCH 03/14] staging: lustre: osc: remove obsolete asserts
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
  2017-02-18 21:47 ` [PATCH 01/14] staging: lustre: llite: lower message level for ll_setattr_raw() James Simmons
  2017-02-18 21:47 ` [PATCH 02/14] staging: lustre: llite: omit to update wire data James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 04/14] staging: lustre: lov: cleanup when cl_io_iter_init() fails James Simmons
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Jinshan Xiong, James Simmons

From: Jinshan Xiong <jinshan.xiong@intel.com>

Remove the no longer needed assert from osc_cache_truncate_start().
The assertion in osc_object_prune() would become faulty with the
upcoming changes: pages that are still being freed may remain in the
object's radix tree when osc_object_prune() runs, which would trip
the (osc->oo_npages == 0) assertion. Removing it prevents that
problem.
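
As a tiny stand-alone illustration (invented names, not the osc code)
of why such an assertion is unsafe: when a release happens in two
steps, checking the page count at the wrong moment trips the assert
even though nothing is actually wrong.

  #include <assert.h>
  #include <stdio.h>

  /* a page release split into two steps, as if finished asynchronously */
  static int in_tree = 1;
  static int npages = 1;

  static void release_begin(void) { in_tree = 0; }  /* page leaves the tree */
  static void release_end(void)   { npages = 0; }   /* count drops later    */

  int main(void)
  {
          release_begin();
          /*
           * Asserting npages == 0 right here would fire even though the
           * release is merely still in flight, which is exactly why such
           * a check has to go.
           */
          printf("in_tree=%d npages=%d (release in flight)\n",
                 in_tree, npages);
          release_end();
          assert(npages == 0);                      /* now it really holds */
          return 0;
  }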

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6271
Reviewed-on: http://review.whamcloud.com/16456
Reviewed-on: http://review.whamcloud.com/16727
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/osc/osc_cache.c  | 1 -
 drivers/staging/lustre/lustre/osc/osc_object.c | 4 ----
 2 files changed, 5 deletions(-)

diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 0490478..6445bbe 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -2790,7 +2790,6 @@ int osc_cache_truncate_start(const struct lu_env *env, struct osc_object *obj,
 			 * We have to wait for this extent because we can't
 			 * truncate that page.
 			 */
-			LASSERT(!ext->oe_hp);
 			OSC_EXTENT_DUMP(D_CACHE, ext,
 					"waiting for busy extent\n");
 			waiting = osc_extent_get(ext);
diff --git a/drivers/staging/lustre/lustre/osc/osc_object.c b/drivers/staging/lustre/lustre/osc/osc_object.c
index d3e5ca7..4f8e78b 100644
--- a/drivers/staging/lustre/lustre/osc/osc_object.c
+++ b/drivers/staging/lustre/lustre/osc/osc_object.c
@@ -200,10 +200,6 @@ static int osc_object_prune(const struct lu_env *env, struct cl_object *obj)
 	struct osc_object       *osc = cl2osc(obj);
 	struct ldlm_res_id      *resname = &osc_env_info(env)->oti_resname;
 
-	LASSERTF(osc->oo_npages == 0,
-		 DFID "still have %lu pages, obj: %p, osc: %p\n",
-		 PFID(lu_object_fid(&obj->co_lu)), osc->oo_npages, obj, osc);
-
 	/* DLM locks don't hold a reference of osc_object so we have to
 	 * clear it before the object is being destroyed.
 	 */
-- 
1.8.3.1

* [PATCH 04/14] staging: lustre: lov: cleanup when cl_io_iter_init() fails
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (2 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 03/14] staging: lustre: osc: remove obsolete asserts James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 05/14] staging: lustre: ldlm: handle ldlm lock cancel race when evicting client James Simmons
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Jinshan Xiong, James Simmons

From: Jinshan Xiong <jinshan.xiong@intel.com>

In lov_io_iter_init(), if cl_io_iter_init() against a sub-io fails,
call cl_io_iter_fini() on that sub-io to clean up the leftover state.
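
A rough sketch of the init/fini pairing this follows (plain C,
invented names): when a per-iteration init step fails, its matching
fini still has to run before bailing out so no partial state is left
behind.

  #include <stdio.h>

  struct sub_io { int initialized; };

  static int sub_iter_init(struct sub_io *s, int fail)
  {
          s->initialized = 1;          /* may leave partial state ...     */
          return fail ? -1 : 0;        /* ... even when it reports failure */
  }

  static void sub_iter_fini(struct sub_io *s)
  {
          s->initialized = 0;          /* tear the partial state down */
  }

  int main(void)
  {
          struct sub_io s = { 0 };
          int rc = sub_iter_init(&s, 1);

          if (rc)                      /* failure path must still clean up */
                  sub_iter_fini(&s);
          printf("rc=%d initialized=%d\n", rc, s.initialized);
          return 0;
  }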

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6271
Reviewed-on: http://review.whamcloud.com/16456
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/lov/lov_io.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/staging/lustre/lustre/lov/lov_io.c b/drivers/staging/lustre/lustre/lov/lov_io.c
index e0f0756..df77b25 100644
--- a/drivers/staging/lustre/lustre/lov/lov_io.c
+++ b/drivers/staging/lustre/lustre/lov/lov_io.c
@@ -424,21 +424,23 @@ static int lov_io_iter_init(const struct lu_env *env,
 
 		end = lov_offset_mod(end, 1);
 		sub = lov_sub_get(env, lio, stripe);
-		if (!IS_ERR(sub)) {
-			lov_io_sub_inherit(sub->sub_io, lio, stripe,
-					   start, end);
-			rc = cl_io_iter_init(sub->sub_env, sub->sub_io);
-			lov_sub_put(sub);
-			CDEBUG(D_VFSTRACE, "shrink: %d [%llu, %llu)\n",
-			       stripe, start, end);
-		} else {
+		if (IS_ERR(sub)) {
 			rc = PTR_ERR(sub);
+			break;
 		}
 
-		if (!rc)
-			list_add_tail(&sub->sub_linkage, &lio->lis_active);
-		else
+		lov_io_sub_inherit(sub->sub_io, lio, stripe, start, end);
+		rc = cl_io_iter_init(sub->sub_env, sub->sub_io);
+		if (rc)
+			cl_io_iter_fini(sub->sub_env, sub->sub_io);
+		lov_sub_put(sub);
+		if (rc)
 			break;
+
+		CDEBUG(D_VFSTRACE, "shrink: %d [%llu, %llu)\n",
+		       stripe, start, end);
+
+		list_add_tail(&sub->sub_linkage, &lio->lis_active);
 	}
 	return rc;
 }
-- 
1.8.3.1

* [PATCH 05/14] staging: lustre: ldlm: handle ldlm lock cancel race when evicting client.
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (3 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 04/14] staging: lustre: lov: cleanup when cl_io_iter_init() fails James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 06/14] staging: lustre: osc: further LRU OSC cleanup after eviction James Simmons
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Jinshan Xiong, James Simmons

From: Jinshan Xiong <jinshan.xiong@intel.com>

An ldlm lock can be canceled simultaneously by the ldlm bl thread
and by cleanup_resource(). In that case only one side wins the race
and the other side must wait for the cancel work to complete.
Eviction of group locks is now properly supported as well.
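
A minimal pthread sketch of the pattern used here (not the ldlm code;
names are invented): whichever racer claims the canceling flag first
does the cancel work and sets a done flag, while the loser just waits
on that flag instead of duplicating or skipping the work.

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  done_cv = PTHREAD_COND_INITIALIZER;
  static bool canceling, done;

  static void *cancel(void *who)
  {
          pthread_mutex_lock(&lock);
          if (!canceling) {
                  canceling = true;           /* this thread won the race   */
                  pthread_mutex_unlock(&lock);
                  /* ... perform the actual cancel work here ... */
                  pthread_mutex_lock(&lock);
                  done = true;                /* only the winner marks done */
                  pthread_cond_broadcast(&done_cv);
          } else {
                  while (!done)               /* the loser waits for it     */
                          pthread_cond_wait(&done_cv, &lock);
          }
          pthread_mutex_unlock(&lock);
          printf("%s: lock is canceled\n", (const char *)who);
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;

          pthread_create(&a, NULL, cancel, "bl thread");
          pthread_create(&b, NULL, cancel, "cleanup_resource");
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return 0;
  }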

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6271
Reviewed-on: http://review.whamcloud.com/16456
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/cl_object.h  |  7 +++-
 .../lustre/lustre/include/lustre_dlm_flags.h       |  3 ++
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     | 46 ++++++++++++++++------
 drivers/staging/lustre/lustre/ldlm/ldlm_request.c  | 14 ++++++-
 drivers/staging/lustre/lustre/ldlm/ldlm_resource.c |  2 +-
 drivers/staging/lustre/lustre/llite/vvp_io.c       |  1 +
 drivers/staging/lustre/lustre/osc/osc_lock.c       |  2 +
 drivers/staging/lustre/lustre/osc/osc_request.c    | 10 ++++-
 8 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h
index e4c0c44..12b3222 100644
--- a/drivers/staging/lustre/lustre/include/cl_object.h
+++ b/drivers/staging/lustre/lustre/include/cl_object.h
@@ -1640,9 +1640,14 @@ enum cl_enq_flags {
 	 */
 	CEF_PEEK	= 0x00000040,
 	/**
+	 * Lock match only. Used by group lock in I/O as group lock
+	 * is known to exist.
+	 */
+	CEF_LOCK_MATCH	= BIT(7),
+	/**
 	 * mask of enq_flags.
 	 */
-	CEF_MASK         = 0x0000007f,
+	CEF_MASK	= 0x000000ff,
 };
 
 /**
diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
index a0f064d..11331ae 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm_flags.h
@@ -121,6 +121,9 @@
 #define ldlm_set_test_lock(_l)          LDLM_SET_FLAG((_l), 1ULL << 19)
 #define ldlm_clear_test_lock(_l)        LDLM_CLEAR_FLAG((_l), 1ULL << 19)
 
+/** match lock only */
+#define LDLM_FL_MATCH_LOCK		0x0000000000100000ULL /* bit  20 */
+
 /**
  * Immediately cancel such locks when they block some other locks. Send
  * cancel notification to original lock holder, but expect no reply. This
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index 5a94265..16c2a8b 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -771,19 +771,11 @@ void ldlm_lock_decref_internal(struct ldlm_lock *lock, enum ldlm_mode mode)
 
 	ldlm_lock_decref_internal_nolock(lock, mode);
 
-	if (ldlm_is_local(lock) &&
+	if ((ldlm_is_local(lock) || lock->l_req_mode == LCK_GROUP) &&
 	    !lock->l_readers && !lock->l_writers) {
 		/* If this is a local lock on a server namespace and this was
 		 * the last reference, cancel the lock.
-		 */
-		CDEBUG(D_INFO, "forcing cancel of local lock\n");
-		ldlm_set_cbpending(lock);
-	}
-
-	if (!lock->l_readers && !lock->l_writers &&
-	    (ldlm_is_cbpending(lock) || lock->l_req_mode == LCK_GROUP)) {
-		/* If we received a blocked AST and this was the last reference,
-		 * run the callback.
+		 *
 		 * Group locks are special:
 		 * They must not go in LRU, but they are not called back
 		 * like non-group locks, instead they are manually released.
@@ -791,6 +783,13 @@ void ldlm_lock_decref_internal(struct ldlm_lock *lock, enum ldlm_mode mode)
 		 * they are manually released, so we remove them when they have
 		 * no more reader or writer references. - LU-6368
 		 */
+		ldlm_set_cbpending(lock);
+	}
+
+	if (!lock->l_readers && !lock->l_writers && ldlm_is_cbpending(lock)) {
+		/* If we received a blocked AST and this was the last reference,
+		 * run the callback.
+		 */
 		LDLM_DEBUG(lock, "final decref done on cbpending lock");
 
 		LDLM_LOCK_GET(lock); /* dropped by bl thread */
@@ -1882,6 +1881,19 @@ int ldlm_run_ast_work(struct ldlm_namespace *ns, struct list_head *rpc_list,
 	return rc;
 }
 
+static bool is_bl_done(struct ldlm_lock *lock)
+{
+	bool bl_done = true;
+
+	if (!ldlm_is_bl_done(lock)) {
+		lock_res_and_lock(lock);
+		bl_done = ldlm_is_bl_done(lock);
+		unlock_res_and_lock(lock);
+	}
+
+	return bl_done;
+}
+
 /**
  * Helper function to call blocking AST for LDLM lock \a lock in a
  * "cancelling" mode.
@@ -1899,8 +1911,20 @@ void ldlm_cancel_callback(struct ldlm_lock *lock)
 		} else {
 			LDLM_DEBUG(lock, "no blocking ast");
 		}
+		/* only canceller can set bl_done bit */
+		ldlm_set_bl_done(lock);
+		wake_up_all(&lock->l_waitq);
+	} else if (!ldlm_is_bl_done(lock)) {
+		struct l_wait_info lwi = { 0 };
+
+		/*
+		 * The lock is guaranteed to have been canceled once
+		 * returning from this function.
+		 */
+		unlock_res_and_lock(lock);
+		l_wait_event(lock->l_waitq, is_bl_done(lock), &lwi);
+		lock_res_and_lock(lock);
 	}
-	ldlm_set_bl_done(lock);
 }
 
 /**
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
index ebfda36..84eeaa5 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_request.c
@@ -1029,13 +1029,23 @@ int ldlm_cli_cancel(const struct lustre_handle *lockh,
 	struct ldlm_lock *lock;
 	LIST_HEAD(cancels);
 
-	/* concurrent cancels on the same handle can happen */
-	lock = ldlm_handle2lock_long(lockh, LDLM_FL_CANCELING);
+	lock = ldlm_handle2lock_long(lockh, 0);
 	if (!lock) {
 		LDLM_DEBUG_NOLOCK("lock is already being destroyed");
 		return 0;
 	}
 
+	lock_res_and_lock(lock);
+	/* Lock is being canceled and the caller doesn't want to wait */
+	if (ldlm_is_canceling(lock) && (cancel_flags & LCF_ASYNC)) {
+		unlock_res_and_lock(lock);
+		LDLM_LOCK_RELEASE(lock);
+		return 0;
+	}
+
+	ldlm_set_canceling(lock);
+	unlock_res_and_lock(lock);
+
 	rc = ldlm_cli_cancel_local(lock);
 	if (rc == LDLM_FL_LOCAL_ONLY || cancel_flags & LCF_LOCAL) {
 		LDLM_LOCK_RELEASE(lock);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
index d16f5e9..633f65b 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_resource.c
@@ -806,7 +806,7 @@ static void cleanup_resource(struct ldlm_resource *res, struct list_head *q,
 
 		unlock_res(res);
 		ldlm_lock2handle(lock, &lockh);
-		rc = ldlm_cli_cancel(&lockh, LCF_ASYNC);
+		rc = ldlm_cli_cancel(&lockh, LCF_LOCAL);
 		if (rc)
 			CERROR("ldlm_cli_cancel: %d\n", rc);
 		LDLM_LOCK_RELEASE(lock);
diff --git a/drivers/staging/lustre/lustre/llite/vvp_io.c b/drivers/staging/lustre/lustre/llite/vvp_io.c
index 3e9cf71..711126e 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_io.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_io.c
@@ -219,6 +219,7 @@ static int vvp_io_one_lock_index(const struct lu_env *env, struct cl_io *io,
 	if (vio->vui_fd && (vio->vui_fd->fd_flags & LL_FILE_GROUP_LOCKED)) {
 		descr->cld_mode = CLM_GROUP;
 		descr->cld_gid  = vio->vui_fd->fd_grouplock.lg_gid;
+		enqflags |= CEF_LOCK_MATCH;
 	} else {
 		descr->cld_mode  = mode;
 	}
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index 5f799a4..efecd92 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -167,6 +167,8 @@ static __u64 osc_enq2ldlm_flags(__u32 enqflags)
 		result |= LDLM_FL_AST_DISCARD_DATA;
 	if (enqflags & CEF_PEEK)
 		result |= LDLM_FL_TEST_LOCK;
+	if (enqflags & CEF_LOCK_MATCH)
+		result |= LDLM_FL_MATCH_LOCK;
 	return result;
 }
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index c4cfe18..8e22807 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -2011,7 +2011,7 @@ int osc_enqueue_base(struct obd_export *exp, struct ldlm_res_id *res_id,
 	}
 
 no_match:
-	if (*flags & LDLM_FL_TEST_LOCK)
+	if (*flags & (LDLM_FL_TEST_LOCK | LDLM_FL_MATCH_LOCK))
 		return -ENOLCK;
 	if (intent) {
 		req = ptlrpc_request_alloc(class_exp2cliimp(exp),
@@ -2495,7 +2495,13 @@ static int osc_ldlm_resource_invalidate(struct cfs_hash *hs,
 			osc = lock->l_ast_data;
 			cl_object_get(osc2cl(osc));
 		}
-		lock->l_ast_data = NULL;
+
+		/*
+		 * clear LDLM_FL_CLEANED flag to make sure it will be canceled
+		 * by the 2nd round of ldlm_namespace_clean() call in
+		 * osc_import_event().
+		 */
+		ldlm_clear_cleaned(lock);
 	}
 	unlock_res(res);
 
-- 
1.8.3.1

* [PATCH 06/14] staging: lustre: osc: further LRU OSC cleanup after eviction
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (4 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 05/14] staging: lustre: ldlm: handle ldlm lock cancel race when evicting client James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 07/14] staging: lustre: lov: trying smaller memory allocations James Simmons
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Jinshan Xiong, James Simmons

From: Jinshan Xiong <jinshan.xiong@intel.com>

Define osc_lru_reserve() and osc_lru_unreserve() to reserve LRU
slots in osc_io_write_iter_init() and release them again in
osc_io_write_iter_fini().
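
A stand-alone C11 sketch of the reservation idea (invented names, not
the osc code): instead of hitting the shared counter once per page,
an I/O claims a whole batch of LRU slots up front with a
compare-and-swap loop and returns whatever it did not use.

  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_long lru_left = 1000;    /* shared pool of free LRU slots */

  static long lru_reserve(long want)
  {
          long c = atomic_load(&lru_left);

          while (c >= want) {
                  /* claim the whole batch at once; retry if we raced */
                  if (atomic_compare_exchange_weak(&lru_left, &c, c - want))
                          return want;
          }
          return 0;                      /* pool too small, nothing taken */
  }

  static void lru_unreserve(long unused)
  {
          atomic_fetch_add(&lru_left, unused);   /* give leftovers back */
  }

  int main(void)
  {
          long got = lru_reserve(256);

          printf("reserved %ld, %ld left\n", got, atomic_load(&lru_left));
          lru_unreserve(got - 200);      /* say only 200 pages were used */
          printf("after unreserve: %ld left\n", atomic_load(&lru_left));
          return 0;
  }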

Signed-off-by: Jinshan Xiong <jinshan.xiong@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6271
Reviewed-on: http://review.whamcloud.com/16456
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lustre/osc/osc_cl_internal.h    |  4 +-
 drivers/staging/lustre/lustre/osc/osc_internal.h   |  3 +-
 drivers/staging/lustre/lustre/osc/osc_io.c         | 48 +++++---------
 drivers/staging/lustre/lustre/osc/osc_lock.c       | 46 ++++++++------
 drivers/staging/lustre/lustre/osc/osc_object.c     |  8 ++-
 drivers/staging/lustre/lustre/osc/osc_page.c       | 73 +++++++++++++++++++---
 6 files changed, 118 insertions(+), 64 deletions(-)

diff --git a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
index c09ab97d..270212f 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_cl_internal.h
@@ -62,7 +62,9 @@ struct osc_io {
 	/** super class */
 	struct cl_io_slice oi_cl;
 	/** true if this io is lockless. */
-	unsigned int		oi_lockless;
+	unsigned int		oi_lockless:1,
+	/** true if this io is counted as active IO */
+				oi_is_active:1;
 	/** how many LRU pages are reserved for this IO */
 	unsigned long		oi_lru_reserved;
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_internal.h b/drivers/staging/lustre/lustre/osc/osc_internal.h
index 8abd83f..845e795 100644
--- a/drivers/staging/lustre/lustre/osc/osc_internal.h
+++ b/drivers/staging/lustre/lustre/osc/osc_internal.h
@@ -133,7 +133,8 @@ int osc_build_rpc(const struct lu_env *env, struct client_obd *cli,
 		  struct list_head *ext_list, int cmd);
 long osc_lru_shrink(const struct lu_env *env, struct client_obd *cli,
 		    long target, bool force);
-long osc_lru_reclaim(struct client_obd *cli, unsigned long npages);
+unsigned long osc_lru_reserve(struct client_obd *cli, unsigned long npages);
+void osc_lru_unreserve(struct client_obd *cli, unsigned long npages);
 
 unsigned long osc_ldlm_weigh_ast(struct ldlm_lock *dlmlock);
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_io.c b/drivers/staging/lustre/lustre/osc/osc_io.c
index 0b4cc42..f991bee 100644
--- a/drivers/staging/lustre/lustre/osc/osc_io.c
+++ b/drivers/staging/lustre/lustre/osc/osc_io.c
@@ -354,7 +354,10 @@ static int osc_io_iter_init(const struct lu_env *env,
 
 	spin_lock(&imp->imp_lock);
 	if (likely(!imp->imp_invalid)) {
+		struct osc_io *oio = osc_env_io(env);
+
 		atomic_inc(&osc->oo_nr_ios);
+		oio->oi_is_active = 1;
 		rc = 0;
 	}
 	spin_unlock(&imp->imp_lock);
@@ -368,10 +371,7 @@ static int osc_io_write_iter_init(const struct lu_env *env,
 	struct cl_io *io = ios->cis_io;
 	struct osc_io *oio = osc_env_io(env);
 	struct osc_object *osc = cl2osc(ios->cis_obj);
-	struct client_obd *cli = osc_cli(osc);
-	unsigned long c;
 	unsigned long npages;
-	unsigned long max_pages;
 
 	if (cl_io_is_append(io))
 		return osc_io_iter_init(env, ios);
@@ -380,31 +380,7 @@ static int osc_io_write_iter_init(const struct lu_env *env,
 	if (io->u.ci_rw.crw_pos & ~PAGE_MASK)
 		++npages;
 
-	max_pages = cli->cl_max_pages_per_rpc * cli->cl_max_rpcs_in_flight;
-	if (npages > max_pages)
-		npages = max_pages;
-
-	c = atomic_long_read(cli->cl_lru_left);
-	if (c < npages && osc_lru_reclaim(cli, npages) > 0)
-		c = atomic_long_read(cli->cl_lru_left);
-	while (c >= npages) {
-		if (c == atomic_long_cmpxchg(cli->cl_lru_left, c, c - npages)) {
-			oio->oi_lru_reserved = npages;
-			break;
-		}
-		c = atomic_long_read(cli->cl_lru_left);
-	}
-	if (atomic_long_read(cli->cl_lru_left) < max_pages) {
-		/*
-		 * If there aren't enough pages in the per-OSC LRU then
-		 * wake up the LRU thread to try and clear out space, so
-		 * we don't block if pages are being dirtied quickly.
-		 */
-		CDEBUG(D_CACHE, "%s: queue LRU, left: %lu/%ld.\n",
-		       cli_name(cli), atomic_long_read(cli->cl_lru_left),
-		       max_pages);
-		(void)ptlrpcd_queue_work(cli->cl_lru_work);
-	}
+	oio->oi_lru_reserved = osc_lru_reserve(osc_cli(osc), npages);
 
 	return osc_io_iter_init(env, ios);
 }
@@ -412,11 +388,16 @@ static int osc_io_write_iter_init(const struct lu_env *env,
 static void osc_io_iter_fini(const struct lu_env *env,
 			     const struct cl_io_slice *ios)
 {
-	struct osc_object *osc = cl2osc(ios->cis_obj);
+	struct osc_io *oio = osc_env_io(env);
 
-	LASSERT(atomic_read(&osc->oo_nr_ios) > 0);
-	if (atomic_dec_and_test(&osc->oo_nr_ios))
-		wake_up_all(&osc->oo_io_waitq);
+	if (oio->oi_is_active) {
+		struct osc_object *osc = cl2osc(ios->cis_obj);
+
+		oio->oi_is_active = 0;
+		LASSERT(atomic_read(&osc->oo_nr_ios) > 0);
+		if (atomic_dec_and_test(&osc->oo_nr_ios))
+			wake_up_all(&osc->oo_io_waitq);
+	}
 }
 
 static void osc_io_write_iter_fini(const struct lu_env *env,
@@ -424,10 +405,9 @@ static void osc_io_write_iter_fini(const struct lu_env *env,
 {
 	struct osc_io *oio = osc_env_io(env);
 	struct osc_object *osc = cl2osc(ios->cis_obj);
-	struct client_obd *cli = osc_cli(osc);
 
 	if (oio->oi_lru_reserved > 0) {
-		atomic_long_add(oio->oi_lru_reserved, cli->cl_lru_left);
+		osc_lru_unreserve(osc_cli(osc), oio->oi_lru_reserved);
 		oio->oi_lru_reserved = 0;
 	}
 	oio->oi_write_osclock = NULL;
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index efecd92..5f7c030 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -840,13 +840,14 @@ static void osc_lock_wake_waiters(const struct lu_env *env,
 	spin_unlock(&oscl->ols_lock);
 }
 
-static void osc_lock_enqueue_wait(const struct lu_env *env,
-				  struct osc_object *obj,
-				  struct osc_lock *oscl)
+static int osc_lock_enqueue_wait(const struct lu_env *env,
+				 struct osc_object *obj,
+				 struct osc_lock *oscl)
 {
 	struct osc_lock *tmp_oscl;
 	struct cl_lock_descr *need = &oscl->ols_cl.cls_lock->cll_descr;
 	struct cl_sync_io *waiter = &osc_env_info(env)->oti_anchor;
+	int rc = 0;
 
 	spin_lock(&obj->oo_ol_spin);
 	list_add_tail(&oscl->ols_nextlock_oscobj, &obj->oo_ol_list);
@@ -883,13 +884,17 @@ static void osc_lock_enqueue_wait(const struct lu_env *env,
 		spin_unlock(&tmp_oscl->ols_lock);
 
 		spin_unlock(&obj->oo_ol_spin);
-		(void)cl_sync_io_wait(env, waiter, 0);
-
+		rc = cl_sync_io_wait(env, waiter, 0);
 		spin_lock(&obj->oo_ol_spin);
+		if (rc < 0)
+			break;
+
 		oscl->ols_owner = NULL;
 		goto restart;
 	}
 	spin_unlock(&obj->oo_ol_spin);
+
+	return rc;
 }
 
 /**
@@ -937,7 +942,9 @@ static int osc_lock_enqueue(const struct lu_env *env,
 		goto enqueue_base;
 	}
 
-	osc_lock_enqueue_wait(env, osc, oscl);
+	result = osc_lock_enqueue_wait(env, osc, oscl);
+	if (result < 0)
+		goto out;
 
 	/* we can grant lockless lock right after all conflicting locks
 	 * are canceled.
@@ -962,7 +969,6 @@ static int osc_lock_enqueue(const struct lu_env *env,
 	 * osc_lock.
 	 */
 	ostid_build_res_name(&osc->oo_oinfo->loi_oi, resname);
-	osc_lock_build_einfo(env, lock, osc, &oscl->ols_einfo);
 	osc_lock_build_policy(env, lock, policy);
 	if (oscl->ols_agl) {
 		oscl->ols_einfo.ei_cbdata = NULL;
@@ -977,18 +983,7 @@ static int osc_lock_enqueue(const struct lu_env *env,
 				  upcall, cookie,
 				  &oscl->ols_einfo, PTLRPCD_SET, async,
 				  oscl->ols_agl);
-	if (result != 0) {
-		oscl->ols_state = OLS_CANCELLED;
-		osc_lock_wake_waiters(env, osc, oscl);
-
-		/* hide error for AGL lock. */
-		if (oscl->ols_agl) {
-			cl_object_put(env, osc2cl(osc));
-			result = 0;
-		}
-		if (anchor)
-			cl_sync_io_note(env, anchor, result);
-	} else {
+	if (!result) {
 		if (osc_lock_is_lockless(oscl)) {
 			oio->oi_lockless = 1;
 		} else if (!async) {
@@ -996,6 +991,18 @@ static int osc_lock_enqueue(const struct lu_env *env,
 			LASSERT(oscl->ols_hold);
 			LASSERT(oscl->ols_dlmlock);
 		}
+	} else if (oscl->ols_agl) {
+		cl_object_put(env, osc2cl(osc));
+		result = 0;
+	}
+
+out:
+	if (result < 0) {
+		oscl->ols_state = OLS_CANCELLED;
+		osc_lock_wake_waiters(env, osc, oscl);
+
+		if (anchor)
+			cl_sync_io_note(env, anchor, result);
 	}
 	return result;
 }
@@ -1159,6 +1166,7 @@ int osc_lock_init(const struct lu_env *env,
 		oscl->ols_flags |= LDLM_FL_BLOCK_GRANTED;
 		oscl->ols_glimpse = 1;
 	}
+	osc_lock_build_einfo(env, lock, cl2osc(obj), &oscl->ols_einfo);
 
 	cl_lock_slice_add(lock, &oscl->ols_cl, obj, &osc_lock_ops);
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_object.c b/drivers/staging/lustre/lustre/osc/osc_object.c
index 4f8e78b..fa621bd 100644
--- a/drivers/staging/lustre/lustre/osc/osc_object.c
+++ b/drivers/staging/lustre/lustre/osc/osc_object.c
@@ -453,9 +453,15 @@ int osc_object_invalidate(const struct lu_env *env, struct osc_object *osc)
 
 	l_wait_event(osc->oo_io_waitq, !atomic_read(&osc->oo_nr_ios), &lwi);
 
-	/* Discard all pages of this object. */
+	/* Discard all dirty pages of this object. */
 	osc_cache_truncate_start(env, osc, 0, NULL);
 
+	/* Discard all caching pages */
+	osc_lock_discard_pages(env, osc, 0, CL_PAGE_EOF, CLM_WRITE);
+
+	/* Clear ast data of dlm lock. Do this after discarding all pages */
+	osc_object_prune(env, osc2cl(osc));
+
 	return 0;
 }
 
diff --git a/drivers/staging/lustre/lustre/osc/osc_page.c b/drivers/staging/lustre/lustre/osc/osc_page.c
index ab9d0d7..03ee340 100644
--- a/drivers/staging/lustre/lustre/osc/osc_page.c
+++ b/drivers/staging/lustre/lustre/osc/osc_page.c
@@ -42,8 +42,8 @@
 
 static void osc_lru_del(struct client_obd *cli, struct osc_page *opg);
 static void osc_lru_use(struct client_obd *cli, struct osc_page *opg);
-static int osc_lru_reserve(const struct lu_env *env, struct osc_object *obj,
-			   struct osc_page *opg);
+static int osc_lru_alloc(const struct lu_env *env, struct client_obd *cli,
+			 struct osc_page *opg);
 
 /** \addtogroup osc
  *  @{
@@ -273,7 +273,7 @@ int osc_page_init(const struct lu_env *env, struct cl_object *obj,
 
 	/* reserve an LRU space for this page */
 	if (page->cp_type == CPT_CACHEABLE && result == 0) {
-		result = osc_lru_reserve(env, osc, opg);
+		result = osc_lru_alloc(env, osc_cli(osc), opg);
 		if (result == 0) {
 			spin_lock(&osc->oo_tree_lock);
 			result = radix_tree_insert(&osc->oo_tree, index, opg);
@@ -676,7 +676,7 @@ long osc_lru_shrink(const struct lu_env *env, struct client_obd *cli,
  * LRU pages in batch. Therefore, the actual number is adjusted at least
  * max_pages_per_rpc.
  */
-long osc_lru_reclaim(struct client_obd *cli, unsigned long npages)
+static long osc_lru_reclaim(struct client_obd *cli, unsigned long npages)
 {
 	struct lu_env *env;
 	struct cl_client_cache *cache = cli->cl_cache;
@@ -749,18 +749,17 @@ long osc_lru_reclaim(struct client_obd *cli, unsigned long npages)
 }
 
 /**
- * osc_lru_reserve() is called to reserve an LRU slot for a cl_page.
+ * osc_lru_alloc() is called to reserve an LRU slot for a cl_page.
  *
  * Usually the LRU slots are reserved in osc_io_iter_rw_init().
  * Only in the case that the LRU slots are in extreme shortage, it should
  * have reserved enough slots for an IO.
  */
-static int osc_lru_reserve(const struct lu_env *env, struct osc_object *obj,
-			   struct osc_page *opg)
+static int osc_lru_alloc(const struct lu_env *env, struct client_obd *cli,
+			 struct osc_page *opg)
 {
 	struct l_wait_info lwi = LWI_INTR(LWI_ON_SIGNAL_NOOP, NULL);
 	struct osc_io *oio = osc_env_io(env);
-	struct client_obd *cli = osc_cli(obj);
 	int rc = 0;
 
 	if (!cli->cl_cache) /* shall not be in LRU */
@@ -801,6 +800,64 @@ static int osc_lru_reserve(const struct lu_env *env, struct osc_object *obj,
 }
 
 /**
+ * osc_lru_reserve() is called to reserve enough LRU slots for I/O.
+ *
+ * The benefit of doing this is to reduce contention against atomic counter
+ * cl_lru_left by changing it from per-page access to per-IO access.
+ */
+unsigned long osc_lru_reserve(struct client_obd *cli, unsigned long npages)
+{
+	unsigned long reserved = 0;
+	unsigned long max_pages;
+	unsigned long c;
+
+	/*
+	 * reserve a full RPC window at most to avoid that a thread accidentally
+	 * consumes too many LRU slots
+	 */
+	max_pages = cli->cl_max_pages_per_rpc * cli->cl_max_rpcs_in_flight;
+	if (npages > max_pages)
+		npages = max_pages;
+
+	c = atomic_long_read(cli->cl_lru_left);
+	if (c < npages && osc_lru_reclaim(cli, npages) > 0)
+		c = atomic_long_read(cli->cl_lru_left);
+	while (c >= npages) {
+		if (c == atomic_long_cmpxchg(cli->cl_lru_left, c, c - npages)) {
+			reserved = npages;
+			break;
+		}
+		c = atomic_long_read(cli->cl_lru_left);
+	}
+	if (atomic_long_read(cli->cl_lru_left) < max_pages) {
+		/*
+		 * If there aren't enough pages in the per-OSC LRU then
+		 * wake up the LRU thread to try and clear out space, so
+		 * we don't block if pages are being dirtied quickly.
+		 */
+		CDEBUG(D_CACHE, "%s: queue LRU, left: %lu/%ld.\n",
+		       cli_name(cli), atomic_long_read(cli->cl_lru_left),
+		       max_pages);
+		(void)ptlrpcd_queue_work(cli->cl_lru_work);
+	}
+
+	return reserved;
+}
+
+/**
+ * osc_lru_unreserve() is called to unreserve LRU slots.
+ *
+ * LRU slots reserved by osc_lru_reserve() may have entries left due to several
+ * reasons such as page already existing or I/O error. Those reserved slots
+ * should be freed by calling this function.
+ */
+void osc_lru_unreserve(struct client_obd *cli, unsigned long npages)
+{
+	atomic_long_add(npages, cli->cl_lru_left);
+	wake_up_all(&osc_lru_waitq);
+}
+
+/**
  * Atomic operations are expensive. We accumulate the accounting for the
  * same page pgdat to get better performance.
  * In practice this can work pretty good because the pages in the same RPC
-- 
1.8.3.1

* [PATCH 07/14] staging: lustre: lov: trying smaller memory allocations
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (5 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 06/14] staging: lustre: osc: further LRU OSC cleanup after eviction James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 08/14] staging: lustre: llite: remove extraneous export parameter James Simmons
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, Yang Sheng,
	James Simmons

From: Yang Sheng <yang.sheng@intel.com>

Shrink struct lov_io_sub to reduce its memory usage on wide-stripe
file systems.
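
As a stand-alone sketch of the technique (invented struct, not the
real lov_io_sub): narrowing flag fields to bitfields and reordering
members to avoid padding shrinks each per-stripe structure, e.g.:

  #include <stdio.h>
  #include <stdint.h>

  /* before: each flag takes a full int and padding creeps in */
  struct sub_before {
          int       stripe;
          void     *io;
          int       initialized;
          int       borrowed;
          void     *env;
          int       refcheck;
  };

  /* after: small counters as u16, flags packed into bitfields */
  struct sub_after {
          uint16_t  stripe;
          uint16_t  refcheck;
          uint16_t  initialized:1,
                    borrowed:1;
          void     *io;
          void     *env;
  };

  int main(void)
  {
          printf("before: %zu bytes, after: %zu bytes\n",
                 sizeof(struct sub_before), sizeof(struct sub_after));
          return 0;
  }

On a typical 64-bit build the packed layout comes out noticeably
smaller (40 vs. 24 bytes with the common x86-64 ABI).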

Signed-off-by: Yang Sheng <yang.sheng@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-7085
Reviewed-on: http://review.whamcloud.com/17476
Reviewed-by: Bob Glossman <bob.glossman@intel.com>
Reviewed-by: Jian Yu <jian.yu@intel.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/cl_object.h  |  6 ++--
 drivers/staging/lustre/lustre/llite/file.c         | 16 ++++-----
 drivers/staging/lustre/lustre/llite/glimpse.c      |  4 +--
 drivers/staging/lustre/lustre/llite/lcommon_cl.c   |  8 ++---
 drivers/staging/lustre/lustre/llite/lcommon_misc.c |  2 +-
 .../staging/lustre/lustre/llite/llite_internal.h   |  2 +-
 drivers/staging/lustre/lustre/llite/llite_mmap.c   |  4 +--
 drivers/staging/lustre/lustre/llite/lproc_llite.c  |  2 +-
 drivers/staging/lustre/lustre/llite/rw.c           |  2 +-
 drivers/staging/lustre/lustre/llite/vvp_dev.c      | 10 +++---
 drivers/staging/lustre/lustre/llite/xattr.c        |  2 +-
 .../staging/lustre/lustre/lov/lov_cl_internal.h    | 41 +++++++++++-----------
 drivers/staging/lustre/lustre/lov/lov_object.c     |  2 +-
 drivers/staging/lustre/lustre/obdclass/cl_object.c |  6 ++--
 .../staging/lustre/lustre/obdecho/echo_client.c    |  6 ++--
 drivers/staging/lustre/lustre/osc/lproc_osc.c      |  2 +-
 drivers/staging/lustre/lustre/osc/osc_cache.c      |  2 +-
 drivers/staging/lustre/lustre/osc/osc_lock.c       | 12 +++----
 drivers/staging/lustre/lustre/osc/osc_page.c       |  4 +--
 drivers/staging/lustre/lustre/osc/osc_request.c    |  2 +-
 20 files changed, 67 insertions(+), 68 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/cl_object.h b/drivers/staging/lustre/lustre/include/cl_object.h
index 12b3222..2bc3ee5 100644
--- a/drivers/staging/lustre/lustre/include/cl_object.h
+++ b/drivers/staging/lustre/lustre/include/cl_object.h
@@ -2437,9 +2437,9 @@ void cl_sync_io_note(const struct lu_env *env, struct cl_sync_io *anchor,
  * @{
  */
 
-struct lu_env *cl_env_get(int *refcheck);
-struct lu_env *cl_env_alloc(int *refcheck, __u32 tags);
-void cl_env_put(struct lu_env *env, int *refcheck);
+struct lu_env *cl_env_get(u16 *refcheck);
+struct lu_env *cl_env_alloc(u16 *refcheck, __u32 tags);
+void cl_env_put(struct lu_env *env, u16 *refcheck);
 unsigned int cl_env_cache_purge(unsigned int nr);
 struct lu_env *cl_env_percpu_get(void);
 void cl_env_percpu_put(struct lu_env *env);
diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index 10adfcd..b1c9573 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -1159,7 +1159,7 @@ static ssize_t ll_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	struct lu_env      *env;
 	struct vvp_io_args *args;
 	ssize_t	     result;
-	int		 refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
@@ -1183,7 +1183,7 @@ static ssize_t ll_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	struct lu_env      *env;
 	struct vvp_io_args *args;
 	ssize_t	     result;
-	int		 refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
@@ -1340,7 +1340,7 @@ static int ll_file_getstripe(struct inode *inode,
 			     struct lov_user_md __user *lum)
 {
 	struct lu_env *env;
-	int refcheck;
+	u16 refcheck;
 	int rc;
 
 	env = cl_env_get(&refcheck);
@@ -1517,7 +1517,7 @@ static int ll_do_fiemap(struct inode *inode, struct fiemap *fiemap,
 {
 	struct ll_fiemap_info_key fmkey = { .lfik_name = KEY_FIEMAP, };
 	struct lu_env *env;
-	int refcheck;
+	u16 refcheck;
 	int rc = 0;
 
 	/* Checks for fiemap flags */
@@ -1623,7 +1623,7 @@ int ll_data_version(struct inode *inode, __u64 *data_version, int flags)
 	struct cl_object *obj = ll_i2info(inode)->lli_clob;
 	struct lu_env *env;
 	struct cl_io *io;
-	int refcheck;
+	u16 refcheck;
 	int result;
 
 	/* If no file object initialized, we consider its version is 0. */
@@ -1668,7 +1668,7 @@ int ll_hsm_release(struct inode *inode)
 	struct obd_client_handle *och = NULL;
 	__u64 data_version = 0;
 	int rc;
-	int refcheck;
+	u16 refcheck;
 
 	CDEBUG(D_INODE, "%s: Releasing file "DFID".\n",
 	       ll_get_fsname(inode->i_sb, NULL, 0),
@@ -2324,7 +2324,7 @@ int cl_sync_file_range(struct inode *inode, loff_t start, loff_t end,
 	struct cl_io *io;
 	struct cl_fsync_io *fio;
 	int result;
-	int refcheck;
+	u16 refcheck;
 
 	if (mode != CL_FSYNC_NONE && mode != CL_FSYNC_LOCAL &&
 	    mode != CL_FSYNC_DISCARD && mode != CL_FSYNC_ALL)
@@ -3271,7 +3271,7 @@ int ll_layout_conf(struct inode *inode, const struct cl_object_conf *conf)
 	struct cl_object *obj = lli->lli_clob;
 	struct lu_env *env;
 	int rc;
-	int refcheck;
+	u16 refcheck;
 
 	if (!obj)
 		return 0;
diff --git a/drivers/staging/lustre/lustre/llite/glimpse.c b/drivers/staging/lustre/lustre/llite/glimpse.c
index 504498d..0143112 100644
--- a/drivers/staging/lustre/lustre/llite/glimpse.c
+++ b/drivers/staging/lustre/lustre/llite/glimpse.c
@@ -138,7 +138,7 @@ int cl_glimpse_lock(const struct lu_env *env, struct cl_io *io,
 }
 
 static int cl_io_get(struct inode *inode, struct lu_env **envout,
-		     struct cl_io **ioout, int *refcheck)
+		     struct cl_io **ioout, u16 *refcheck)
 {
 	struct lu_env	  *env;
 	struct cl_io	   *io;
@@ -178,7 +178,7 @@ int cl_glimpse_size0(struct inode *inode, int agl)
 	struct lu_env	  *env = NULL;
 	struct cl_io	   *io  = NULL;
 	int		     result;
-	int		     refcheck;
+	u16 refcheck;
 
 	result = cl_io_get(inode, &env, &io, &refcheck);
 	if (result > 0) {
diff --git a/drivers/staging/lustre/lustre/llite/lcommon_cl.c b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
index f1036f4..8af6110 100644
--- a/drivers/staging/lustre/lustre/llite/lcommon_cl.c
+++ b/drivers/staging/lustre/lustre/llite/lcommon_cl.c
@@ -72,7 +72,7 @@
  * mutex.
  */
 struct lu_env *cl_inode_fini_env;
-int cl_inode_fini_refcheck;
+u16 cl_inode_fini_refcheck;
 
 /**
  * A mutex serializing calls to slp_inode_fini() under extreme memory
@@ -86,7 +86,7 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr,
 	struct lu_env *env;
 	struct cl_io  *io;
 	int	    result;
-	int	    refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
@@ -149,7 +149,7 @@ int cl_file_inode_init(struct inode *inode, struct lustre_md *md)
 		}
 	};
 	int result = 0;
-	int refcheck;
+	u16 refcheck;
 
 	LASSERT(md->body->mbo_valid & OBD_MD_FLID);
 	LASSERT(S_ISREG(inode->i_mode));
@@ -237,7 +237,7 @@ void cl_inode_fini(struct inode *inode)
 	struct lu_env	   *env;
 	struct ll_inode_info    *lli  = ll_i2info(inode);
 	struct cl_object	*clob = lli->lli_clob;
-	int refcheck;
+	u16 refcheck;
 	int emergency;
 
 	if (clob) {
diff --git a/drivers/staging/lustre/lustre/llite/lcommon_misc.c b/drivers/staging/lustre/lustre/llite/lcommon_misc.c
index f0c132e..7f7f3f1 100644
--- a/drivers/staging/lustre/lustre/llite/lcommon_misc.c
+++ b/drivers/staging/lustre/lustre/llite/lcommon_misc.c
@@ -124,7 +124,7 @@ int cl_get_grouplock(struct cl_object *obj, unsigned long gid, int nonblock,
 	struct cl_lock	 *lock;
 	struct cl_lock_descr   *descr;
 	__u32		   enqflags;
-	int		     refcheck;
+	u16 refcheck;
 	int		     rc;
 
 	env = cl_env_get(&refcheck);
diff --git a/drivers/staging/lustre/lustre/llite/llite_internal.h b/drivers/staging/lustre/lustre/llite/llite_internal.h
index ecdfd0c..99fb852 100644
--- a/drivers/staging/lustre/lustre/llite/llite_internal.h
+++ b/drivers/staging/lustre/lustre/llite/llite_internal.h
@@ -1329,7 +1329,7 @@ int cl_setattr_ost(struct cl_object *obj, const struct iattr *attr,
 		   unsigned int attr_flags);
 
 extern struct lu_env *cl_inode_fini_env;
-extern int cl_inode_fini_refcheck;
+extern u16 cl_inode_fini_refcheck;
 
 int cl_file_inode_init(struct inode *inode, struct lustre_md *md);
 void cl_inode_fini(struct inode *inode);
diff --git a/drivers/staging/lustre/lustre/llite/llite_mmap.c b/drivers/staging/lustre/lustre/llite/llite_mmap.c
index ee01f20..33dc935 100644
--- a/drivers/staging/lustre/lustre/llite/llite_mmap.c
+++ b/drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -150,7 +150,7 @@ static int ll_page_mkwrite0(struct vm_area_struct *vma, struct page *vmpage,
 	struct cl_io	    *io;
 	struct vvp_io	   *vio;
 	int		      result;
-	int refcheck;
+	u16 refcheck;
 	sigset_t	     set;
 	struct inode	     *inode;
 	struct ll_inode_info     *lli;
@@ -268,7 +268,7 @@ static int ll_fault0(struct vm_area_struct *vma, struct vm_fault *vmf)
 	unsigned long	    ra_flags;
 	int		      result = 0;
 	int		      fault_ret = 0;
-	int refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c
index f3ee584..40f1fcf 100644
--- a/drivers/staging/lustre/lustre/llite/lproc_llite.c
+++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c
@@ -386,7 +386,7 @@ static ssize_t ll_max_cached_mb_seq_write(struct file *file,
 	struct lu_env *env;
 	long diff = 0;
 	long nrpages = 0;
-	int refcheck;
+	u16 refcheck;
 	long pages_number;
 	int mult;
 	long rc;
diff --git a/drivers/staging/lustre/lustre/llite/rw.c b/drivers/staging/lustre/lustre/llite/rw.c
index 50d027e..1bac51f 100644
--- a/drivers/staging/lustre/lustre/llite/rw.c
+++ b/drivers/staging/lustre/lustre/llite/rw.c
@@ -905,7 +905,7 @@ int ll_writepage(struct page *vmpage, struct writeback_control *wbc)
 	bool redirtied = false;
 	bool unlocked = false;
 	int result;
-	int refcheck;
+	u16 refcheck;
 
 	LASSERT(PageLocked(vmpage));
 	LASSERT(!PageWriteback(vmpage));
diff --git a/drivers/staging/lustre/lustre/llite/vvp_dev.c b/drivers/staging/lustre/lustre/llite/vvp_dev.c
index 3669ea7..6cb2db2 100644
--- a/drivers/staging/lustre/lustre/llite/vvp_dev.c
+++ b/drivers/staging/lustre/lustre/llite/vvp_dev.c
@@ -313,7 +313,7 @@ int cl_sb_init(struct super_block *sb)
 	struct cl_device  *cl;
 	struct lu_env     *env;
 	int rc = 0;
-	int refcheck;
+	u16 refcheck;
 
 	sbi  = ll_s2sbi(sb);
 	env = cl_env_get(&refcheck);
@@ -336,7 +336,7 @@ int cl_sb_fini(struct super_block *sb)
 	struct ll_sb_info *sbi;
 	struct lu_env     *env;
 	struct cl_device  *cld;
-	int		refcheck;
+	u16 refcheck;
 	int		result;
 
 	sbi = ll_s2sbi(sb);
@@ -535,7 +535,7 @@ static int vvp_pgcache_show(struct seq_file *f, void *v)
 	struct cl_object	*clob;
 	struct lu_env	   *env;
 	struct vvp_pgcache_id    id;
-	int		      refcheck;
+	u16 refcheck;
 	int		      result;
 
 	env = cl_env_get(&refcheck);
@@ -584,7 +584,7 @@ static void *vvp_pgcache_start(struct seq_file *f, loff_t *pos)
 {
 	struct ll_sb_info *sbi;
 	struct lu_env     *env;
-	int		refcheck;
+	u16 refcheck;
 
 	sbi = f->private;
 
@@ -608,7 +608,7 @@ static void *vvp_pgcache_next(struct seq_file *f, void *v, loff_t *pos)
 {
 	struct ll_sb_info *sbi;
 	struct lu_env     *env;
-	int		refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (!IS_ERR(env)) {
diff --git a/drivers/staging/lustre/lustre/llite/xattr.c b/drivers/staging/lustre/lustre/llite/xattr.c
index 421cc04..3ef0291 100644
--- a/drivers/staging/lustre/lustre/llite/xattr.c
+++ b/drivers/staging/lustre/lustre/llite/xattr.c
@@ -427,7 +427,7 @@ static ssize_t ll_getxattr_lov(struct inode *inode, void *buf, size_t buf_size)
 			.cl_buf.lb_len = buf_size,
 		};
 		struct lu_env *env;
-		int refcheck;
+		u16 refcheck;
 
 		if (!obj)
 			return -ENODATA;
diff --git a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
index c49a34b..391c632 100644
--- a/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
+++ b/drivers/staging/lustre/lustre/lov/lov_cl_internal.h
@@ -118,7 +118,7 @@ struct lov_device_emerg {
 	 *
 	 * \see cl_env_get()
 	 */
-	int		 emrg_refcheck;
+	u16		 emrg_refcheck;
 };
 
 struct lov_device {
@@ -378,40 +378,39 @@ struct lov_thread_info {
  * State that lov_io maintains for every sub-io.
  */
 struct lov_io_sub {
-	int		  sub_stripe;
-	/**
-	 * sub-io for a stripe. Ideally sub-io's can be stopped and resumed
-	 * independently, with lov acting as a scheduler to maximize overall
-	 * throughput.
-	 */
-	struct cl_io	*sub_io;
+	u16		 sub_stripe;
 	/**
-	 * Linkage into a list (hanging off lov_io::lis_active) of all
-	 * sub-io's active for the current IO iteration.
+	 * environment's refcheck.
+	 *
+	 * \see cl_env_get()
 	 */
-	struct list_head	   sub_linkage;
+	u16			 sub_refcheck;
+	u16			 sub_reenter;
 	/**
 	 * true, iff cl_io_init() was successfully executed against
 	 * lov_io_sub::sub_io.
 	 */
-	int		  sub_io_initialized;
+	u16			 sub_io_initialized:1,
 	/**
 	 * True, iff lov_io_sub::sub_io and lov_io_sub::sub_env weren't
 	 * allocated, but borrowed from a per-device emergency pool.
 	 */
-	int		  sub_borrowed;
+				 sub_borrowed:1;
 	/**
-	 * environment, in which sub-io executes.
+	 * Linkage into a list (hanging off lov_io::lis_active) of all
+	 * sub-io's active for the current IO iteration.
 	 */
-	struct lu_env *sub_env;
+	struct list_head	 sub_linkage;
 	/**
-	 * environment's refcheck.
-	 *
-	 * \see cl_env_get()
+	 * sub-io for a stripe. Ideally sub-io's can be stopped and resumed
+	 * independently, with lov acting as a scheduler to maximize overall
+	 * throughput.
+	 */
+	struct cl_io	*sub_io;
+	/**
+	 * environment, in which sub-io executes.
 	 */
-	int		  sub_refcheck;
-	int		  sub_refcheck2;
-	int		  sub_reenter;
+	struct lu_env *sub_env;
 };
 
 /**
diff --git a/drivers/staging/lustre/lustre/lov/lov_object.c b/drivers/staging/lustre/lustre/lov/lov_object.c
index 977579c..ab3ecfe 100644
--- a/drivers/staging/lustre/lustre/lov/lov_object.c
+++ b/drivers/staging/lustre/lustre/lov/lov_object.c
@@ -746,7 +746,7 @@ static int lov_layout_change(const struct lu_env *unused,
 	const struct lov_layout_operations *old_ops;
 	const struct lov_layout_operations *new_ops;
 	struct lu_env *env;
-	int refcheck;
+	u16 refcheck;
 	int rc;
 
 	LASSERT(0 <= lov->lo_type && lov->lo_type < ARRAY_SIZE(lov_dispatch));
diff --git a/drivers/staging/lustre/lustre/obdclass/cl_object.c b/drivers/staging/lustre/lustre/obdclass/cl_object.c
index 703cb67..08e55d4 100644
--- a/drivers/staging/lustre/lustre/obdclass/cl_object.c
+++ b/drivers/staging/lustre/lustre/obdclass/cl_object.c
@@ -688,7 +688,7 @@ static inline struct cl_env *cl_env_container(struct lu_env *env)
  *
  * \see cl_env_put()
  */
-struct lu_env *cl_env_get(int *refcheck)
+struct lu_env *cl_env_get(u16 *refcheck)
 {
 	struct lu_env *env;
 
@@ -709,7 +709,7 @@ struct lu_env *cl_env_get(int *refcheck)
  *
  * \see cl_env_get()
  */
-struct lu_env *cl_env_alloc(int *refcheck, __u32 tags)
+struct lu_env *cl_env_alloc(u16 *refcheck, u32 tags)
 {
 	struct lu_env *env;
 
@@ -769,7 +769,7 @@ unsigned int cl_env_cache_purge(unsigned int nr)
  * this thread is using environment and it is returned to the allocation
  * cache, or freed straight away, if cache is large enough.
  */
-void cl_env_put(struct lu_env *env, int *refcheck)
+void cl_env_put(struct lu_env *env, u16 *refcheck)
 {
 	struct cl_env *cle;
 
diff --git a/drivers/staging/lustre/lustre/obdecho/echo_client.c b/drivers/staging/lustre/lustre/obdecho/echo_client.c
index 5490761..77b4c55 100644
--- a/drivers/staging/lustre/lustre/obdecho/echo_client.c
+++ b/drivers/staging/lustre/lustre/obdecho/echo_client.c
@@ -816,7 +816,7 @@ static struct lu_device *echo_device_free(const struct lu_env *env,
 	struct echo_object *eco;
 	struct cl_object   *obj;
 	struct lu_fid *fid;
-	int refcheck;
+	u16 refcheck;
 	int rc;
 
 	LASSERTF(ostid_id(oi), DOSTID "\n", POSTID(oi));
@@ -882,7 +882,7 @@ static int cl_echo_object_put(struct echo_object *eco)
 {
 	struct lu_env *env;
 	struct cl_object *obj = echo_obj2cl(eco);
-	int refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	if (IS_ERR(env))
@@ -999,7 +999,7 @@ static int cl_echo_object_brw(struct echo_object *eco, int rw, u64 offset,
 	struct cl_page	  *clp;
 	struct lustre_handle    lh = { 0 };
 	size_t page_size = cl_page_size(obj);
-	int refcheck;
+	u16 refcheck;
 	int rc;
 	int i;
 
diff --git a/drivers/staging/lustre/lustre/osc/lproc_osc.c b/drivers/staging/lustre/lustre/osc/lproc_osc.c
index 575b296..86f252d 100644
--- a/drivers/staging/lustre/lustre/osc/lproc_osc.c
+++ b/drivers/staging/lustre/lustre/osc/lproc_osc.c
@@ -229,7 +229,7 @@ static ssize_t osc_cached_mb_seq_write(struct file *file,
 	rc = atomic_long_read(&cli->cl_lru_in_list) - pages_number;
 	if (rc > 0) {
 		struct lu_env *env;
-		int refcheck;
+		u16 refcheck;
 
 		env = cl_env_get(&refcheck);
 		if (!IS_ERR(env)) {
diff --git a/drivers/staging/lustre/lustre/osc/osc_cache.c b/drivers/staging/lustre/lustre/osc/osc_cache.c
index 6445bbe..f8c5fc0 100644
--- a/drivers/staging/lustre/lustre/osc/osc_cache.c
+++ b/drivers/staging/lustre/lustre/osc/osc_cache.c
@@ -988,7 +988,7 @@ static int osc_extent_truncate(struct osc_extent *ext, pgoff_t trunc_index,
 	int grants = 0;
 	int nr_pages = 0;
 	int rc = 0;
-	int refcheck;
+	u16 refcheck;
 
 	LASSERT(sanity_check(ext) == 0);
 	EASSERT(ext->oe_state == OES_TRUNC, ext);
diff --git a/drivers/staging/lustre/lustre/osc/osc_lock.c b/drivers/staging/lustre/lustre/osc/osc_lock.c
index 5f7c030..940c10c 100644
--- a/drivers/staging/lustre/lustre/osc/osc_lock.c
+++ b/drivers/staging/lustre/lustre/osc/osc_lock.c
@@ -297,7 +297,7 @@ static int osc_lock_upcall(void *cookie, struct lustre_handle *lockh,
 	struct cl_lock_slice *slice = &oscl->ols_cl;
 	struct lu_env *env;
 	int rc;
-	int refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	/* should never happen, similar to osc_ldlm_blocking_ast(). */
@@ -349,7 +349,7 @@ static int osc_lock_upcall_agl(void *cookie, struct lustre_handle *lockh,
 	struct osc_object *osc = cookie;
 	struct ldlm_lock *dlmlock;
 	struct lu_env *env;
-	int refcheck;
+	u16 refcheck;
 
 	env = cl_env_get(&refcheck);
 	LASSERT(!IS_ERR(env));
@@ -384,7 +384,7 @@ static int osc_lock_flush(struct osc_object *obj, pgoff_t start, pgoff_t end,
 			  enum cl_lock_mode mode, int discard)
 {
 	struct lu_env *env;
-	int refcheck;
+	u16 refcheck;
 	int rc = 0;
 	int rc2 = 0;
 
@@ -538,7 +538,7 @@ static int osc_ldlm_blocking_ast(struct ldlm_lock *dlmlock,
 	}
 	case LDLM_CB_CANCELING: {
 		struct lu_env *env;
-		int refcheck;
+		u16 refcheck;
 
 		/*
 		 * This can be called in the context of outer IO, e.g.,
@@ -575,7 +575,7 @@ static int osc_ldlm_glimpse_ast(struct ldlm_lock *dlmlock, void *data)
 	struct req_capsule *cap;
 	struct cl_object *obj = NULL;
 	int result;
-	int refcheck;
+	u16 refcheck;
 
 	LASSERT(lustre_msg_get_opc(req->rq_reqmsg) == LDLM_GL_CALLBACK);
 
@@ -686,7 +686,7 @@ unsigned long osc_ldlm_weigh_ast(struct ldlm_lock *dlmlock)
 	struct osc_lock		*oscl;
 	unsigned long            weight;
 	bool			 found = false;
-	int refcheck;
+	u16 refcheck;
 
 	might_sleep();
 	/*
diff --git a/drivers/staging/lustre/lustre/osc/osc_page.c b/drivers/staging/lustre/lustre/osc/osc_page.c
index 03ee340..ed8a0dc 100644
--- a/drivers/staging/lustre/lustre/osc/osc_page.c
+++ b/drivers/staging/lustre/lustre/osc/osc_page.c
@@ -681,7 +681,7 @@ static long osc_lru_reclaim(struct client_obd *cli, unsigned long npages)
 	struct lu_env *env;
 	struct cl_client_cache *cache = cli->cl_cache;
 	int max_scans;
-	int refcheck;
+	u16 refcheck;
 	long rc = 0;
 
 	LASSERT(cache);
@@ -1045,7 +1045,7 @@ unsigned long osc_cache_shrink_scan(struct shrinker *sk,
 	struct client_obd *cli;
 	struct lu_env *env;
 	long shrank = 0;
-	int refcheck;
+	u16 refcheck;
 	int rc;
 
 	if (!sc->nr_to_scan)
diff --git a/drivers/staging/lustre/lustre/osc/osc_request.c b/drivers/staging/lustre/lustre/osc/osc_request.c
index 8e22807..4cf0664 100644
--- a/drivers/staging/lustre/lustre/osc/osc_request.c
+++ b/drivers/staging/lustre/lustre/osc/osc_request.c
@@ -2538,7 +2538,7 @@ static int osc_import_event(struct obd_device *obd,
 	case IMP_EVENT_INVALIDATE: {
 		struct ldlm_namespace *ns = obd->obd_namespace;
 		struct lu_env *env;
-		int refcheck;
+		u16 refcheck;
 
 		ldlm_namespace_cleanup(ns, LDLM_FL_LOCAL_ONLY);
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 08/14] staging: lustre: llite: remove extraneous export parameter
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (6 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 07/14] staging: lustre: lov: trying smaller memory allocations James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 09/14] staging: lustre: ldlm: reduce ldlm pool recalc window James Simmons
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, James Simmons

From: Andreas Dilger <andreas.dilger@intel.com>

The ll_close_inode_openhandle() and ll_md_close() functions were passed
an extra "obd_export *md_exp" parameter, but it turns out that all of
the callers already pass inode->i_sb->s_fs_info->lsi_llsbi->ll_md_exp in
one form or another, so it can just be extracted from "inode" directly
as needed.
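
A minimal sketch of the idea (illustrative only; md_exp_of() is a
made-up name for this note, while ll_i2sbi() is the existing llite
accessor that the patch relies on via ll_i2mdexp()):

	/*
	 * Illustrative only: the MD export is already reachable from
	 * the inode, so callers no longer need to pass it explicitly.
	 */
	static inline struct obd_export *md_exp_of(struct inode *inode)
	{
		/* inode->i_sb->s_fs_info->lsi_llsbi->ll_md_exp */
		return ll_i2sbi(inode)->ll_md_exp;
	}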

Signed-off-by: Andreas Dilger <andreas.dilger@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6627
Reviewed-on: http://review.whamcloud.com/14953
Reviewed-by: Frank Zago <fzago@cray.com>
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/llite/file.c | 33 +++++++++++++-----------------
 1 file changed, 14 insertions(+), 19 deletions(-)

diff --git a/drivers/staging/lustre/lustre/llite/file.c b/drivers/staging/lustre/lustre/llite/file.c
index b1c9573..d8a5e70 100644
--- a/drivers/staging/lustre/lustre/llite/file.c
+++ b/drivers/staging/lustre/lustre/llite/file.c
@@ -116,13 +116,13 @@ static void ll_prepare_close(struct inode *inode, struct md_op_data *op_data,
  * If \a bias is MDS_CLOSE_LAYOUT_SWAP then \a data is a pointer to the inode to
  * swap layouts with.
  */
-static int ll_close_inode_openhandle(struct obd_export *md_exp,
+static int ll_close_inode_openhandle(struct inode *inode,
 				     struct obd_client_handle *och,
-				     struct inode *inode,
 				     enum mds_op_bias bias,
 				     void *data)
 {
 	const struct ll_inode_info *lli = ll_i2info(inode);
+	struct obd_export *md_exp = ll_i2mdexp(inode);
 	struct md_op_data *op_data;
 	struct ptlrpc_request *req = NULL;
 	int rc;
@@ -231,15 +231,13 @@ int ll_md_real_close(struct inode *inode, fmode_t fmode)
 		/* There might be a race and this handle may already
 		 * be closed.
 		 */
-		rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp,
-					       och, inode, 0, NULL);
+		rc = ll_close_inode_openhandle(inode, och, 0, NULL);
 	}
 
 	return rc;
 }
 
-static int ll_md_close(struct obd_export *md_exp, struct inode *inode,
-		       struct file *file)
+static int ll_md_close(struct inode *inode, struct file *file)
 {
 	struct ll_file_data *fd = LUSTRE_FPRIVATE(file);
 	struct ll_inode_info *lli = ll_i2info(inode);
@@ -270,8 +268,7 @@ static int ll_md_close(struct obd_export *md_exp, struct inode *inode,
 	}
 
 	if (fd->fd_och) {
-		rc = ll_close_inode_openhandle(md_exp, fd->fd_och, inode, 0,
-					       NULL);
+		rc = ll_close_inode_openhandle(inode, fd->fd_och, 0, NULL);
 		fd->fd_och = NULL;
 		goto out;
 	}
@@ -296,7 +293,7 @@ static int ll_md_close(struct obd_export *md_exp, struct inode *inode,
 	}
 	mutex_unlock(&lli->lli_och_mutex);
 
-	if (!md_lock_match(md_exp, flags, ll_inode2fid(inode),
+	if (!md_lock_match(ll_i2mdexp(inode), flags, ll_inode2fid(inode),
 			   LDLM_IBITS, &policy, lockmode, &lockh))
 		rc = ll_md_real_close(inode, fd->fd_omode);
 
@@ -345,7 +342,7 @@ int ll_file_release(struct inode *inode, struct file *file)
 		lli->lli_async_rc = 0;
 	}
 
-	rc = ll_md_close(sbi->ll_md_exp, inode, file);
+	rc = ll_md_close(inode, file);
 
 	if (CFS_FAIL_TIMEOUT_MS(OBD_FAIL_PTLRPC_DUMP_LOG, cfs_fail_val))
 		libcfs_debug_dumplog();
@@ -835,7 +832,7 @@ static int ll_md_blocking_lease_ast(struct ldlm_lock *lock,
 		it.it_lock_mode = 0;
 		och->och_lease_handle.cookie = 0ULL;
 	}
-	rc2 = ll_close_inode_openhandle(sbi->ll_md_exp, och, inode, 0, NULL);
+	rc2 = ll_close_inode_openhandle(inode, och, 0, NULL);
 	if (rc2 < 0)
 		CERROR("%s: error closing file "DFID": %d\n",
 		       ll_get_fsname(inode->i_sb, NULL, 0),
@@ -901,8 +898,8 @@ static int ll_swap_layouts_close(struct obd_client_handle *och,
 	 * NB: lease lock handle is released in mdc_close_layout_swap_pack()
 	 * because we still need it to pack l_remote_handle to MDT.
 	 */
-	rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp, och, inode,
-				       MDS_CLOSE_LAYOUT_SWAP, inode2);
+	rc = ll_close_inode_openhandle(inode, och, MDS_CLOSE_LAYOUT_SWAP,
+				       inode2);
 
 	och = NULL; /* freed in ll_close_inode_openhandle() */
 
@@ -937,8 +934,7 @@ static int ll_lease_close(struct obd_client_handle *och, struct inode *inode,
 	if (lease_broken)
 		*lease_broken = cancelled;
 
-	return ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp,
-					 och, inode, 0, NULL);
+	return ll_close_inode_openhandle(inode, och, 0, NULL);
 }
 
 int ll_merge_attr(const struct lu_env *env, struct inode *inode)
@@ -1494,8 +1490,7 @@ int ll_release_openhandle(struct inode *inode, struct lookup_intent *it)
 
 	ll_och_fill(ll_i2sbi(inode)->ll_md_exp, it, och);
 
-	rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp,
-				       och, inode, 0, NULL);
+	rc = ll_close_inode_openhandle(inode, och, 0, NULL);
 out:
 	/* this one is in place of ll_file_open */
 	if (it_disposition(it, DISP_ENQ_OPEN_REF)) {
@@ -1698,8 +1693,8 @@ int ll_hsm_release(struct inode *inode)
 	 * NB: lease lock handle is released in mdc_hsm_release_pack() because
 	 * we still need it to pack l_remote_handle to MDT.
 	 */
-	rc = ll_close_inode_openhandle(ll_i2sbi(inode)->ll_md_exp, och, inode,
-				       MDS_HSM_RELEASE, &data_version);
+	rc = ll_close_inode_openhandle(inode, och, MDS_HSM_RELEASE,
+				       &data_version);
 	och = NULL;
 
 out:
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 09/14] staging: lustre: ldlm: reduce ldlm pool recalc window
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (7 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 08/14] staging: lustre: llite: remove extraneous export parameter James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 10/14] staging: lustre: ldlm: disconnect speedup James Simmons
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Vitaly Fertman, James Simmons

From: Vitaly Fertman <vitaly_fertman@xyratex.com>

Reduce the sleep period from 50 seconds down to
LDLM_POOL_CLI_DEF_RECALC_PERIOD, which is 10 seconds.

Signed-off-by: Vitaly Fertman <vitaly_fertman@xyratex.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-3031
Xyratex-bug-id: MRP-395 MRP-1366
Reviewed-by: Andriy Skulysh <Andriy_Skulysh@xyratex.com>
Reviewed-by: Alexey Lyashkov <Alexey_Lyashkov@xyratex.com>
Reviewed-on: http://review.whamcloud.com/5843
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
index 8dfb3c8..13fbbed 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
@@ -900,8 +900,9 @@ static int ldlm_pools_recalc(enum ldlm_side client)
 {
 	struct ldlm_namespace *ns;
 	struct ldlm_namespace *ns_old = NULL;
+	/* seconds of sleep if no active namespaces */
+	int time = LDLM_POOL_CLI_DEF_RECALC_PERIOD;
 	int nr;
-	int time = 50; /* seconds of sleep if no active namespaces */
 
 	/*
 	 * Recalc at least ldlm_namespace_nr_read(client) namespaces.
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 10/14] staging: lustre: ldlm: disconnect speedup
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (8 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 09/14] staging: lustre: ldlm: reduce ldlm pool recalc window James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 11/14] staging: lustre: ldlm: fix race of starting bl threads James Simmons
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Vitaly Fertman, James Simmons

From: Vitaly Fertman <vitaly_fertman@xyratex.com>

Disconnect takes too long if there are many locks to cancel. Besides
the time spent on each lock cancel, there is a resched() in
cfs_hash_for_each_relax(), so a disconnect or eviction may take an
unexpectedly long time. While this patch only contains the client-side
fixes, the original fix covered changes to both the server and client
code to ensure proper disconnect handling. The changes made on both
the server and the client are detailed below so people can examine the
disconnect behavior with both source bases.

- do not cancel locks on disconnect_export;
- export will be left in obd_unlinked_exports list pinned by live
  locks;
- new re-connects will created other non-conflicting exports;
- new locks will cancel obsolete locks on conflicts;
- once all the locks on the disconnected export will be cancelled,
  the export will be destroyed on the last ref put;
- do not cancel in small portions, cancel all together in just 1
  dedicated thread - use server side blocking thread for that;
- cancel blocked locks first so that waiting locks could proceed;
- take care about blocked waiting locks, so that they would get
  cancelled quickly too;
- do not remove lock from waiting list on AST error before moving
  it to elt_expired_locks list, because it removes it from export
  list too; otherwise this blocked lock will not be cancelled
  immediately on failed export;
- cancel lock instead of just destroy for failed export, to make
  full cleanup, i.e. remove it from export list.

Signed-off-by: Vitaly Fertman <vitaly_fertman@xyratex.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-3031
Xyratex-bug-id: MRP-395 MRP-1366
Reviewed-by: Andriy Skulysh <Andriy_Skulysh@xyratex.com>
Reviewed-by: Alexey Lyashkov <Alexey_Lyashkov@xyratex.com>
Reviewed-on: http://review.whamcloud.com/5843
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/lustre_dlm.h |  11 +-
 .../staging/lustre/lustre/include/obd_support.h    |   1 +
 drivers/staging/lustre/lustre/ldlm/ldlm_internal.h |   5 +-
 drivers/staging/lustre/lustre/ldlm/ldlm_lock.c     |   1 -
 drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c    | 143 +++++++++++++--------
 drivers/staging/lustre/lustre/ldlm/ldlm_pool.c     |   4 +
 6 files changed, 101 insertions(+), 64 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/lustre_dlm.h b/drivers/staging/lustre/lustre/include/lustre_dlm.h
index b7e61d0..1e86fb5 100644
--- a/drivers/staging/lustre/lustre/include/lustre_dlm.h
+++ b/drivers/staging/lustre/lustre/include/lustre_dlm.h
@@ -812,13 +812,6 @@ struct ldlm_lock {
 	/** referenced export object */
 	struct obd_export	*l_exp_refs_target;
 #endif
-	/**
-	 * export blocking dlm lock list, protected by
-	 * l_export->exp_bl_list_lock.
-	 * Lock order of waiting_lists_spinlock, exp_bl_list_lock and res lock
-	 * is: res lock -> exp_bl_list_lock -> wanting_lists_spinlock.
-	 */
-	struct list_head		l_exp_list;
 };
 
 /**
@@ -1192,6 +1185,10 @@ struct ldlm_namespace *
 		   enum ldlm_side client, enum ldlm_appetite apt,
 		   enum ldlm_ns_type ns_type);
 int ldlm_namespace_cleanup(struct ldlm_namespace *ns, __u64 flags);
+void ldlm_namespace_free_prior(struct ldlm_namespace *ns,
+			       struct obd_import *imp,
+			       int force);
+void ldlm_namespace_free_post(struct ldlm_namespace *ns);
 void ldlm_namespace_get(struct ldlm_namespace *ns);
 void ldlm_namespace_put(struct ldlm_namespace *ns);
 int ldlm_debugfs_setup(void);
diff --git a/drivers/staging/lustre/lustre/include/obd_support.h b/drivers/staging/lustre/lustre/include/obd_support.h
index aaedec7..05a958a 100644
--- a/drivers/staging/lustre/lustre/include/obd_support.h
+++ b/drivers/staging/lustre/lustre/include/obd_support.h
@@ -316,6 +316,7 @@
 #define OBD_FAIL_LDLM_AGL_NOLOCK	 0x31b
 #define OBD_FAIL_LDLM_OST_LVB		 0x31c
 #define OBD_FAIL_LDLM_ENQUEUE_HANG	 0x31d
+#define OBD_FAIL_LDLM_PAUSE_CANCEL2	 0x31f
 #define OBD_FAIL_LDLM_CP_CB_WAIT2	 0x320
 #define OBD_FAIL_LDLM_CP_CB_WAIT3	 0x321
 #define OBD_FAIL_LDLM_CP_CB_WAIT4	 0x322
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
index 5c02501..5d24b48 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_internal.h
@@ -108,9 +108,7 @@ int ldlm_cancel_lru_local(struct ldlm_namespace *ns,
 
 /* ldlm_resource.c */
 int ldlm_resource_putref_locked(struct ldlm_resource *res);
-void ldlm_namespace_free_prior(struct ldlm_namespace *ns,
-			       struct obd_import *imp, int force);
-void ldlm_namespace_free_post(struct ldlm_namespace *ns);
+
 /* ldlm_lock.c */
 
 struct ldlm_cb_set_arg {
@@ -156,6 +154,7 @@ int ldlm_bl_to_thread_list(struct ldlm_namespace *ns,
 			   struct ldlm_lock_desc *ld,
 			   struct list_head *cancels, int count,
 			   enum ldlm_cancel_flags cancel_flags);
+int ldlm_bl_thread_wakeup(void);
 
 void ldlm_handle_bl_callback(struct ldlm_namespace *ns,
 			     struct ldlm_lock_desc *ld, struct ldlm_lock *lock);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
index 16c2a8b..ddb4642 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lock.c
@@ -435,7 +435,6 @@ static struct ldlm_lock *ldlm_lock_new(struct ldlm_resource *resource)
 	lock->l_exp_refs_nr = 0;
 	lock->l_exp_refs_target = NULL;
 #endif
-	INIT_LIST_HEAD(&lock->l_exp_list);
 
 	return lock;
 }
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
index 12647af..4c21b9b 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
@@ -454,6 +454,12 @@ int ldlm_bl_to_thread_list(struct ldlm_namespace *ns, struct ldlm_lock_desc *ld,
 	return ldlm_bl_to_thread(ns, ld, NULL, cancels, count, cancel_flags);
 }
 
+int ldlm_bl_thread_wakeup(void)
+{
+	wake_up(&ldlm_state->ldlm_bl_pool->blp_waitq);
+	return 0;
+}
+
 /* Setinfo coming from Server (eg MDT) to Client (eg MDC)! */
 static int ldlm_handle_setinfo(struct ptlrpc_request *req)
 {
@@ -675,8 +681,11 @@ static int ldlm_callback_handler(struct ptlrpc_request *req)
 	return 0;
 }
 
-static struct ldlm_bl_work_item *ldlm_bl_get_work(struct ldlm_bl_pool *blp)
+static int ldlm_bl_get_work(struct ldlm_bl_pool *blp,
+			    struct ldlm_bl_work_item **p_blwi,
+			    struct obd_export **p_exp)
 {
+	int num_th = atomic_read(&blp->blp_num_threads);
 	struct ldlm_bl_work_item *blwi = NULL;
 	static unsigned int num_bl;
 
@@ -693,13 +702,14 @@ static struct ldlm_bl_work_item *ldlm_bl_get_work(struct ldlm_bl_pool *blp)
 					  blwi_entry);
 
 	if (blwi) {
-		if (++num_bl >= atomic_read(&blp->blp_num_threads))
+		if (++num_bl >= num_th)
 			num_bl = 0;
 		list_del(&blwi->blwi_entry);
 	}
 	spin_unlock(&blp->blp_lock);
+	*p_blwi = blwi;
 
-	return blwi;
+	return (*p_blwi || *p_exp) ? 1 : 0;
 }
 
 /* This only contains temporary data until the thread starts */
@@ -732,6 +742,65 @@ static int ldlm_bl_thread_start(struct ldlm_bl_pool *blp)
 	return 0;
 }
 
+/* Not fatal if racy and have a few too many threads */
+static int ldlm_bl_thread_need_create(struct ldlm_bl_pool *blp,
+				      struct ldlm_bl_work_item *blwi)
+{
+	int busy = atomic_read(&blp->blp_busy_threads);
+
+	if (busy >= blp->blp_max_threads)
+		return 0;
+
+	if (busy < atomic_read(&blp->blp_num_threads))
+		return 0;
+
+	if (blwi && (!blwi->blwi_ns || blwi->blwi_mem_pressure))
+		return 0;
+
+	return 1;
+}
+
+static int ldlm_bl_thread_blwi(struct ldlm_bl_pool *blp,
+			       struct ldlm_bl_work_item *blwi)
+{
+	if (!blwi->blwi_ns)
+		/* added by ldlm_cleanup() */
+		return LDLM_ITER_STOP;
+
+	if (blwi->blwi_mem_pressure)
+		memory_pressure_set();
+
+	OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_PAUSE_CANCEL2, 4);
+
+	if (blwi->blwi_count) {
+		int count;
+
+		/*
+		 * The special case when we cancel locks in lru
+		 * asynchronously, we pass the list of locks here.
+		 * Thus locks are marked LDLM_FL_CANCELING, but NOT
+		 * canceled locally yet.
+		 */
+		count = ldlm_cli_cancel_list_local(&blwi->blwi_head,
+						   blwi->blwi_count,
+						   LCF_BL_AST);
+		ldlm_cli_cancel_list(&blwi->blwi_head, count, NULL,
+				     blwi->blwi_flags);
+	} else {
+		ldlm_handle_bl_callback(blwi->blwi_ns, &blwi->blwi_ld,
+					blwi->blwi_lock);
+	}
+	if (blwi->blwi_mem_pressure)
+		memory_pressure_clr();
+
+	if (blwi->blwi_flags & LCF_ASYNC)
+		kfree(blwi);
+	else
+		complete(&blwi->blwi_comp);
+
+	return 0;
+}
+
 /**
  * Main blocking requests processing thread.
  *
@@ -742,73 +811,41 @@ static int ldlm_bl_thread_start(struct ldlm_bl_pool *blp)
 static int ldlm_bl_thread_main(void *arg)
 {
 	struct ldlm_bl_pool *blp;
+	struct ldlm_bl_thread_data *bltd = arg;
 
-	{
-		struct ldlm_bl_thread_data *bltd = arg;
-
-		blp = bltd->bltd_blp;
+	blp = bltd->bltd_blp;
 
-		atomic_inc(&blp->blp_num_threads);
-		atomic_inc(&blp->blp_busy_threads);
+	atomic_inc(&blp->blp_num_threads);
+	atomic_inc(&blp->blp_busy_threads);
 
-		complete(&bltd->bltd_comp);
-		/* cannot use bltd after this, it is only on caller's stack */
-	}
+	complete(&bltd->bltd_comp);
+	/* cannot use bltd after this, it is only on caller's stack */
 
 	while (1) {
 		struct l_wait_info lwi = { 0 };
 		struct ldlm_bl_work_item *blwi = NULL;
-		int busy;
+		struct obd_export *exp = NULL;
+		int rc;
 
-		blwi = ldlm_bl_get_work(blp);
-
-		if (!blwi) {
+		rc = ldlm_bl_get_work(blp, &blwi, &exp);
+		if (!rc) {
 			atomic_dec(&blp->blp_busy_threads);
 			l_wait_event_exclusive(blp->blp_waitq,
-					       (blwi = ldlm_bl_get_work(blp)),
+					       ldlm_bl_get_work(blp, &blwi,
+								&exp),
 					       &lwi);
-			busy = atomic_inc_return(&blp->blp_busy_threads);
-		} else {
-			busy = atomic_read(&blp->blp_busy_threads);
+			atomic_inc(&blp->blp_busy_threads);
 		}
 
-		if (!blwi->blwi_ns)
-			/* added by ldlm_cleanup() */
-			break;
-
-		/* Not fatal if racy and have a few too many threads */
-		if (unlikely(busy < blp->blp_max_threads &&
-			     busy >= atomic_read(&blp->blp_num_threads) &&
-			     !blwi->blwi_mem_pressure))
+		if (ldlm_bl_thread_need_create(blp, blwi))
 			/* discard the return value, we tried */
 			ldlm_bl_thread_start(blp);
 
-		if (blwi->blwi_mem_pressure)
-			memory_pressure_set();
-
-		if (blwi->blwi_count) {
-			int count;
-			/* The special case when we cancel locks in LRU
-			 * asynchronously, we pass the list of locks here.
-			 * Thus locks are marked LDLM_FL_CANCELING, but NOT
-			 * canceled locally yet.
-			 */
-			count = ldlm_cli_cancel_list_local(&blwi->blwi_head,
-							   blwi->blwi_count,
-							   LCF_BL_AST);
-			ldlm_cli_cancel_list(&blwi->blwi_head, count, NULL,
-					     blwi->blwi_flags);
-		} else {
-			ldlm_handle_bl_callback(blwi->blwi_ns, &blwi->blwi_ld,
-						blwi->blwi_lock);
-		}
-		if (blwi->blwi_mem_pressure)
-			memory_pressure_clr();
+		if (blwi)
+			rc = ldlm_bl_thread_blwi(blp, blwi);
 
-		if (blwi->blwi_flags & LCF_ASYNC)
-			kfree(blwi);
-		else
-			complete(&blwi->blwi_comp);
+		if (rc == LDLM_ITER_STOP)
+			break;
 	}
 
 	atomic_dec(&blp->blp_busy_threads);
diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
index 13fbbed..cf3fc57 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_pool.c
@@ -975,6 +975,10 @@ static int ldlm_pools_recalc(enum ldlm_side client)
 			ldlm_namespace_put(ns);
 		}
 	}
+
+	/* Wake up the blocking threads from time to time. */
+	ldlm_bl_thread_wakeup();
+
 	return time;
 }
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 11/14] staging: lustre: ldlm: fix race of starting bl threads
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (9 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 10/14] staging: lustre: ldlm: disconnect speedup James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-18 21:47 ` [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex James Simmons
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, Niu Yawei,
	James Simmons

From: Niu Yawei <yawei.niu@intel.com>

There is a race in the code that starts the bl threads: when it is
hit, the number of threads can exceed the configured maximum and
duplicate thread names can be created. This patch fixes the race and
cleans up the code a bit.
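
A minimal sketch of the reservation pattern used to close the race
(illustrative only; "struct pool" and the names below are made up and
stand in for the ldlm_bl_pool fields touched by the diff):

	static int pool_thread_main(void *arg);

	/*
	 * Reserve a slot with atomic_inc_return() *before* spawning the
	 * thread and roll the reservation back on failure, so the count
	 * can never exceed the maximum and the slot number gives each
	 * thread a unique name.
	 */
	static int pool_thread_start(struct pool *p)
	{
		struct task_struct *task;
		int num = atomic_inc_return(&p->num_threads);

		if (num > p->max_threads) {
			atomic_dec(&p->num_threads);
			return 0;	/* lost the race, enough threads exist */
		}

		task = kthread_run(pool_thread_main, p, "pool_%02d", num);
		if (IS_ERR(task)) {
			atomic_dec(&p->num_threads);
			return PTR_ERR(task);
		}

		return 0;
	}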

Signed-off-by: Niu Yawei <yawei.niu@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-7330
Reviewed-on: http://review.whamcloud.com/17026
Reviewed-by: Bobi Jam <bobijam@hotmail.com>
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c | 49 ++++++++++++++-----------
 1 file changed, 28 insertions(+), 21 deletions(-)

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
index 4c21b9b..6f9d540 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c
@@ -714,7 +714,6 @@ static int ldlm_bl_get_work(struct ldlm_bl_pool *blp,
 
 /* This only contains temporary data until the thread starts */
 struct ldlm_bl_thread_data {
-	char			bltd_name[CFS_CURPROC_COMM_MAX];
 	struct ldlm_bl_pool	*bltd_blp;
 	struct completion	bltd_comp;
 	int			bltd_num;
@@ -722,19 +721,32 @@ struct ldlm_bl_thread_data {
 
 static int ldlm_bl_thread_main(void *arg);
 
-static int ldlm_bl_thread_start(struct ldlm_bl_pool *blp)
+static int ldlm_bl_thread_start(struct ldlm_bl_pool *blp, bool check_busy)
 {
 	struct ldlm_bl_thread_data bltd = { .bltd_blp = blp };
 	struct task_struct *task;
 
 	init_completion(&bltd.bltd_comp);
-	bltd.bltd_num = atomic_read(&blp->blp_num_threads);
-	snprintf(bltd.bltd_name, sizeof(bltd.bltd_name),
-		 "ldlm_bl_%02d", bltd.bltd_num);
-	task = kthread_run(ldlm_bl_thread_main, &bltd, "%s", bltd.bltd_name);
+
+	bltd.bltd_num = atomic_inc_return(&blp->blp_num_threads);
+	if (bltd.bltd_num >= blp->blp_max_threads) {
+		atomic_dec(&blp->blp_num_threads);
+		return 0;
+	}
+
+	LASSERTF(bltd.bltd_num > 0, "thread num:%d\n", bltd.bltd_num);
+	if (check_busy &&
+	    atomic_read(&blp->blp_busy_threads) < (bltd.bltd_num - 1)) {
+		atomic_dec(&blp->blp_num_threads);
+		return 0;
+	}
+
+	task = kthread_run(ldlm_bl_thread_main, &bltd, "ldlm_bl_%02d",
+			   bltd.bltd_num);
 	if (IS_ERR(task)) {
 		CERROR("cannot start LDLM thread ldlm_bl_%02d: rc %ld\n",
-		       atomic_read(&blp->blp_num_threads), PTR_ERR(task));
+		       bltd.bltd_num, PTR_ERR(task));
+		atomic_dec(&blp->blp_num_threads);
 		return PTR_ERR(task);
 	}
 	wait_for_completion(&bltd.bltd_comp);
@@ -746,12 +758,11 @@ static int ldlm_bl_thread_start(struct ldlm_bl_pool *blp)
 static int ldlm_bl_thread_need_create(struct ldlm_bl_pool *blp,
 				      struct ldlm_bl_work_item *blwi)
 {
-	int busy = atomic_read(&blp->blp_busy_threads);
-
-	if (busy >= blp->blp_max_threads)
+	if (atomic_read(&blp->blp_num_threads) >= blp->blp_max_threads)
 		return 0;
 
-	if (busy < atomic_read(&blp->blp_num_threads))
+	if (atomic_read(&blp->blp_busy_threads) <
+	    atomic_read(&blp->blp_num_threads))
 		return 0;
 
 	if (blwi && (!blwi->blwi_ns || blwi->blwi_mem_pressure))
@@ -815,9 +826,6 @@ static int ldlm_bl_thread_main(void *arg)
 
 	blp = bltd->bltd_blp;
 
-	atomic_inc(&blp->blp_num_threads);
-	atomic_inc(&blp->blp_busy_threads);
-
 	complete(&bltd->bltd_comp);
 	/* cannot use bltd after this, it is only on caller's stack */
 
@@ -828,27 +836,26 @@ static int ldlm_bl_thread_main(void *arg)
 		int rc;
 
 		rc = ldlm_bl_get_work(blp, &blwi, &exp);
-		if (!rc) {
-			atomic_dec(&blp->blp_busy_threads);
+		if (!rc)
 			l_wait_event_exclusive(blp->blp_waitq,
 					       ldlm_bl_get_work(blp, &blwi,
 								&exp),
 					       &lwi);
-			atomic_inc(&blp->blp_busy_threads);
-		}
+		atomic_inc(&blp->blp_busy_threads);
 
 		if (ldlm_bl_thread_need_create(blp, blwi))
 			/* discard the return value, we tried */
-			ldlm_bl_thread_start(blp);
+			ldlm_bl_thread_start(blp, true);
 
 		if (blwi)
 			rc = ldlm_bl_thread_blwi(blp, blwi);
 
+		atomic_dec(&blp->blp_busy_threads);
+
 		if (rc == LDLM_ITER_STOP)
 			break;
 	}
 
-	atomic_dec(&blp->blp_busy_threads);
 	atomic_dec(&blp->blp_num_threads);
 	complete(&blp->blp_comp);
 	return 0;
@@ -1028,7 +1035,7 @@ static int ldlm_setup(void)
 	}
 
 	for (i = 0; i < blp->blp_min_threads; i++) {
-		rc = ldlm_bl_thread_start(blp);
+		rc = ldlm_bl_thread_start(blp, false);
 		if (rc < 0)
 			goto out;
 	}
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (10 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 11/14] staging: lustre: ldlm: fix race of starting bl threads James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-24 16:58   ` Greg Kroah-Hartman
  2017-02-18 21:47 ` [PATCH 13/14] staging: lustre: llog: limit file size of plain logs James Simmons
  2017-02-18 21:47 ` [PATCH 14/14] staging: lustre: lprocfs: move lprocfs_stats_[un]lock to a source file James Simmons
  13 siblings, 1 reply; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List, wang di,
	James Simmons

From: wang di <di.wang@intel.com>

Change lgh_hdr_lock from a spinlock to a mutex because, if the llog
object is a remote object, the lock holder can be stalled (i.e. may
need to sleep) while the object is being fetched.
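
Since the lock holder may now sleep, a spinlock cannot be used. A
minimal sketch of the constraint (illustrative only;
fetch_remote_llog_header() is a made-up stand-in for whatever path
brings the header in from the remote target):

	mutex_lock(&loghandle->lgh_hdr_mutex);
	rc = fetch_remote_llog_header(loghandle);  /* may sleep on an RPC */
	if (!rc)
		loghandle->lgh_hdr->llh_count++;   /* lgh_hdr data protected here */
	mutex_unlock(&loghandle->lgh_hdr_mutex);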

Signed-off-by: wang di <di.wang@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6602
Reviewed-on: http://review.whamcloud.com/15274
Reviewed-by: James Simmons <uja.ornl@yahoo.com>
Reviewed-by: Lai Siyao <lai.siyao@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/include/lustre_log.h | 2 +-
 drivers/staging/lustre/lustre/obdclass/llog.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/lustre_log.h b/drivers/staging/lustre/lustre/include/lustre_log.h
index 35e37eb..33f56ff 100644
--- a/drivers/staging/lustre/lustre/include/lustre_log.h
+++ b/drivers/staging/lustre/lustre/include/lustre_log.h
@@ -211,7 +211,7 @@ struct llog_operations {
 /* In-memory descriptor for a log object or log catalog */
 struct llog_handle {
 	struct rw_semaphore	 lgh_lock;
-	spinlock_t		 lgh_hdr_lock; /* protect lgh_hdr data */
+	struct mutex		 lgh_hdr_mutex; /* protect lgh_hdr data */
 	struct llog_logid	 lgh_id; /* id of this log */
 	struct llog_log_hdr	*lgh_hdr;
 	size_t			 lgh_hdr_size;
diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
index 736ea10..83c5b62 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog.c
@@ -61,7 +61,7 @@ static struct llog_handle *llog_alloc_handle(void)
 		return NULL;
 
 	init_rwsem(&loghandle->lgh_lock);
-	spin_lock_init(&loghandle->lgh_hdr_lock);
+	mutex_init(&loghandle->lgh_hdr_mutex);
 	INIT_LIST_HEAD(&loghandle->u.phd.phd_entry);
 	atomic_set(&loghandle->lgh_refcount, 1);
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 13/14] staging: lustre: llog: limit file size of plain logs
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (11 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex James Simmons
@ 2017-02-18 21:47 ` James Simmons
  2017-02-24 16:59   ` Greg Kroah-Hartman
  2017-02-18 21:47 ` [PATCH 14/14] staging: lustre: lprocfs: move lprocfs_stats_[un]lock to a source file James Simmons
  13 siblings, 1 reply; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	Alex Zhuravlev, James Simmons

From: Alex Zhuravlev <alexey.zhuravlev@intel.com>

On small filesystems a plain log can grow dramatically, especially
given the large record sizes produced by DNE and the extended
chunksize. I saw >50% of space consumed by a single llog file which
was still in use; this leads to test failures (sanityn, etc.).
The patch introduces an additional limit on plain llog size, which is
calculated as <free space>/64 (128 MB at most) at llog creation time.
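
A sketch of the limit calculation described above (illustrative only;
the code enforcing it lives in the server-side llog code, not in the
client diff below):

	/* Cap the plain llog at <free space>/64, never more than 128 MB,
	 * computed once when the llog file is created.
	 */
	static u64 llog_size_limit(u64 free_bytes)
	{
		u64 limit = free_bytes >> 6;		/* free space / 64 */

		return min_t(u64, limit, 128ULL << 20);	/* 128 MB cap */
	}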

Signed-off-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6838
Reviewed-on: https://review.whamcloud.com/18028
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: wangdi <di.wang@intel.com>
Reviewed-by: Mike Pershin <mike.pershin@intel.com>
Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 drivers/staging/lustre/lustre/obdclass/llog.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
index 83c5b62..320ff6b 100644
--- a/drivers/staging/lustre/lustre/obdclass/llog.c
+++ b/drivers/staging/lustre/lustre/obdclass/llog.c
@@ -319,10 +319,26 @@ static int llog_process_thread(void *arg)
 				 * the case and re-read the current chunk
 				 * otherwise.
 				 */
+				int records;
+
 				if (index > loghandle->lgh_last_idx) {
 					rc = 0;
 					goto out;
 				}
+				/* <2 records means no more records
+				 * if the last record we processed was
+				 * the final one, then the underlying
+				 * object might have been destroyed yet.
+				 * we better don't access that..
+				 */
+				mutex_lock(&loghandle->lgh_hdr_mutex);
+				records = loghandle->lgh_hdr->llh_count;
+				mutex_unlock(&loghandle->lgh_hdr_mutex);
+				if (records <= 1) {
+					rc = 0;
+					goto out;
+				}
+
 				CDEBUG(D_OTHER, "Re-read last llog buffer for new records, index %u, last %u\n",
 				       index, loghandle->lgh_last_idx);
 				/* save offset inside buffer for the re-read */
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 14/14] staging: lustre: lprocfs: move lprocfs_stats_[un]lock to a source file
  2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
                   ` (12 preceding siblings ...)
  2017-02-18 21:47 ` [PATCH 13/14] staging: lustre: llog: limit file size of plain logs James Simmons
@ 2017-02-18 21:47 ` James Simmons
  13 siblings, 0 replies; 19+ messages in thread
From: James Simmons @ 2017-02-18 21:47 UTC (permalink / raw)
  To: Greg Kroah-Hartman, devel, Andreas Dilger, Oleg Drokin
  Cc: Linux Kernel Mailing List, Lustre Development List,
	James Simmons, James Simmons

When compiling the kernel without optimization, as is done when using
GCOV, the lprocfs_stats_alloc_one() symbol is not properly exported to
other modules, which causes the ptlrpc module to fail to load with an
unknown symbol. There is no reason to export lprocfs_stats_alloc_one();
it is only needed outside of obdclass because the lprocfs_stats_[un]lock
functions are inline functions in a header file. Let's untangle this
mess and turn those inline functions into real functions in a source
file.
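
The general shape of the problem and the fix, sketched with made-up
names (foo_lock()/foo_alloc_one() are illustrative only; the real
functions are in the diff below):

	/* foo.h -- problem pattern: an inline helper in a header */
	int foo_alloc_one(struct foo *f, unsigned int cpu);	/* defined in foo.c */

	static inline int foo_lock(struct foo *f)
	{
		/* when not inlined away (e.g. -O0 for GCOV), every module
		 * calling foo_lock() emits its own copy and needs
		 * foo_alloc_one() exported to resolve this call at load time
		 */
		return foo_alloc_one(f, get_cpu());
	}

	/* fix: keep only the declaration of foo_lock() in foo.h and move
	 * its body into foo.c next to foo_alloc_one(), so only foo_lock()
	 * itself has to be visible to other modules.
	 */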

Signed-off-by: James Simmons <uja.ornl@yahoo.com>
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-8836
Reviewed-on: https://review.whamcloud.com/23773
Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
Reviewed-by: Dmitry Eremin <dmitry.eremin@intel.com>
Reviewed-by: John L. Hammond <john.hammond@intel.com>
Signed-off-by: James Simmons <jsimmons@infradead.org>
---
 .../staging/lustre/lustre/include/lprocfs_status.h | 120 ++-------------------
 .../lustre/lustre/obdclass/lprocfs_status.c        | 111 +++++++++++++++++++
 2 files changed, 121 insertions(+), 110 deletions(-)

diff --git a/drivers/staging/lustre/lustre/include/lprocfs_status.h b/drivers/staging/lustre/lustre/include/lprocfs_status.h
index 62753da..242abb8 100644
--- a/drivers/staging/lustre/lustre/include/lprocfs_status.h
+++ b/drivers/staging/lustre/lustre/include/lprocfs_status.h
@@ -374,94 +374,15 @@ int lprocfs_write_frac_helper(const char __user *buffer,
 			      unsigned long count, int *val, int mult);
 int lprocfs_read_frac_helper(char *buffer, unsigned long count,
 			     long val, int mult);
-int lprocfs_stats_alloc_one(struct lprocfs_stats *stats, unsigned int cpuid);
 
-/**
- * Lock statistics structure for access, possibly only on this CPU.
- *
- * The statistics struct may be allocated with per-CPU structures for
- * efficient concurrent update (usually only on server-wide stats), or
- * as a single global struct (e.g. for per-client or per-job statistics),
- * so the required locking depends on the type of structure allocated.
- *
- * For per-CPU statistics, pin the thread to the current cpuid so that
- * will only access the statistics for that CPU.  If the stats structure
- * for the current CPU has not been allocated (or previously freed),
- * allocate it now.  The per-CPU statistics do not need locking since
- * the thread is pinned to the CPU during update.
- *
- * For global statistics, lock the stats structure to prevent concurrent update.
- *
- * \param[in] stats	statistics structure to lock
- * \param[in] opc	type of operation:
- *			LPROCFS_GET_SMP_ID: "lock" and return current CPU index
- *				for incrementing statistics for that CPU
- *			LPROCFS_GET_NUM_CPU: "lock" and return number of used
- *				CPU indices to iterate over all indices
- * \param[out] flags	CPU interrupt saved state for IRQ-safe locking
- *
- * \retval cpuid of current thread or number of allocated structs
- * \retval negative on error (only for opc LPROCFS_GET_SMP_ID + per-CPU stats)
- */
-static inline int lprocfs_stats_lock(struct lprocfs_stats *stats,
-				     enum lprocfs_stats_lock_ops opc,
-				     unsigned long *flags)
-{
-	if (stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) {
-		if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
-			spin_lock_irqsave(&stats->ls_lock, *flags);
-		else
-			spin_lock(&stats->ls_lock);
-		return opc == LPROCFS_GET_NUM_CPU ? 1 : 0;
-	}
-
-	switch (opc) {
-	case LPROCFS_GET_SMP_ID: {
-		unsigned int cpuid = get_cpu();
-
-		if (unlikely(!stats->ls_percpu[cpuid])) {
-			int rc = lprocfs_stats_alloc_one(stats, cpuid);
-
-			if (rc < 0) {
-				put_cpu();
-				return rc;
-			}
-		}
-		return cpuid;
-	}
-	case LPROCFS_GET_NUM_CPU:
-		return stats->ls_biggest_alloc_num;
-	default:
-		LBUG();
-	}
-}
-
-/**
- * Unlock statistics structure after access.
- *
- * Unlock the lock acquired via lprocfs_stats_lock() for global statistics,
- * or unpin this thread from the current cpuid for per-CPU statistics.
- *
- * This function must be called using the same arguments as used when calling
- * lprocfs_stats_lock() so that the correct operation can be performed.
- *
- * \param[in] stats	statistics structure to unlock
- * \param[in] opc	type of operation (current cpuid or number of structs)
- * \param[in] flags	CPU interrupt saved state for IRQ-safe locking
- */
-static inline void lprocfs_stats_unlock(struct lprocfs_stats *stats,
-					enum lprocfs_stats_lock_ops opc,
-					unsigned long *flags)
-{
-	if (stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) {
-		if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
-			spin_unlock_irqrestore(&stats->ls_lock, *flags);
-		else
-			spin_unlock(&stats->ls_lock);
-	} else if (opc == LPROCFS_GET_SMP_ID) {
-		put_cpu();
-	}
-}
+int lprocfs_stats_alloc_one(struct lprocfs_stats *stats,
+			    unsigned int cpuid);
+int lprocfs_stats_lock(struct lprocfs_stats *stats,
+		       enum lprocfs_stats_lock_ops opc,
+		       unsigned long *flags);
+void lprocfs_stats_unlock(struct lprocfs_stats *stats,
+			  enum lprocfs_stats_lock_ops opc,
+			  unsigned long *flags);
 
 static inline unsigned int
 lprocfs_stats_counter_size(struct lprocfs_stats *stats)
@@ -513,29 +434,8 @@ __s64 lprocfs_read_helper(struct lprocfs_counter *lc,
 			  struct lprocfs_counter_header *header,
 			  enum lprocfs_stats_flags flags,
 			  enum lprocfs_fields_flags field);
-static inline __u64 lprocfs_stats_collector(struct lprocfs_stats *stats,
-					    int idx,
-					    enum lprocfs_fields_flags field)
-{
-	unsigned int i;
-	unsigned int  num_cpu;
-	unsigned long flags	= 0;
-	__u64	      ret	= 0;
-
-	LASSERT(stats);
-
-	num_cpu = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
-	for (i = 0; i < num_cpu; i++) {
-		if (!stats->ls_percpu[i])
-			continue;
-		ret += lprocfs_read_helper(
-				lprocfs_stats_counter_get(stats, i, idx),
-				&stats->ls_cnt_header[idx], stats->ls_flags,
-				field);
-	}
-	lprocfs_stats_unlock(stats, LPROCFS_GET_NUM_CPU, &flags);
-	return ret;
-}
+__u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
+			      enum lprocfs_fields_flags field);
 
 extern struct lprocfs_stats *
 lprocfs_alloc_stats(unsigned int num, enum lprocfs_stats_flags flags);
diff --git a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
index 2c99717..1ec6e37 100644
--- a/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
+++ b/drivers/staging/lustre/lustre/obdclass/lprocfs_status.c
@@ -598,6 +598,93 @@ int lprocfs_rd_conn_uuid(struct seq_file *m, void *data)
 }
 EXPORT_SYMBOL(lprocfs_rd_conn_uuid);
 
+/**
+ * Lock statistics structure for access, possibly only on this CPU.
+ *
+ * The statistics struct may be allocated with per-CPU structures for
+ * efficient concurrent update (usually only on server-wide stats), or
+ * as a single global struct (e.g. for per-client or per-job statistics),
+ * so the required locking depends on the type of structure allocated.
+ *
+ * For per-CPU statistics, pin the thread to the current cpuid so that
+ * will only access the statistics for that CPU.  If the stats structure
+ * for the current CPU has not been allocated (or previously freed),
+ * allocate it now.  The per-CPU statistics do not need locking since
+ * the thread is pinned to the CPU during update.
+ *
+ * For global statistics, lock the stats structure to prevent concurrent update.
+ *
+ * \param[in] stats    statistics structure to lock
+ * \param[in] opc      type of operation:
+ *                     LPROCFS_GET_SMP_ID: "lock" and return current CPU index
+ *                             for incrementing statistics for that CPU
+ *                     LPROCFS_GET_NUM_CPU: "lock" and return number of used
+ *                             CPU indices to iterate over all indices
+ * \param[out] flags   CPU interrupt saved state for IRQ-safe locking
+ *
+ * \retval cpuid of current thread or number of allocated structs
+ * \retval negative on error (only for opc LPROCFS_GET_SMP_ID + per-CPU stats)
+ */
+int lprocfs_stats_lock(struct lprocfs_stats *stats,
+		       enum lprocfs_stats_lock_ops opc,
+		       unsigned long *flags)
+{
+	if (stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) {
+		if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
+			spin_lock_irqsave(&stats->ls_lock, *flags);
+		else
+			spin_lock(&stats->ls_lock);
+		return opc == LPROCFS_GET_NUM_CPU ? 1 : 0;
+	}
+
+	switch (opc) {
+	case LPROCFS_GET_SMP_ID: {
+		unsigned int cpuid = get_cpu();
+
+		if (unlikely(!stats->ls_percpu[cpuid])) {
+			int rc = lprocfs_stats_alloc_one(stats, cpuid);
+
+			if (rc < 0) {
+				put_cpu();
+				return rc;
+			}
+		}
+		return cpuid;
+	}
+	case LPROCFS_GET_NUM_CPU:
+		return stats->ls_biggest_alloc_num;
+	default:
+		LBUG();
+	}
+}
+
+/**
+ * Unlock statistics structure after access.
+ *
+ * Unlock the lock acquired via lprocfs_stats_lock() for global statistics,
+ * or unpin this thread from the current cpuid for per-CPU statistics.
+ *
+ * This function must be called using the same arguments as used when calling
+ * lprocfs_stats_lock() so that the correct operation can be performed.
+ *
+ * \param[in] stats    statistics structure to unlock
+ * \param[in] opc      type of operation (current cpuid or number of structs)
+ * \param[in] flags    CPU interrupt saved state for IRQ-safe locking
+ */
+void lprocfs_stats_unlock(struct lprocfs_stats *stats,
+			  enum lprocfs_stats_lock_ops opc,
+			  unsigned long *flags)
+{
+	if (stats->ls_flags & LPROCFS_STATS_FLAG_NOPERCPU) {
+		if (stats->ls_flags & LPROCFS_STATS_FLAG_IRQ_SAFE)
+			spin_unlock_irqrestore(&stats->ls_lock, *flags);
+		else
+			spin_unlock(&stats->ls_lock);
+	} else if (opc == LPROCFS_GET_SMP_ID) {
+		put_cpu();
+	}
+}
+
 /** add up per-cpu counters */
 void lprocfs_stats_collect(struct lprocfs_stats *stats, int idx,
 			   struct lprocfs_counter *cnt)
@@ -1146,6 +1233,30 @@ void lprocfs_free_stats(struct lprocfs_stats **statsh)
 }
 EXPORT_SYMBOL(lprocfs_free_stats);
 
+__u64 lprocfs_stats_collector(struct lprocfs_stats *stats, int idx,
+			      enum lprocfs_fields_flags field)
+{
+	unsigned int i;
+	unsigned int  num_cpu;
+	unsigned long flags     = 0;
+	__u64         ret       = 0;
+
+	LASSERT(stats);
+
+	num_cpu = lprocfs_stats_lock(stats, LPROCFS_GET_NUM_CPU, &flags);
+	for (i = 0; i < num_cpu; i++) {
+		if (!stats->ls_percpu[i])
+			continue;
+		ret += lprocfs_read_helper(
+				lprocfs_stats_counter_get(stats, i, idx),
+				&stats->ls_cnt_header[idx], stats->ls_flags,
+				field);
+	}
+	lprocfs_stats_unlock(stats, LPROCFS_GET_NUM_CPU, &flags);
+	return ret;
+}
+EXPORT_SYMBOL(lprocfs_stats_collector);
+
 void lprocfs_clear_stats(struct lprocfs_stats *stats)
 {
 	struct lprocfs_counter		*percpu_cntr;
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex
  2017-02-18 21:47 ` [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex James Simmons
@ 2017-02-24 16:58   ` Greg Kroah-Hartman
  0 siblings, 0 replies; 19+ messages in thread
From: Greg Kroah-Hartman @ 2017-02-24 16:58 UTC (permalink / raw)
  To: James Simmons
  Cc: devel, Andreas Dilger, Oleg Drokin, wang di,
	Linux Kernel Mailing List, Lustre Development List

On Sat, Feb 18, 2017 at 04:47:13PM -0500, James Simmons wrote:
> From: wang di <di.wang@intel.com>
> 
> Change lgh_hdr_lock from spinlock to mutex because if
> the llog object is a remote object it can be stalled
> while being fetched.

but this lock is never even used!  Why have it at all?

> 
> Signed-off-by: wang di <di.wang@intel.com>
> Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6602
> Reviewed-on: http://review.whamcloud.com/15274
> Reviewed-by: James Simmons <uja.ornl@yahoo.com>
> Reviewed-by: Lai Siyao <lai.siyao@intel.com>
> Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
> Signed-off-by: James Simmons <jsimmons@infradead.org>
> ---
>  drivers/staging/lustre/lustre/include/lustre_log.h | 2 +-
>  drivers/staging/lustre/lustre/obdclass/llog.c      | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/staging/lustre/lustre/include/lustre_log.h b/drivers/staging/lustre/lustre/include/lustre_log.h
> index 35e37eb..33f56ff 100644
> --- a/drivers/staging/lustre/lustre/include/lustre_log.h
> +++ b/drivers/staging/lustre/lustre/include/lustre_log.h
> @@ -211,7 +211,7 @@ struct llog_operations {
>  /* In-memory descriptor for a log object or log catalog */
>  struct llog_handle {
>  	struct rw_semaphore	 lgh_lock;
> -	spinlock_t		 lgh_hdr_lock; /* protect lgh_hdr data */
> +	struct mutex		 lgh_hdr_mutex; /* protect lgh_hdr data */
>  	struct llog_logid	 lgh_id; /* id of this log */
>  	struct llog_log_hdr	*lgh_hdr;
>  	size_t			 lgh_hdr_size;
> diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
> index 736ea10..83c5b62 100644
> --- a/drivers/staging/lustre/lustre/obdclass/llog.c
> +++ b/drivers/staging/lustre/lustre/obdclass/llog.c
> @@ -61,7 +61,7 @@ static struct llog_handle *llog_alloc_handle(void)
>  		return NULL;
>  
>  	init_rwsem(&loghandle->lgh_lock);
> -	spin_lock_init(&loghandle->lgh_hdr_lock);
> +	mutex_init(&loghandle->lgh_hdr_mutex);

Can't we delete it?

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 13/14] staging: lustre: llog: limit file size of plain logs
  2017-02-18 21:47 ` [PATCH 13/14] staging: lustre: llog: limit file size of plain logs James Simmons
@ 2017-02-24 16:59   ` Greg Kroah-Hartman
  2017-02-25  3:50     ` Oleg Drokin
  2017-02-25  4:04     ` Oleg Drokin
  0 siblings, 2 replies; 19+ messages in thread
From: Greg Kroah-Hartman @ 2017-02-24 16:59 UTC (permalink / raw)
  To: James Simmons
  Cc: devel, Andreas Dilger, Oleg Drokin, Alex Zhuravlev,
	Linux Kernel Mailing List, Lustre Development List

On Sat, Feb 18, 2017 at 04:47:14PM -0500, James Simmons wrote:
> From: Alex Zhuravlev <alexey.zhuravlev@intel.com>
> 
> on small filesystems plain log can grow dramatically. especially
> given large record sizes produced by DNE and extended chunksize.
> I saw >50% of space consumed by a single llog file which was still
> in use. this leads to test failures (sanityn, etc).
> the patch introduces additional limit on plain llog size, which
> is calculated as <free space>/64 (128MB at most) at llog creation
> time.
> 
> Signed-off-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
> Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6838
> Reviewed-on: https://review.whamcloud.com/18028
> Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
> Reviewed-by: wangdi <di.wang@intel.com>
> Reviewed-by: Mike Pershin <mike.pershin@intel.com>
> Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
> Signed-off-by: James Simmons <jsimmons@infradead.org>
> ---
>  drivers/staging/lustre/lustre/obdclass/llog.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
> index 83c5b62..320ff6b 100644
> --- a/drivers/staging/lustre/lustre/obdclass/llog.c
> +++ b/drivers/staging/lustre/lustre/obdclass/llog.c
> @@ -319,10 +319,26 @@ static int llog_process_thread(void *arg)
>  				 * the case and re-read the current chunk
>  				 * otherwise.
>  				 */
> +				int records;
> +
>  				if (index > loghandle->lgh_last_idx) {
>  					rc = 0;
>  					goto out;
>  				}
> +				/* <2 records means no more records
> +				 * if the last record we processed was
> +				 * the final one, then the underlying
> +				 * object might have been destroyed yet.
> +				 * we better don't access that..
> +				 */
> +				mutex_lock(&loghandle->lgh_hdr_mutex);
> +				records = loghandle->lgh_hdr->llh_count;
> +				mutex_unlock(&loghandle->lgh_hdr_mutex);
> +				if (records <= 1) {
> +					rc = 0;
> +					goto out;
> +				}


So you now use the lock, in only one place, when reading a single value?
That makes no sense, it's obviously wrong, or not needed.

Please fix up these two patches...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 13/14] staging: lustre: llog: limit file size of plain logs
  2017-02-24 16:59   ` Greg Kroah-Hartman
@ 2017-02-25  3:50     ` Oleg Drokin
  2017-02-25  4:04     ` Oleg Drokin
  1 sibling, 0 replies; 19+ messages in thread
From: Oleg Drokin @ 2017-02-25  3:50 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: James Simmons, devel, Andreas Dilger, Alex Zhuravlev,
	Linux Kernel Mailing List, Lustre Development List


On Feb 24, 2017, at 11:59 AM, Greg Kroah-Hartman wrote:

> On Sat, Feb 18, 2017 at 04:47:14PM -0500, James Simmons wrote:
>> From: Alex Zhuravlev <alexey.zhuravlev@intel.com>
>> 
>> on small filesystems plain log can grow dramatically. especially
>> given large record sizes produced by DNE and extended chunksize.
>> I saw >50% of space consumed by a single llog file which was still
>> in use. this leads to test failures (sanityn, etc).
>> the patch introduces additional limit on plain llog size, which
>> is calculated as <free space>/64 (128MB at most) at llog creation
>> time.
>> 
>> Signed-off-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
>> Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6838
>> Reviewed-on: https://review.whamcloud.com/18028
>> Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
>> Reviewed-by: wangdi <di.wang@intel.com>
>> Reviewed-by: Mike Pershin <mike.pershin@intel.com>
>> Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
>> Signed-off-by: James Simmons <jsimmons@infradead.org>
>> ---
>> drivers/staging/lustre/lustre/obdclass/llog.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>> 
>> diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
>> index 83c5b62..320ff6b 100644
>> --- a/drivers/staging/lustre/lustre/obdclass/llog.c
>> +++ b/drivers/staging/lustre/lustre/obdclass/llog.c
>> @@ -319,10 +319,26 @@ static int llog_process_thread(void *arg)
>> 				 * the case and re-read the current chunk
>> 				 * otherwise.
>> 				 */
>> +				int records;
>> +
>> 				if (index > loghandle->lgh_last_idx) {
>> 					rc = 0;
>> 					goto out;
>> 				}
>> +				/* <2 records means no more records
>> +				 * if the last record we processed was
>> +				 * the final one, then the underlying
>> +				 * object might have been destroyed yet.
>> +				 * we better don't access that..
>> +				 */
>> +				mutex_lock(&loghandle->lgh_hdr_mutex);
>> +				records = loghandle->lgh_hdr->llh_count;
>> +				mutex_unlock(&loghandle->lgh_hdr_mutex);
>> +				if (records <= 1) {
>> +					rc = 0;
>> +					goto out;
>> +				}
> 
> 
> So you now use the lock, in only one place, when reading a single value?
> That makes no sense, it's obviously wrong, or not needed.
> 
> Please fix up these two patches…

Ah, this is in fact server-side fix, so all the other users were in the
parts not really present in the client.
James, we don't really need this patch in the client, I guess.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH 13/14] staging: lustre: llog: limit file size of plain logs
  2017-02-24 16:59   ` Greg Kroah-Hartman
  2017-02-25  3:50     ` Oleg Drokin
@ 2017-02-25  4:04     ` Oleg Drokin
  1 sibling, 0 replies; 19+ messages in thread
From: Oleg Drokin @ 2017-02-25  4:04 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: James Simmons, devel, Andreas Dilger, Alex Zhuravlev,
	Linux Kernel Mailing List, Lustre Development List


On Feb 24, 2017, at 11:59 AM, Greg Kroah-Hartman wrote:

> On Sat, Feb 18, 2017 at 04:47:14PM -0500, James Simmons wrote:
>> From: Alex Zhuravlev <alexey.zhuravlev@intel.com>
>> 
>> on small filesystems plain log can grow dramatically. especially
>> given large record sizes produced by DNE and extended chunksize.
>> I saw >50% of space consumed by a single llog file which was still
>> in use. this leads to test failures (sanityn, etc).
>> the patch introduces additional limit on plain llog size, which
>> is calculated as <free space>/64 (128MB at most) at llog creation
>> time.
>> 
>> Signed-off-by: Alex Zhuravlev <alexey.zhuravlev@intel.com>
>> Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-6838
>> Reviewed-on: https://review.whamcloud.com/18028
>> Reviewed-by: Andreas Dilger <andreas.dilger@intel.com>
>> Reviewed-by: wangdi <di.wang@intel.com>
>> Reviewed-by: Mike Pershin <mike.pershin@intel.com>
>> Reviewed-by: Oleg Drokin <oleg.drokin@intel.com>
>> Signed-off-by: James Simmons <jsimmons@infradead.org>
>> ---
>> drivers/staging/lustre/lustre/obdclass/llog.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>> 
>> diff --git a/drivers/staging/lustre/lustre/obdclass/llog.c b/drivers/staging/lustre/lustre/obdclass/llog.c
>> index 83c5b62..320ff6b 100644
>> --- a/drivers/staging/lustre/lustre/obdclass/llog.c
>> +++ b/drivers/staging/lustre/lustre/obdclass/llog.c
>> @@ -319,10 +319,26 @@ static int llog_process_thread(void *arg)
>> 				 * the case and re-read the current chunk
>> 				 * otherwise.
>> 				 */
>> +				int records;
>> +
>> 				if (index > loghandle->lgh_last_idx) {
>> 					rc = 0;
>> 					goto out;
>> 				}
>> +				/* Fewer than 2 records means there are no
>> +				 * more records: if the record we just
>> +				 * processed was the final one, the
>> +				 * underlying object may already have been
>> +				 * destroyed, so we must not access it.
>> +				 */
>> +				mutex_lock(&loghandle->lgh_hdr_mutex);
>> +				records = loghandle->lgh_hdr->llh_count;
>> +				mutex_unlock(&loghandle->lgh_hdr_mutex);
>> +				if (records <= 1) {
>> +					rc = 0;
>> +					goto out;
>> +				}
> 
> 
> So you now use the lock in only one place, when reading a single value?
> That makes no sense; it's either obviously wrong or not needed.
> 
> Please fix up these two patches…

Ah, this is in fact a server-side fix, so all the other users of the lock
are in code that is not present in the client.
James, we don't really need this patch in the client, I guess.

^ permalink raw reply	[flat|nested] 19+ messages in thread
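To make Greg's objection concrete: the only thing the hunk does under
lgh_hdr_mutex is read one integer, and per Oleg's note the other users of
that lock live in server-side code not present in the client, so the
locking buys nothing there. A hypothetical client-side alternative — not
what was merged; the patch was simply dropped from the client — would be
a plain annotated read:

	/* Hypothetical sketch, assuming no client-side writer updates
	 * llh_count under lgh_hdr_mutex; the actual resolution was to
	 * drop this patch from the client entirely.
	 */
	records = READ_ONCE(loghandle->lgh_hdr->llh_count);
	if (records <= 1) {
		rc = 0;
		goto out;
	}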

end of thread, other threads:[~2017-02-25  4:04 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-02-18 21:47 [PATCH 00/14] staging: lustre: missing fixes from lustre 2.8 James Simmons
2017-02-18 21:47 ` [PATCH 01/14] staging: lustre: llite: lower message level for ll_setattr_raw() James Simmons
2017-02-18 21:47 ` [PATCH 02/14] staging: lustre: llite: omit to update wire data James Simmons
2017-02-18 21:47 ` [PATCH 03/14] staging: lustre: osc: remove obsolete asserts James Simmons
2017-02-18 21:47 ` [PATCH 04/14] staging: lustre: lov: cleanup when cl_io_iter_init() fails James Simmons
2017-02-18 21:47 ` [PATCH 05/14] staging: lustre: ldlm: handle ldlm lock cancel race when evicting client James Simmons
2017-02-18 21:47 ` [PATCH 06/14] staging: lustre: osc: further LRU OSC cleanup after eviction James Simmons
2017-02-18 21:47 ` [PATCH 07/14] staging: lustre: lov: trying smaller memory allocations James Simmons
2017-02-18 21:47 ` [PATCH 08/14] staging: lustre: llite: remove extraneous export parameter James Simmons
2017-02-18 21:47 ` [PATCH 09/14] staging: lustre: ldlm: reduce ldlm pool recalc window James Simmons
2017-02-18 21:47 ` [PATCH 10/14] staging: lustre: ldlm: disconnect speedup James Simmons
2017-02-18 21:47 ` [PATCH 11/14] staging: lustre: ldlm: fix race of starting bl threads James Simmons
2017-02-18 21:47 ` [PATCH 12/14] staging: lustre: llog: change lgh_hdr_lock to mutex James Simmons
2017-02-24 16:58   ` Greg Kroah-Hartman
2017-02-18 21:47 ` [PATCH 13/14] staging: lustre: llog: limit file size of plain logs James Simmons
2017-02-24 16:59   ` Greg Kroah-Hartman
2017-02-25  3:50     ` Oleg Drokin
2017-02-25  4:04     ` Oleg Drokin
2017-02-18 21:47 ` [PATCH 14/14] staging: lustre: lprocfs: move lprocfs_stats_[un]lock to a source file James Simmons

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).