* [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
@ 2015-03-06  7:53 Daniel Wagner
From: Daniel Wagner @ 2015-03-06  7:53 UTC
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Daniel Wagner, Alexander Viro,
	J. Bruce Fields

Hi,

Finally, I got a bigger machine and did a quick test round. I expected
to see some improvements but the results do not show any real gain. So
these are merely refactoring patches.

4x Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz

4.0.0-rc2/flock01.data
# NumSamples = 3; Min = 47160.80; Max = 47555.42
# Mean = 47294.254786; Variance = 34110.284932; SD = 184.689699; Median 47166.534982
# each ∎ represents a count of 1
47160.8049 - 47200.2668 [     2]: ∎∎
47200.2668 - 47239.7288 [     0]: 
47239.7288 - 47279.1908 [     0]: 
47279.1908 - 47318.6527 [     0]: 
47318.6527 - 47358.1147 [     0]: 
47358.1147 - 47397.5767 [     0]: 
47397.5767 - 47437.0386 [     0]: 
47437.0386 - 47476.5006 [     0]: 
47476.5006 - 47515.9625 [     0]: 
47515.9625 - 47555.4245 [     1]: ∎

patched/flock01.data
# NumSamples = 21; Min = 45877.22; Max = 50206.70
# Mean = 47042.844720; Variance = 752166.966346; SD = 867.275600; Median 46939.811380
# each ∎ represents a count of 1
45877.2235 - 46310.1709 [     2]: ∎∎
46310.1709 - 46743.1182 [     7]: ∎∎∎∎∎∎∎
46743.1182 - 47176.0655 [     3]: ∎∎∎
47176.0655 - 47609.0128 [     6]: ∎∎∎∎∎∎
47609.0128 - 48041.9602 [     2]: ∎∎
48041.9602 - 48474.9075 [     0]: 
48474.9075 - 48907.8548 [     0]: 
48907.8548 - 49340.8021 [     0]: 
49340.8021 - 49773.7495 [     0]: 
49773.7495 - 50206.6968 [     1]: ∎


4.0.0-rc2/flock02.data
# NumSamples = 1786; Min = 1.86; Max = 3.13
# Mean = 2.204980; Variance = 0.015900; SD = 0.126096; Median 2.177549
# each ∎ represents a count of 13
    1.8606 -     1.9880 [     5]: 
    1.9880 -     2.1154 [   315]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    2.1154 -     2.2427 [  1040]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    2.2427 -     2.3701 [   272]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    2.3701 -     2.4975 [    75]: ∎∎∎∎∎
    2.4975 -     2.6249 [    42]: ∎∎∎
    2.6249 -     2.7523 [    28]: ∎∎
    2.7523 -     2.8796 [     7]: 
    2.8796 -     3.0070 [     1]: 
    3.0070 -     3.1344 [     1]: 

patched/flock02.data
# NumSamples = 4586; Min = 2.14; Max = 4.31
# Mean = 2.619467; Variance = 0.043192; SD = 0.207828; Median 2.575378
# each ∎ represents a count of 27
    2.1385 -     2.3561 [   186]: ∎∎∎∎∎∎
    2.3561 -     2.5737 [  2079]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    2.5737 -     2.7914 [  1642]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    2.7914 -     3.0090 [   355]: ∎∎∎∎∎∎∎∎∎∎∎∎∎
    3.0090 -     3.2266 [   246]: ∎∎∎∎∎∎∎∎∎
    3.2266 -     3.4442 [    66]: ∎∎
    3.4442 -     3.6618 [     9]: 
    3.6618 -     3.8795 [     1]: 
    3.8795 -     4.0971 [     0]: 
    4.0971 -     4.3147 [     2]: 


4.0.0-rc2/lease01.data
# NumSamples = 12; Min = 1097.16; Max = 1255.06
# Mean = 1184.550432; Variance = 1590.438052; SD = 39.880297; Median 1190.704582
# each ∎ represents a count of 1
 1097.1556 -  1112.9460 [     1]: ∎
 1112.9460 -  1128.7363 [     0]: 
 1128.7363 -  1144.5267 [     1]: ∎
 1144.5267 -  1160.3170 [     0]: 
 1160.3170 -  1176.1074 [     2]: ∎∎
 1176.1074 -  1191.8977 [     2]: ∎∎
 1191.8977 -  1207.6881 [     2]: ∎∎
 1207.6881 -  1223.4784 [     3]: ∎∎∎
 1223.4784 -  1239.2688 [     0]: 
 1239.2688 -  1255.0591 [     1]: ∎

patched/lease01.data
# NumSamples = 14; Min = 1055.00; Max = 1213.97
# Mean = 1128.800723; Variance = 2225.466357; SD = 47.174849; Median 1114.384900
# each ∎ represents a count of 1
 1054.9959 -  1070.8932 [     2]: ∎∎
 1070.8932 -  1086.7906 [     1]: ∎
 1086.7906 -  1102.6879 [     1]: ∎
 1102.6879 -  1118.5853 [     4]: ∎∎∎∎
 1118.5853 -  1134.4826 [     0]: 
 1134.4826 -  1150.3800 [     1]: ∎
 1150.3800 -  1166.2773 [     2]: ∎∎
 1166.2773 -  1182.1747 [     0]: 
 1182.1747 -  1198.0720 [     2]: ∎∎
 1198.0720 -  1213.9694 [     1]: ∎


4.0.0-rc2/lease02.data
# NumSamples = 12; Min = 841.43; Max = 911.82
# Mean = 888.716745; Variance = 317.221486; SD = 17.810713; Median 894.897002
# each ∎ represents a count of 1
  841.4339 -   848.4727 [     1]: ∎
  848.4727 -   855.5115 [     0]: 
  855.5115 -   862.5503 [     0]: 
  862.5503 -   869.5891 [     0]: 
  869.5891 -   876.6278 [     2]: ∎∎
  876.6278 -   883.6666 [     1]: ∎
  883.6666 -   890.7054 [     1]: ∎
  890.7054 -   897.7442 [     3]: ∎∎∎
  897.7442 -   904.7830 [     2]: ∎∎
  904.7830 -   911.8218 [     2]: ∎∎

patched/lease02.data
# NumSamples = 26; Min = 845.36; Max = 917.22
# Mean = 886.178134; Variance = 320.861100; SD = 17.912596; Median 889.109363
# each ∎ represents a count of 1
  845.3620 -   852.5481 [     2]: ∎∎
  852.5481 -   859.7343 [     1]: ∎
  859.7343 -   866.9204 [     1]: ∎
  866.9204 -   874.1065 [     2]: ∎∎
  874.1065 -   881.2926 [     3]: ∎∎∎
  881.2926 -   888.4788 [     2]: ∎∎
  888.4788 -   895.6649 [     6]: ∎∎∎∎∎∎
  895.6649 -   902.8510 [     4]: ∎∎∎∎
  902.8510 -   910.0372 [     2]: ∎∎
  910.0372 -   917.2233 [     3]: ∎∎∎


4.0.0-rc2/posix01.data
# NumSamples = 5; Min = 46659.56; Max = 48332.45
# Mean = 47237.374603; Variance = 337801.649607; SD = 581.207063; Median 47011.051338
# each ∎ represents a count of 1
46659.5620 - 46826.8508 [     1]: ∎
46826.8508 - 46994.1396 [     1]: ∎
46994.1396 - 47161.4284 [     1]: ∎
47161.4284 - 47328.7172 [     1]: ∎
47328.7172 - 47496.0060 [     0]: 
47496.0060 - 47663.2948 [     0]: 
47663.2948 - 47830.5836 [     0]: 
47830.5836 - 47997.8724 [     0]: 
47997.8724 - 48165.1612 [     0]: 
48165.1612 - 48332.4500 [     1]: ∎

patched/posix01.data
# NumSamples = 5; Min = 47274.54; Max = 47925.60
# Mean = 47693.475091; Variance = 48592.645586; SD = 220.437396; Median 47762.516318
# each ∎ represents a count of 1
47274.5400 - 47339.6458 [     1]: ∎
47339.6458 - 47404.7516 [     0]: 
47404.7516 - 47469.8574 [     0]: 
47469.8574 - 47534.9632 [     0]: 
47534.9632 - 47600.0690 [     0]: 
47600.0690 - 47665.1748 [     0]: 
47665.1748 - 47730.2806 [     1]: ∎
47730.2806 - 47795.3863 [     2]: ∎∎
47795.3863 - 47860.4921 [     0]: 
47860.4921 - 47925.5979 [     1]: ∎


4.0.0-rc2/posix02.data
# NumSamples = 633; Min = 3.28; Max = 6.66
# Mean = 3.742386; Variance = 0.087680; SD = 0.296108; Median 3.683302
# each ∎ represents a count of 4
    3.2834 -     3.6214 [   218]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    3.6214 -     3.9595 [   338]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    3.9595 -     4.2975 [    39]: ∎∎∎∎∎∎∎∎∎
    4.2975 -     4.6356 [    23]: ∎∎∎∎∎
    4.6356 -     4.9736 [    13]: ∎∎∎
    4.9736 -     5.3117 [     1]: 
    5.3117 -     5.6497 [     0]: 
    5.6497 -     5.9878 [     0]: 
    5.9878 -     6.3258 [     0]: 
    6.3258 -     6.6638 [     1]: 

patched/posix02.data
# NumSamples = 1266; Min = 3.52; Max = 8.13
# Mean = 4.177377; Variance = 0.139937; SD = 0.374081; Median 4.091573
# each ∎ represents a count of 10
    3.5154 -     3.9770 [   343]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    3.9770 -     4.4386 [   760]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
    4.4386 -     4.9002 [    87]: ∎∎∎∎∎∎∎∎
    4.9002 -     5.3617 [    55]: ∎∎∎∎∎
    5.3617 -     5.8233 [    17]: ∎
    5.8233 -     6.2849 [     2]: 
    6.2849 -     6.7464 [     0]: 
    6.7464 -     7.2080 [     1]: 
    7.2080 -     7.6696 [     0]: 
    7.6696 -     8.1311 [     1]: 

4.0.0-rc2/posix03.data
# NumSamples = 6; Min = 37.77; Max = 41.77
# Mean = 39.590607; Variance = 1.780789; SD = 1.334462; Median 39.619681
# each ∎ represents a count of 1
   37.7725 -    38.1719 [     1]: ∎
   38.1719 -    38.5713 [     1]: ∎
   38.5713 -    38.9706 [     0]: 
   38.9706 -    39.3700 [     1]: ∎
   39.3700 -    39.7694 [     0]: 
   39.7694 -    40.1688 [     1]: ∎
   40.1688 -    40.5682 [     1]: ∎
   40.5682 -    40.9675 [     0]: 
   40.9675 -    41.3669 [     0]: 
   41.3669 -    41.7663 [     1]: ∎

patched/posix03.data
# NumSamples = 15; Min = 37.45; Max = 44.72
# Mean = 41.543496; Variance = 4.628029; SD = 2.151286; Median 41.293384
# each ∎ represents a count of 1
   37.4498 -    38.1767 [     1]: ∎
   38.1767 -    38.9036 [     0]: 
   38.9036 -    39.6304 [     1]: ∎
   39.6304 -    40.3573 [     4]: ∎∎∎∎
   40.3573 -    41.0842 [     1]: ∎
   41.0842 -    41.8110 [     1]: ∎
   41.8110 -    42.5379 [     1]: ∎
   42.5379 -    43.2648 [     3]: ∎∎∎
   43.2648 -    43.9916 [     0]: 
   43.9916 -    44.7185 [     3]: ∎∎∎

v3:
 - added missing !IS_OFDLCK check
 - removed unnecessary locking protecting fl_list with file_lock_lglock

v2:
 - added a few lockdep assertions
 - dropped the spinlock conversion

v1:
 - rebased on v3.19-8975-g3d88348
 - split into smaller pieces
 - fixed a wrong usage of __locks_insert/delete_block() and its POSIX version
 - added seqfile helpers to avoid an ugly open-coded version

https://lkml.org/lkml/2015/3/2/351

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Daniel Wagner (2):
  locks: Split insert/delete block functions into flock/posix parts
  locks: Use blocked_lock_lock only to protect blocked_hash

 fs/locks.c | 111 ++++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 74 insertions(+), 37 deletions(-)

-- 
2.1.0



* [PATCH v3 1/2] locks: Split insert/delete block functions into flock/posix parts
@ 2015-03-06  7:53 ` Daniel Wagner
From: Daniel Wagner @ 2015-03-06  7:53 UTC
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Daniel Wagner, J. Bruce Fields,
	Alexander Viro

The locks_insert/delete_block() functions are used for the flock, POSIX
and lease lock types. blocked_lock_lock is used to serialize all access
to fl_link, fl_block, fl_next and blocked_hash. Here, we set the stage
for using blocked_lock_lock only to protect blocked_hash.
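
In short: the POSIX variants introduced below additionally maintain
the global blocked_hash, while the plain variants touch only the
per-blocker fl_block list. A condensed view of the delete side,
lifted from the diff that follows:

	static void __locks_delete_posix_block(struct file_lock *waiter)
	{
		locks_delete_global_blocked(waiter);	/* blocked_hash */
		__locks_delete_block(waiter);		/* fl_block list only */
	}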

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
---
 fs/locks.c | 49 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 40 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index d4992a1..0c37d68 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -611,11 +611,20 @@ static void locks_delete_global_blocked(struct file_lock *waiter)
  */
 static void __locks_delete_block(struct file_lock *waiter)
 {
-	locks_delete_global_blocked(waiter);
 	list_del_init(&waiter->fl_block);
 	waiter->fl_next = NULL;
 }
 
+/* Posix block variant of __locks_delete_block.
+ *
+ * Must be called with blocked_lock_lock held.
+ */
+static void __locks_delete_posix_block(struct file_lock *waiter)
+{
+	locks_delete_global_blocked(waiter);
+	__locks_delete_block(waiter);
+}
+
 static void locks_delete_block(struct file_lock *waiter)
 {
 	spin_lock(&blocked_lock_lock);
@@ -623,6 +632,13 @@ static void locks_delete_block(struct file_lock *waiter)
 	spin_unlock(&blocked_lock_lock);
 }
 
+static void locks_delete_posix_block(struct file_lock *waiter)
+{
+	spin_lock(&blocked_lock_lock);
+	__locks_delete_posix_block(waiter);
+	spin_unlock(&blocked_lock_lock);
+}
+
 /* Insert waiter into blocker's block list.
  * We use a circular list so that processes can be easily woken up in
  * the order they blocked. The documentation doesn't require this but
@@ -639,8 +655,17 @@ static void __locks_insert_block(struct file_lock *blocker,
 	BUG_ON(!list_empty(&waiter->fl_block));
 	waiter->fl_next = blocker;
 	list_add_tail(&waiter->fl_block, &blocker->fl_block);
-	if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
-		locks_insert_global_blocked(waiter);
+}
+
+/* Posix block variant of __locks_insert_block.
+ *
+ * Must be called with flc_lock and blocked_lock_lock held.
+ */
+static void __locks_insert_posix_block(struct file_lock *blocker,
+					struct file_lock *waiter)
+{
+	__locks_insert_block(blocker, waiter);
+	locks_insert_global_blocked(waiter);
 }
 
 /* Must be called with flc_lock held. */
@@ -675,7 +700,10 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
 
 		waiter = list_first_entry(&blocker->fl_block,
 				struct file_lock, fl_block);
-		__locks_delete_block(waiter);
+		if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
+			__locks_delete_posix_block(waiter);
+		else
+			__locks_delete_block(waiter);
 		if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
 			waiter->fl_lmops->lm_notify(waiter);
 		else
@@ -985,7 +1013,7 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
 			spin_lock(&blocked_lock_lock);
 			if (likely(!posix_locks_deadlock(request, fl))) {
 				error = FILE_LOCK_DEFERRED;
-				__locks_insert_block(fl, request);
+				__locks_insert_posix_block(fl, request);
 			}
 			spin_unlock(&blocked_lock_lock);
 			goto out;
@@ -1186,7 +1214,7 @@ int posix_lock_file_wait(struct file *filp, struct file_lock *fl)
 		if (!error)
 			continue;
 
-		locks_delete_block(fl);
+		locks_delete_posix_block(fl);
 		break;
 	}
 	return error;
@@ -1283,7 +1311,7 @@ int locks_mandatory_area(int read_write, struct inode *inode,
 				continue;
 		}
 
-		locks_delete_block(&fl);
+		locks_delete_posix_block(&fl);
 		break;
 	}
 
@@ -2104,7 +2132,10 @@ static int do_lock_file_wait(struct file *filp, unsigned int cmd,
 		if (!error)
 			continue;
 
-		locks_delete_block(fl);
+		if (IS_POSIX(fl) && !IS_OFDLCK(fl))
+			locks_delete_posix_block(fl);
+		else
+			locks_delete_block(fl);
 		break;
 	}
 
@@ -2468,7 +2499,7 @@ posix_unblock_lock(struct file_lock *waiter)
 
 	spin_lock(&blocked_lock_lock);
 	if (waiter->fl_next)
-		__locks_delete_block(waiter);
+		__locks_delete_posix_block(waiter);
 	else
 		status = -ENOENT;
 	spin_unlock(&blocked_lock_lock);
-- 
2.1.0



* [PATCH v3 2/2] locks: Use blocked_lock_lock only to protect blocked_hash
@ 2015-03-06  7:53 ` Daniel Wagner
From: Daniel Wagner @ 2015-03-06  7:53 UTC
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Daniel Wagner, J. Bruce Fields,
	Alexander Viro

blocked_lock_lock and file_lock_lglock are used to protect file_lock's
fl_link, fl_block, fl_next, blocked_hash and the percpu
file_lock_list.

Let's use blocked_lock_lock only to protect blocked_hash since it is a
global lock.

Whenever we insert a new lock, we grab the corresponding
file_lock_lglock in addition to the flc_lock. The global
blocked_lock_lock is only used when blocked_hash is involved.

Since we already use fl_link_cpu to remember which percpu
file_lock_list references a blocker, we simply use it for all
waiters as well.

Note that fl_list is protected by the flc_lock. It's easy to get
confused...
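
For illustration, the nesting order for blocking a POSIX waiter then
looks roughly like this (condensed from the __posix_lock_file() hunk
below; the caller already holds the flc_lock):

	lg_local_lock_cpu(&file_lock_lglock, fl->fl_link_cpu);
	spin_lock(&blocked_lock_lock);		/* blocked_hash only */
	if (likely(!posix_locks_deadlock(request, fl)))
		__locks_insert_posix_block(fl, request);
	spin_unlock(&blocked_lock_lock);
	lg_local_unlock_cpu(&file_lock_lglock, fl->fl_link_cpu);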

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
---
 fs/locks.c | 72 ++++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 39 insertions(+), 33 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 0c37d68..661e58b 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -162,6 +162,20 @@ int lease_break_time = 45;
  * keep a list on each CPU, with each list protected by its own spinlock via
  * the file_lock_lglock. Note that alterations to the list also require that
  * the relevant flc_lock is held.
+ *
+ * In addition, it also protects the fl->fl_block list, and the fl->fl_next
+ * pointer for file_lock structures that are acting as lock requests (in
+ * contrast to those that are acting as records of acquired locks).
+ *
+ * file_lock structures acting as lock requests (waiters) use the same
+ * spinlock as those acting as lock holders (blockers). E.g. the
+ * blocker is initially added to the file_lock_list living on CPU 0,
+ * all waiters on that blocker are serialized via CPU 0 (see
+ * fl_link_cpu usage).
+ *
+ * In particular, adding an entry to the fl_block list requires that you hold
+ * both the flc_lock and the file_lock_lglock (acquired in that order).
+ * Deleting an entry from the list however only requires the file_lock_lglock.
  */
 DEFINE_STATIC_LGLOCK(file_lock_lglock);
 static DEFINE_PER_CPU(struct hlist_head, file_lock_list);
@@ -183,19 +197,6 @@ static DEFINE_HASHTABLE(blocked_hash, BLOCKED_HASH_BITS);
 /*
  * This lock protects the blocked_hash. Generally, if you're accessing it, you
  * want to be holding this lock.
- *
- * In addition, it also protects the fl->fl_block list, and the fl->fl_next
- * pointer for file_lock structures that are acting as lock requests (in
- * contrast to those that are acting as records of acquired locks).
- *
- * Note that when we acquire this lock in order to change the above fields,
- * we often hold the flc_lock as well. In certain cases, when reading the fields
- * protected by this lock, we can skip acquiring it iff we already hold the
- * flc_lock.
- *
- * In particular, adding an entry to the fl_block list requires that you hold
- * both the flc_lock and the blocked_lock_lock (acquired in that order).
- * Deleting an entry from the list however only requires the file_lock_lock.
  */
 static DEFINE_SPINLOCK(blocked_lock_lock);
 
@@ -607,7 +608,7 @@ static void locks_delete_global_blocked(struct file_lock *waiter)
 /* Remove waiter from blocker's block list.
  * When blocker ends up pointing to itself then the list is empty.
  *
- * Must be called with blocked_lock_lock held.
+ * Must be called with file_lock_lglock held.
  */
 static void __locks_delete_block(struct file_lock *waiter)
 {
@@ -617,7 +618,7 @@ static void __locks_delete_block(struct file_lock *waiter)
 
 /* Posix block variant of __locks_delete_block.
  *
- * Must be called with blocked_lock_lock held.
+ * Must be called with file_lock_lglock held.
  */
 static void __locks_delete_posix_block(struct file_lock *waiter)
 {
@@ -627,16 +628,18 @@ static void __locks_delete_posix_block(struct file_lock *waiter)
 
 static void locks_delete_block(struct file_lock *waiter)
 {
-	spin_lock(&blocked_lock_lock);
+	lg_local_lock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 	__locks_delete_block(waiter);
-	spin_unlock(&blocked_lock_lock);
+	lg_local_unlock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 }
 
 static void locks_delete_posix_block(struct file_lock *waiter)
 {
+	lg_local_lock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 	spin_lock(&blocked_lock_lock);
 	__locks_delete_posix_block(waiter);
 	spin_unlock(&blocked_lock_lock);
+	lg_local_unlock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 }
 
 /* Insert waiter into blocker's block list.
@@ -644,22 +647,23 @@ static void locks_delete_posix_block(struct file_lock *waiter)
  * the order they blocked. The documentation doesn't require this but
  * it seems like the reasonable thing to do.
  *
- * Must be called with both the flc_lock and blocked_lock_lock held. The
- * fl_block list itself is protected by the blocked_lock_lock, but by ensuring
+ * Must be called with both the flc_lock and file_lock_lglock held. The
+ * fl_block list itself is protected by the file_lock_lglock, but by ensuring
  * that the flc_lock is also held on insertions we can avoid taking the
- * blocked_lock_lock in some cases when we see that the fl_block list is empty.
+ * file_lock_lglock in some cases when we see that the fl_block list is empty.
  */
 static void __locks_insert_block(struct file_lock *blocker,
 					struct file_lock *waiter)
 {
 	BUG_ON(!list_empty(&waiter->fl_block));
+	waiter->fl_link_cpu = blocker->fl_link_cpu;
 	waiter->fl_next = blocker;
 	list_add_tail(&waiter->fl_block, &blocker->fl_block);
 }
 
 /* Posix block variant of __locks_insert_block.
  *
- * Must be called with flc_lock and blocked_lock_lock held.
+ * Must be called with flc_lock and file_lock_lglock held.
  */
 static void __locks_insert_posix_block(struct file_lock *blocker,
 					struct file_lock *waiter)
@@ -672,9 +676,9 @@ static void __locks_insert_posix_block(struct file_lock *blocker,
 static void locks_insert_block(struct file_lock *blocker,
 					struct file_lock *waiter)
 {
-	spin_lock(&blocked_lock_lock);
+	lg_local_lock_cpu(&file_lock_lglock, blocker->fl_link_cpu);
 	__locks_insert_block(blocker, waiter);
-	spin_unlock(&blocked_lock_lock);
+	lg_local_unlock_cpu(&file_lock_lglock, blocker->fl_link_cpu);
 }
 
 /*
@@ -685,31 +689,33 @@ static void locks_insert_block(struct file_lock *blocker,
 static void locks_wake_up_blocks(struct file_lock *blocker)
 {
 	/*
-	 * Avoid taking global lock if list is empty. This is safe since new
+	 * Avoid taking lock if list is empty. This is safe since new
 	 * blocked requests are only added to the list under the flc_lock, and
 	 * the flc_lock is always held here. Note that removal from the fl_block
 	 * list does not require the flc_lock, so we must recheck list_empty()
-	 * after acquiring the blocked_lock_lock.
+	 * after acquiring the file_lock_lglock.
 	 */
 	if (list_empty(&blocker->fl_block))
 		return;
 
-	spin_lock(&blocked_lock_lock);
+	lg_local_lock_cpu(&file_lock_lglock, blocker->fl_link_cpu);
 	while (!list_empty(&blocker->fl_block)) {
 		struct file_lock *waiter;
 
 		waiter = list_first_entry(&blocker->fl_block,
 				struct file_lock, fl_block);
-		if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
+		if (IS_POSIX(blocker) && !IS_OFDLCK(blocker)) {
+			spin_lock(&blocked_lock_lock);
 			__locks_delete_posix_block(waiter);
-		else
+			spin_unlock(&blocked_lock_lock);
+		} else
 			__locks_delete_block(waiter);
 		if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
 			waiter->fl_lmops->lm_notify(waiter);
 		else
 			wake_up(&waiter->fl_wait);
 	}
-	spin_unlock(&blocked_lock_lock);
+	lg_local_unlock_cpu(&file_lock_lglock, blocker->fl_link_cpu);
 }
 
 static void
@@ -1010,12 +1016,14 @@ static int __posix_lock_file(struct inode *inode, struct file_lock *request, str
 			 * locks list must be done while holding the same lock!
 			 */
 			error = -EDEADLK;
+			lg_local_lock_cpu(&file_lock_lglock, fl->fl_link_cpu);
 			spin_lock(&blocked_lock_lock);
 			if (likely(!posix_locks_deadlock(request, fl))) {
 				error = FILE_LOCK_DEFERRED;
 				__locks_insert_posix_block(fl, request);
 			}
 			spin_unlock(&blocked_lock_lock);
+			lg_local_unlock_cpu(&file_lock_lglock, fl->fl_link_cpu);
 			goto out;
   		}
   	}
@@ -2497,12 +2505,14 @@ posix_unblock_lock(struct file_lock *waiter)
 {
 	int status = 0;
 
+	lg_local_lock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 	spin_lock(&blocked_lock_lock);
 	if (waiter->fl_next)
 		__locks_delete_posix_block(waiter);
 	else
 		status = -ENOENT;
 	spin_unlock(&blocked_lock_lock);
+	lg_local_unlock_cpu(&file_lock_lglock, waiter->fl_link_cpu);
 	return status;
 }
 EXPORT_SYMBOL(posix_unblock_lock);
@@ -2629,13 +2639,11 @@ static int locks_show(struct seq_file *f, void *v)
 }
 
 static void *locks_start(struct seq_file *f, loff_t *pos)
-	__acquires(&blocked_lock_lock)
 {
 	struct locks_iterator *iter = f->private;
 
 	iter->li_pos = *pos + 1;
 	lg_global_lock(&file_lock_lglock);
-	spin_lock(&blocked_lock_lock);
 	return seq_hlist_start_percpu(&file_lock_list, &iter->li_cpu, *pos);
 }
 
@@ -2648,9 +2656,7 @@ static void *locks_next(struct seq_file *f, void *v, loff_t *pos)
 }
 
 static void locks_stop(struct seq_file *f, void *v)
-	__releases(&blocked_lock_lock)
 {
-	spin_unlock(&blocked_lock_lock);
 	lg_global_unlock(&file_lock_lglock);
 }
 
-- 
2.1.0



* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
@ 2015-03-07 14:00   ` Jeff Layton
From: Jeff Layton @ 2015-03-07 14:00 UTC
  To: Daniel Wagner
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

On Fri,  6 Mar 2015 08:53:30 +0100
Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:

> Hi,
> 
> Finally, I got a bigger machine and did a quick test round. I expected
> to see some improvements but the results do not show any real gain. So
> these are merely refactoring patches.
> 

Ok, in that case is there any point in merging these? I'm all for
breaking up global locks when it makes sense, but if you can't
demonstrate a clear benefit then I'm less inclined to take the churn.

Perhaps we should wait to see if a benefit emerges when/if you convert
the lglock code to use normal spinlocks (like Andi suggested)? That
seems like a rather simple conversion, and I don't think it's
"cheating" in any sense of the word.

I do however wonder why Nick used arch_spinlock_t there when he wrote
the lglock code instead of normal spinlocks. Was it simply memory usage
considerations or something else?
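
(For reference, lg_local_lock() currently open-codes the preemption
and lockdep bookkeeping around a raw arch_spinlock_t; a minimal
sketch, matching the lglock code quoted later in this thread:

	void lg_local_lock(struct lglock *lg)
	{
		arch_spinlock_t *lock;

		preempt_disable();	/* arch_spin_lock() does no preempt accounting */
		lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
		lock = this_cpu_ptr(lg->lock);
		arch_spin_lock(lock);
	}

whereas spinlock_t's spin_lock() handles preemption itself.)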

> 4x Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz
> 
> 4.0.0-rc2/flock01.data
> # NumSamples = 3; Min = 47160.80; Max = 47555.42
> # Mean = 47294.254786; Variance = 34110.284932; SD = 184.689699; Median 47166.534982
> # each ∎ represents a count of 1
> 47160.8049 - 47200.2668 [     2]: ∎∎
> 47200.2668 - 47239.7288 [     0]: 
> 47239.7288 - 47279.1908 [     0]: 
> 47279.1908 - 47318.6527 [     0]: 
> 47318.6527 - 47358.1147 [     0]: 
> 47358.1147 - 47397.5767 [     0]: 
> 47397.5767 - 47437.0386 [     0]: 
> 47437.0386 - 47476.5006 [     0]: 
> 47476.5006 - 47515.9625 [     0]: 
> 47515.9625 - 47555.4245 [     1]: ∎
> 
> patched/flock01.data
> # NumSamples = 21; Min = 45877.22; Max = 50206.70
> # Mean = 47042.844720; Variance = 752166.966346; SD = 867.275600; Median 46939.811380
> # each ∎ represents a count of 1
> 45877.2235 - 46310.1709 [     2]: ∎∎
> 46310.1709 - 46743.1182 [     7]: ∎∎∎∎∎∎∎
> 46743.1182 - 47176.0655 [     3]: ∎∎∎
> 47176.0655 - 47609.0128 [     6]: ∎∎∎∎∎∎
> 47609.0128 - 48041.9602 [     2]: ∎∎
> 48041.9602 - 48474.9075 [     0]: 
> 48474.9075 - 48907.8548 [     0]: 
> 48907.8548 - 49340.8021 [     0]: 
> 49340.8021 - 49773.7495 [     0]: 
> 49773.7495 - 50206.6968 [     1]: ∎
> 
> 
> 4.0.0-rc2/flock02.data
> # NumSamples = 1786; Min = 1.86; Max = 3.13
> # Mean = 2.204980; Variance = 0.015900; SD = 0.126096; Median 2.177549
> # each ∎ represents a count of 13
>     1.8606 -     1.9880 [     5]: 
>     1.9880 -     2.1154 [   315]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     2.1154 -     2.2427 [  1040]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     2.2427 -     2.3701 [   272]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     2.3701 -     2.4975 [    75]: ∎∎∎∎∎
>     2.4975 -     2.6249 [    42]: ∎∎∎
>     2.6249 -     2.7523 [    28]: ∎∎
>     2.7523 -     2.8796 [     7]: 
>     2.8796 -     3.0070 [     1]: 
>     3.0070 -     3.1344 [     1]: 
> 
> patched/flock02.data
> # NumSamples = 4586; Min = 2.14; Max = 4.31
> # Mean = 2.619467; Variance = 0.043192; SD = 0.207828; Median 2.575378
> # each ∎ represents a count of 27
>     2.1385 -     2.3561 [   186]: ∎∎∎∎∎∎
>     2.3561 -     2.5737 [  2079]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     2.5737 -     2.7914 [  1642]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     2.7914 -     3.0090 [   355]: ∎∎∎∎∎∎∎∎∎∎∎∎∎
>     3.0090 -     3.2266 [   246]: ∎∎∎∎∎∎∎∎∎
>     3.2266 -     3.4442 [    66]: ∎∎
>     3.4442 -     3.6618 [     9]: 
>     3.6618 -     3.8795 [     1]: 
>     3.8795 -     4.0971 [     0]: 
>     4.0971 -     4.3147 [     2]: 
> 
> 
> 4.0.0-rc2/lease01.data
> # NumSamples = 12; Min = 1097.16; Max = 1255.06
> # Mean = 1184.550432; Variance = 1590.438052; SD = 39.880297; Median 1190.704582
> # each ∎ represents a count of 1
>  1097.1556 -  1112.9460 [     1]: ∎
>  1112.9460 -  1128.7363 [     0]: 
>  1128.7363 -  1144.5267 [     1]: ∎
>  1144.5267 -  1160.3170 [     0]: 
>  1160.3170 -  1176.1074 [     2]: ∎∎
>  1176.1074 -  1191.8977 [     2]: ∎∎
>  1191.8977 -  1207.6881 [     2]: ∎∎
>  1207.6881 -  1223.4784 [     3]: ∎∎∎
>  1223.4784 -  1239.2688 [     0]: 
>  1239.2688 -  1255.0591 [     1]: ∎
> 
> patched/lease01.data
> # NumSamples = 14; Min = 1055.00; Max = 1213.97
> # Mean = 1128.800723; Variance = 2225.466357; SD = 47.174849; Median 1114.384900
> # each ∎ represents a count of 1
>  1054.9959 -  1070.8932 [     2]: ∎∎
>  1070.8932 -  1086.7906 [     1]: ∎
>  1086.7906 -  1102.6879 [     1]: ∎
>  1102.6879 -  1118.5853 [     4]: ∎∎∎∎
>  1118.5853 -  1134.4826 [     0]: 
>  1134.4826 -  1150.3800 [     1]: ∎
>  1150.3800 -  1166.2773 [     2]: ∎∎
>  1166.2773 -  1182.1747 [     0]: 
>  1182.1747 -  1198.0720 [     2]: ∎∎
>  1198.0720 -  1213.9694 [     1]: ∎
> 
> 
> 4.0.0-rc2/lease02.data
> # NumSamples = 12; Min = 841.43; Max = 911.82
> # Mean = 888.716745; Variance = 317.221486; SD = 17.810713; Median 894.897002
> # each ∎ represents a count of 1
>   841.4339 -   848.4727 [     1]: ∎
>   848.4727 -   855.5115 [     0]: 
>   855.5115 -   862.5503 [     0]: 
>   862.5503 -   869.5891 [     0]: 
>   869.5891 -   876.6278 [     2]: ∎∎
>   876.6278 -   883.6666 [     1]: ∎
>   883.6666 -   890.7054 [     1]: ∎
>   890.7054 -   897.7442 [     3]: ∎∎∎
>   897.7442 -   904.7830 [     2]: ∎∎
>   904.7830 -   911.8218 [     2]: ∎∎
> 
> patched/lease02.data
> # NumSamples = 26; Min = 845.36; Max = 917.22
> # Mean = 886.178134; Variance = 320.861100; SD = 17.912596; Median 889.109363
> # each ∎ represents a count of 1
>   845.3620 -   852.5481 [     2]: ∎∎
>   852.5481 -   859.7343 [     1]: ∎
>   859.7343 -   866.9204 [     1]: ∎
>   866.9204 -   874.1065 [     2]: ∎∎
>   874.1065 -   881.2926 [     3]: ∎∎∎
>   881.2926 -   888.4788 [     2]: ∎∎
>   888.4788 -   895.6649 [     6]: ∎∎∎∎∎∎
>   895.6649 -   902.8510 [     4]: ∎∎∎∎
>   902.8510 -   910.0372 [     2]: ∎∎
>   910.0372 -   917.2233 [     3]: ∎∎∎
> 
> 
> 4.0.0-rc2/posix01.data
> # NumSamples = 5; Min = 46659.56; Max = 48332.45
> # Mean = 47237.374603; Variance = 337801.649607; SD = 581.207063; Median 47011.051338
> # each ∎ represents a count of 1
> 46659.5620 - 46826.8508 [     1]: ∎
> 46826.8508 - 46994.1396 [     1]: ∎
> 46994.1396 - 47161.4284 [     1]: ∎
> 47161.4284 - 47328.7172 [     1]: ∎
> 47328.7172 - 47496.0060 [     0]: 
> 47496.0060 - 47663.2948 [     0]: 
> 47663.2948 - 47830.5836 [     0]: 
> 47830.5836 - 47997.8724 [     0]: 
> 47997.8724 - 48165.1612 [     0]: 
> 48165.1612 - 48332.4500 [     1]: ∎
> 
> patched/posix01.data
> # NumSamples = 5; Min = 47274.54; Max = 47925.60
> # Mean = 47693.475091; Variance = 48592.645586; SD = 220.437396; Median 47762.516318
> # each ∎ represents a count of 1
> 47274.5400 - 47339.6458 [     1]: ∎
> 47339.6458 - 47404.7516 [     0]: 
> 47404.7516 - 47469.8574 [     0]: 
> 47469.8574 - 47534.9632 [     0]: 
> 47534.9632 - 47600.0690 [     0]: 
> 47600.0690 - 47665.1748 [     0]: 
> 47665.1748 - 47730.2806 [     1]: ∎
> 47730.2806 - 47795.3863 [     2]: ∎∎
> 47795.3863 - 47860.4921 [     0]: 
> 47860.4921 - 47925.5979 [     1]: ∎
> 
> 
> 4.0.0-rc2/posix02.data
> # NumSamples = 633; Min = 3.28; Max = 6.66
> # Mean = 3.742386; Variance = 0.087680; SD = 0.296108; Median 3.683302
> # each ∎ represents a count of 4
>     3.2834 -     3.6214 [   218]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     3.6214 -     3.9595 [   338]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     3.9595 -     4.2975 [    39]: ∎∎∎∎∎∎∎∎∎
>     4.2975 -     4.6356 [    23]: ∎∎∎∎∎
>     4.6356 -     4.9736 [    13]: ∎∎∎
>     4.9736 -     5.3117 [     1]: 
>     5.3117 -     5.6497 [     0]: 
>     5.6497 -     5.9878 [     0]: 
>     5.9878 -     6.3258 [     0]: 
>     6.3258 -     6.6638 [     1]: 
> 
> patched/posix02.data
> # NumSamples = 1266; Min = 3.52; Max = 8.13
> # Mean = 4.177377; Variance = 0.139937; SD = 0.374081; Median 4.091573
> # each ∎ represents a count of 10
>     3.5154 -     3.9770 [   343]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     3.9770 -     4.4386 [   760]: ∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
>     4.4386 -     4.9002 [    87]: ∎∎∎∎∎∎∎∎
>     4.9002 -     5.3617 [    55]: ∎∎∎∎∎
>     5.3617 -     5.8233 [    17]: ∎
>     5.8233 -     6.2849 [     2]: 
>     6.2849 -     6.7464 [     0]: 
>     6.7464 -     7.2080 [     1]: 
>     7.2080 -     7.6696 [     0]: 
>     7.6696 -     8.1311 [     1]: 
> 
> 4.0.0-rc2/posix03.data
> # NumSamples = 6; Min = 37.77; Max = 41.77
> # Mean = 39.590607; Variance = 1.780789; SD = 1.334462; Median 39.619681
> # each ∎ represents a count of 1
>    37.7725 -    38.1719 [     1]: ∎
>    38.1719 -    38.5713 [     1]: ∎
>    38.5713 -    38.9706 [     0]: 
>    38.9706 -    39.3700 [     1]: ∎
>    39.3700 -    39.7694 [     0]: 
>    39.7694 -    40.1688 [     1]: ∎
>    40.1688 -    40.5682 [     1]: ∎
>    40.5682 -    40.9675 [     0]: 
>    40.9675 -    41.3669 [     0]: 
>    41.3669 -    41.7663 [     1]: ∎
> 
> patched/posix03.data
> # NumSamples = 15; Min = 37.45; Max = 44.72
> # Mean = 41.543496; Variance = 4.628029; SD = 2.151286; Median 41.293384
> # each ∎ represents a count of 1
>    37.4498 -    38.1767 [     1]: ∎
>    38.1767 -    38.9036 [     0]: 
>    38.9036 -    39.6304 [     1]: ∎
>    39.6304 -    40.3573 [     4]: ∎∎∎∎
>    40.3573 -    41.0842 [     1]: ∎
>    41.0842 -    41.8110 [     1]: ∎
>    41.8110 -    42.5379 [     1]: ∎
>    42.5379 -    43.2648 [     3]: ∎∎∎
>    43.2648 -    43.9916 [     0]: 
>    43.9916 -    44.7185 [     3]: ∎∎∎
> 
> v3:
>  - added missing !IS_OFDLCK check
>  - removed unnecessary locking protecting fl_list with file_lock_lglock
> 
> v2:
>  - added a few lockdep assertions
>  - dropped the spinlock conversion
> 
> v1:
>  - rebased on v3.19-8975-g3d88348
>  - split into smaller pieces
>  - fixed a wrong usage of __locks_insert/delete_block() and its POSIX version
>  - added seqfile helpers to avoid an ugly open-coded version
> 
> https://lkml.org/lkml/2015/3/2/351
> 
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Jeff Layton <jlayton@poochiereds.net>
> Cc: "J. Bruce Fields" <bfields@fieldses.org>
> Cc: linux-fsdevel@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> 
> Daniel Wagner (2):
>   locks: Split insert/delete block functions into flock/posix parts
>   locks: Use blocked_lock_lock only to protect blocked_hash
> 
>  fs/locks.c | 111 ++++++++++++++++++++++++++++++++++++++++---------------------
>  1 file changed, 74 insertions(+), 37 deletions(-)
> 


-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
@ 2015-03-07 14:09   ` Jeff Layton
From: Jeff Layton @ 2015-03-07 14:09 UTC
  To: Daniel Wagner
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

On Sat, 7 Mar 2015 09:00:41 -0500
Jeff Layton <jlayton@poochiereds.net> wrote:

> On Fri,  6 Mar 2015 08:53:30 +0100
> Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
> 
> > Hi,
> > 
> > Finally, I got a bigger machine and did a quick test round. I expected
> > to see some improvements but the results do not show any real gain. So
> > these are merely refactoring patches.
> > 
> 
> Ok, in that case is there any point in merging these? I'm all for
> breaking up global locks when it makes sense, but if you can't
> demonstrate a clear benefit then I'm less inclined to take the churn.
> 
> Perhaps we should wait to see if a benefit emerges when/if you convert
> the lglock code to use normal spinlocks (like Andi suggested)? That
> seems like a rather simple conversion, and I don't think it's
> "cheating" in any sense of the word.
> 
> I do however wonder why Nick used arch_spinlock_t there when he wrote
> the lglock code instead of normal spinlocks. Was it simply memory usage
> considerations or something else?
> 

Hmm...to answer my own question. The (old) LWN article here seems to
suggest that he did it that way to avoid preemption:

    http://lwn.net/Articles/401738/

I don't think we need to avoid being preempted in the file-locking
code, but I'm not sure about stop_machine.c. Is that necessary there?
The comment in queue_stop_cpus_work seems to indicate that it may be.

-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
@ 2015-03-10  8:20   ` Daniel Wagner
From: Daniel Wagner @ 2015-03-10  8:20 UTC
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

Hi,

On 03/07/2015 03:00 PM, Jeff Layton wrote:
> On Fri,  6 Mar 2015 08:53:30 +0100
> Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
> 
>> Hi,
>>
>> Finally, I got a bigger machine and did a quick test round. I expected
>> to see some improvements but the results do not show any real gain. So
>> these are merely refactoring patches.
>>
> 
> Ok, in that case is there any point in merging these? I'm all for
> breaking up global locks when it makes sense, but if you can't
> demonstrate a clear benefit then I'm less inclined to take the churn.
> 
> Perhaps we should wait to see if a benefit emerges when/if you convert
> the lglock code to use normal spinlocks (like Andi suggested)? That
> seems like a rather simple conversion, and I don't think it's
> "cheating" in any sense of the word.
> 
> I do however wonder why Nick used arch_spinlock_t there when he wrote
> the lglock code instead of normal spinlocks. Was it simply memory usage
> considerations or something else?

I did a complete test run with 4.0.0-rc3, the two patches from this
thread (fs-locks-v10), the spinlock_t conversion (lglock-v0)
and fs-locks-v10 and lglock-v0 combined. Some of the tests take rather
long on my machine (12 minutes per run), so I tweaked it a bit to get
more samples. Each test was run 100 times.

The raw data and scripts are here: http://www.monom.org/lglock/data/

flock01
                             mean   variance      sigma        max        min
                4.0.0-rc3  8930.8708 282098.1663   531.1291  9612.7085  4681.8674
             fs-locks-v10  9081.6789 43493.0287   208.5498  9747.8491  8072.6508
                lglock-v0  9004.9252 12339.3832   111.0828  9489.5420  8493.0763
   fs-locks-v10+lglock-v0  9053.6680 17714.5623   133.0961  9588.7413  8555.0727


flock02
                             mean   variance      sigma        max        min
                4.0.0-rc3   553.1720  1057.6026    32.5208   606.2989   479.5528
             fs-locks-v10   596.0683  1486.8345    38.5595   662.6566   512.4865
                lglock-v0   595.2150   976.8544    31.2547   646.7506   527.2517
   fs-locks-v10+lglock-v0   587.5492   989.9467    31.4634   646.2098   509.6020


lease01
                             mean   variance      sigma        max        min
                4.0.0-rc3   505.2654   780.7545    27.9420   564.2530   433.7727
             fs-locks-v10   523.6855   705.2606    26.5567   570.3401   442.6539
                lglock-v0   516.7525  1026.7596    32.0431   573.8766   433.4124
   fs-locks-v10+lglock-v0   513.6507   751.1674    27.4074   567.0972   435.6154


lease02
                             mean   variance      sigma        max        min
                4.0.0-rc3  3588.2069 26736.0422   163.5116  3769.7430  3154.8405
             fs-locks-v10  3566.0658 34225.6410   185.0017  3747.6039  3188.5478
                lglock-v0  3585.0648 28720.1679   169.4703  3758.7240  3150.9310
   fs-locks-v10+lglock-v0  3621.9347 17706.2354   133.0648  3744.0095  3174.0998


posix01
                             mean   variance      sigma        max        min
                4.0.0-rc3  9297.5030 26911.6602   164.0477  9941.8094  8963.3528
             fs-locks-v10  9462.8665 44762.9316   211.5725 10504.3205  9202.5748
                lglock-v0  9320.4716 47168.9903   217.1842 10401.6565  9054.1950
   fs-locks-v10+lglock-v0  9458.1463 58231.8844   241.3128 10564.2086  9193.1177


posix02
                             mean   variance      sigma        max        min
                4.0.0-rc3   920.6533  2648.1293    51.4600   983.4213   790.1792
             fs-locks-v10   915.3972  4384.6821    66.2169  1004.2339   795.4129
                lglock-v0   888.1910  4644.0478    68.1473   983.8412   777.4851
   fs-locks-v10+lglock-v0   926.4184  1834.6481    42.8328   975.8544   794.4582


posix03
                             mean   variance      sigma        max        min
                4.0.0-rc3     7.5202     0.0456     0.2136     7.9697     6.9803
             fs-locks-v10     7.5203     0.0640     0.2529     7.9619     7.0063
                lglock-v0     7.4738     0.0349     0.1867     7.8011     7.0973
   fs-locks-v10+lglock-v0     7.5856     0.0595     0.2439     8.1098     6.8695


posix04
                             mean   variance      sigma        max        min
                4.0.0-rc3     0.6699     0.1091     0.3303     3.3845     0.5247
             fs-locks-v10     0.6320     0.0026     0.0511     0.9064     0.5195
                lglock-v0     0.6460     0.0039     0.0623     1.0830     0.5438
   fs-locks-v10+lglock-v0     0.6589     0.0338     0.1838     2.0346     0.5393


Hmm, not really convincing numbers. I hoped to see scaling effects but nope, no fun.

cheers,
daniel



The spinlock_t conversion patch (lglock-v0) I used:

diff --git a/include/linux/lglock.h b/include/linux/lglock.h
index 0081f00..ea97f74 100644
--- a/include/linux/lglock.h
+++ b/include/linux/lglock.h
@@ -34,7 +34,7 @@
 #endif
 
 struct lglock {
-	arch_spinlock_t __percpu *lock;
+	spinlock_t __percpu *lock;
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 	struct lock_class_key lock_key;
 	struct lockdep_map    lock_dep_map;
@@ -42,13 +42,13 @@ struct lglock {
 };
 
 #define DEFINE_LGLOCK(name)						\
-	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
-	= __ARCH_SPIN_LOCK_UNLOCKED;					\
+	static DEFINE_PER_CPU(spinlock_t, name ## _lock)		\
+	= __SPIN_LOCK_UNLOCKED(name ## _lock);				\
 	struct lglock name = { .lock = &name ## _lock }
 
 #define DEFINE_STATIC_LGLOCK(name)					\
-	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
-	= __ARCH_SPIN_LOCK_UNLOCKED;					\
+	static DEFINE_PER_CPU(spinlock_t, name ## _lock)		\
+	= __SPIN_LOCK_UNLOCKED(name ## _lock);				\
 	static struct lglock name = { .lock = &name ## _lock }
 
 void lg_lock_init(struct lglock *lg, char *name);
diff --git a/kernel/locking/lglock.c b/kernel/locking/lglock.c
index 86ae2ae..34077a7 100644
--- a/kernel/locking/lglock.c
+++ b/kernel/locking/lglock.c
@@ -18,44 +18,44 @@ EXPORT_SYMBOL(lg_lock_init);
 
 void lg_local_lock(struct lglock *lg)
 {
-	arch_spinlock_t *lock;
+	spinlock_t *lock;
 
 	preempt_disable();
 	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
 	lock = this_cpu_ptr(lg->lock);
-	arch_spin_lock(lock);
+	spin_lock(lock);
 }
 EXPORT_SYMBOL(lg_local_lock);
 
 void lg_local_unlock(struct lglock *lg)
 {
-	arch_spinlock_t *lock;
+	spinlock_t *lock;
 
 	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
 	lock = this_cpu_ptr(lg->lock);
-	arch_spin_unlock(lock);
+	spin_unlock(lock);
 	preempt_enable();
 }
 EXPORT_SYMBOL(lg_local_unlock);
 
 void lg_local_lock_cpu(struct lglock *lg, int cpu)
 {
-	arch_spinlock_t *lock;
+	spinlock_t *lock;
 
 	preempt_disable();
 	lock_acquire_shared(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
 	lock = per_cpu_ptr(lg->lock, cpu);
-	arch_spin_lock(lock);
+	spin_lock(lock);
 }
 EXPORT_SYMBOL(lg_local_lock_cpu);
 
 void lg_local_unlock_cpu(struct lglock *lg, int cpu)
 {
-	arch_spinlock_t *lock;
+	spinlock_t *lock;
 
 	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
 	lock = per_cpu_ptr(lg->lock, cpu);
-	arch_spin_unlock(lock);
+	spin_unlock(lock);
 	preempt_enable();
 }
 EXPORT_SYMBOL(lg_local_unlock_cpu);
@@ -67,9 +67,9 @@ void lg_global_lock(struct lglock *lg)
 	preempt_disable();
 	lock_acquire_exclusive(&lg->lock_dep_map, 0, 0, NULL, _RET_IP_);
 	for_each_possible_cpu(i) {
-		arch_spinlock_t *lock;
+		spinlock_t *lock;
 		lock = per_cpu_ptr(lg->lock, i);
-		arch_spin_lock(lock);
+		spin_lock(lock);
 	}
 }
 EXPORT_SYMBOL(lg_global_lock);
@@ -80,9 +80,9 @@ void lg_global_unlock(struct lglock *lg)
 
 	lock_release(&lg->lock_dep_map, 1, _RET_IP_);
 	for_each_possible_cpu(i) {
-		arch_spinlock_t *lock;
+		spinlock_t *lock;
 		lock = per_cpu_ptr(lg->lock, i);
-		arch_spin_unlock(lock);
+		spin_unlock(lock);
 	}
 	preempt_enable();
 }




* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
  2015-03-10  8:20   ` Daniel Wagner
@ 2015-03-14 12:40     ` Jeff Layton
  2015-03-26 10:11       ` Daniel Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2015-03-14 12:40 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

On Tue, 10 Mar 2015 09:20:24 +0100
Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:

> Hi,
> 
> On 03/07/2015 03:00 PM, Jeff Layton wrote:
> > On Fri,  6 Mar 2015 08:53:30 +0100
> > Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
> > 
> >> Hi,
> >>
> >> Finally, I got a bigger machine and did a quick test round. I expected
> >> to see some improvements but the results do not show any real gain. So
> >> they are merely refactoring patches.
> >>
> > 
> > Ok, in that case is there any point in merging these? I'm all for
> > breaking up global locks when it makes sense, but if you can't
> > demonstrate a clear benefit then I'm less inclined to take the churn.
> > 
> > Perhaps we should wait to see if a benefit emerges when/if you convert
> > the lglock code to use normal spinlocks (like Andi suggested)? That
> > seems like a rather simple conversion, and I don't think it's
> > "cheating" in any sense of the word.
> > 
> > I do however wonder why Nick used arch_spinlock_t there when he wrote
> > the lglock code instead of normal spinlocks. Was it simply memory usage
> > considerations or something else?
> 
> I did a complete test run with 4.0.0-rc3, the two patches from this
> thread (fs-locks-v10), the spinlock_t conversion (lglock-v0)
> and fs-locks-v10 and lglock-v0 combined. Some of the tests take rather
> long on my machine (12 minutes per run) so I tweaked it a bit to get
> more samples. Each test was run 100 times.
> 
> The raw data and scripts are here: http://www.monom.org/lglock/data/
> 
> flock01
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3  8930.8708 282098.1663   531.1291  9612.7085  4681.8674
>              fs-locks-v10  9081.6789 43493.0287   208.5498  9747.8491  8072.6508
>                 lglock-v0  9004.9252 12339.3832   111.0828  9489.5420  8493.0763
>    fs-locks-v10+lglock-v0  9053.6680 17714.5623   133.0961  9588.7413  8555.0727
> 
> 
> flock02
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3   553.1720  1057.6026    32.5208   606.2989   479.5528
>              fs-locks-v10   596.0683  1486.8345    38.5595   662.6566   512.4865
>                 lglock-v0   595.2150   976.8544    31.2547   646.7506   527.2517
>    fs-locks-v10+lglock-v0   587.5492   989.9467    31.4634   646.2098   509.6020
> 
> 
> lease01
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3   505.2654   780.7545    27.9420   564.2530   433.7727
>              fs-locks-v10   523.6855   705.2606    26.5567   570.3401   442.6539
>                 lglock-v0   516.7525  1026.7596    32.0431   573.8766   433.4124
>    fs-locks-v10+lglock-v0   513.6507   751.1674    27.4074   567.0972   435.6154
> 
> 
> lease02
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3  3588.2069 26736.0422   163.5116  3769.7430  3154.8405
>              fs-locks-v10  3566.0658 34225.6410   185.0017  3747.6039  3188.5478
>                 lglock-v0  3585.0648 28720.1679   169.4703  3758.7240  3150.9310
>    fs-locks-v10+lglock-v0  3621.9347 17706.2354   133.0648  3744.0095  3174.0998
> 
> 
> posix01
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3  9297.5030 26911.6602   164.0477  9941.8094  8963.3528
>              fs-locks-v10  9462.8665 44762.9316   211.5725 10504.3205  9202.5748
>                 lglock-v0  9320.4716 47168.9903   217.1842 10401.6565  9054.1950
>    fs-locks-v10+lglock-v0  9458.1463 58231.8844   241.3128 10564.2086  9193.1177
> 
> 
> posix02
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3   920.6533  2648.1293    51.4600   983.4213   790.1792
>              fs-locks-v10   915.3972  4384.6821    66.2169  1004.2339   795.4129
>                 lglock-v0   888.1910  4644.0478    68.1473   983.8412   777.4851
>    fs-locks-v10+lglock-v0   926.4184  1834.6481    42.8328   975.8544   794.4582
> 
> 
> posix03
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3     7.5202     0.0456     0.2136     7.9697     6.9803
>              fs-locks-v10     7.5203     0.0640     0.2529     7.9619     7.0063
>                 lglock-v0     7.4738     0.0349     0.1867     7.8011     7.0973
>    fs-locks-v10+lglock-v0     7.5856     0.0595     0.2439     8.1098     6.8695
> 
> 
> posix04
>                              mean   variance      sigma        max        min
>                 4.0.0-rc3     0.6699     0.1091     0.3303     3.3845     0.5247
>              fs-locks-v10     0.6320     0.0026     0.0511     0.9064     0.5195
>                 lglock-v0     0.6460     0.0039     0.0623     1.0830     0.5438
>    fs-locks-v10+lglock-v0     0.6589     0.0338     0.1838     2.0346     0.5393
> 
> 
> Hmm, not really convincing numbers. I hoped to see scaling effects but nope, no fun.
> 

Yeah...

That said, the lglock-v0 numbers do look a little better. Perhaps it
would make sense to go ahead and consider that change? It's not clear
to me why the lglock code uses arch_spinlock_t. Was it just the extra
memory usage or was there some other reason?

You had mentioned at one point that lglocks didn't play well with the
-rt kernels. What's the actual problem there?



-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
  2015-03-14 12:40     ` Jeff Layton
@ 2015-03-26 10:11       ` Daniel Wagner
  2015-03-26 13:17         ` Jeff Layton
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Wagner @ 2015-03-26 10:11 UTC (permalink / raw)
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

Hi Jeff,

Sorry for the long delay. I was a week on holiday and the testing
took a bit longer than I expected.

On 03/14/2015 01:40 PM, Jeff Layton wrote:
> On Tue, 10 Mar 2015 09:20:24 +0100
> Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
>> On 03/07/2015 03:00 PM, Jeff Layton wrote:
>>> On Fri,  6 Mar 2015 08:53:30 +0100
>>> Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
>>>
>>>> Hi,
>>>>
>>>> Finally, I got a bigger machine and did a quick test round. I expected
>>>> to see some improvements but the results do not show any real gain. So
>>>> they are merely refactoring patches.
>>>>
>>>
>>> Ok, in that case is there any point in merging these? I'm all for
>>> breaking up global locks when it makes sense, but if you can't
>>> demonstrate a clear benefit then I'm less inclined to take the churn.
>>>
>>> Perhaps we should wait to see if a benefit emerges when/if you convert
>>> the lglock code to use normal spinlocks (like Andi suggested)? That
>>> seems like a rather simple conversion, and I don't think it's
>>> "cheating" in any sense of the word.
>>>
>>> I do however wonder why Nick used arch_spinlock_t there when he wrote
>>> the lglock code instead of normal spinlocks. Was it simply memory usage
>>> considerations or something else?
>>
>> I did a complete test run with 4.0.0-rc3, the two patches from this
>> thread (fs-locks-v10), the spinlock_t conversion (lglock-v0)
>> and fs-locks-v10 and lglock-v0 combined. Some of the tests take rather
>> long on my machine (12 minutes per run) so I tweaked it a bit to get
>> more samples. Each test was run 100 times.
>>
>> The raw data and scripts are here: http://www.monom.org/lglock/data/
>>
>> [...]
>>
>> Hmm, not really convincing numbers. I hoped to see scaling effects but nope, no fun.
>>
> 
> Yeah...
> 
> That said, the lglock-v0 numbers do look a little better. Perhaps it
> would make sense to go ahead and consider that change? It's not clear
> to me why the lglock code uses arch_spinlock_t. Was it just the extra
> memory usage or was there some other reason?

If my reading is correct, the main difference between spinlock_t 
and arch_spinlock_t is the avoidance of the trylock path:

 	spin_lock(&lock)
	  raw_spin_lock(&lock)
	    _raw_spin_lock(&lock)
	      __raw_spin_lock(&lock)

	static inline void __raw_spin_lock(raw_spinlock_t *lock)
	{
		preempt_disable();
		spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
		LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
	}

	static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
	{
		return arch_spin_trylock(&(lock)->raw_lock);
	}
	
	static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
	{
		__acquire(lock);
		arch_spin_lock(&lock->raw_lock);
	}
	

So by calling arch_spin_lock() directly, lglock shortcuts this
path slightly.
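
For completeness, LOCK_CONTENDED itself expands to this (condensed
from include/linux/lockdep.h):

	#ifdef CONFIG_LOCK_STAT
	#define LOCK_CONTENDED(_lock, try, lock)			\
	do {								\
		if (!try(_lock)) {					\
			lock_contended(&(_lock)->dep_map, _RET_IP_);	\
			lock(_lock);					\
		}							\
		lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
	} while (0)
	#else
	#define LOCK_CONTENDED(_lock, try, lock) \
		lock(_lock)
	#endif

In other words, the trylock detour is only taken when CONFIG_LOCK_STAT
is enabled; without it, spin_lock() is the preempt/lockdep bookkeeping
above plus a plain do_raw_spin_lock().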

Memory consumption is even lower when using spinlock_t (lglock.o
shrinks a bit; the per-CPU layout is unchanged):

4.0.0-rc5

   text    data     bss     dec     hex filename
  19941    2409    1088   23438    5b8e fs/locks.o
    822       0       0     822     336 kernel/locking/lglock.o

[    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:4
[    0.000000] PERCPU: Embedded 32 pages/cpu @ffff881fbfc00000 s92888 r8192 d29992 u131072
[    0.000000] pcpu-alloc: s92888 r8192 d29992 u131072 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 00 04 08 12 16 20 24 28 32 36 40 44 48 52 56 60 
[    0.000000] pcpu-alloc: [1] 01 05 09 13 17 21 25 29 33 37 41 45 49 53 57 61 
[    0.000000] pcpu-alloc: [2] 02 06 10 14 18 22 26 30 34 38 42 46 50 54 58 62 
[    0.000000] pcpu-alloc: [3] 03 07 11 15 19 23 27 31 35 39 43 47 51 55 59 63 
[    0.000000] Built 4 zonelists in Node order, mobility grouping on.  Total pages: 132109066


4.0.0-rc5-lglock-v0

   text    data     bss     dec     hex filename
  19941    2409    1088   23438    5b8e fs/locks.o
    620       0       0     620     26c kernel/locking/lglock.o

[  +0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:4
[  +0.000000] PERCPU: Embedded 32 pages/cpu @ffff881fbfc00000 s92888 r8192 d29992 u131072
[  +0.000000] pcpu-alloc: s92888 r8192 d29992 u131072 alloc=1*2097152
[  +0.000000] pcpu-alloc: [0] 00 04 08 12 16 20 24 28 32 36 40 44 48 52 56 60 
[  +0.000000] pcpu-alloc: [1] 01 05 09 13 17 21 25 29 33 37 41 45 49 53 57 61 
[  +0.000000] pcpu-alloc: [2] 02 06 10 14 18 22 26 30 34 38 42 46 50 54 58 62 
[  +0.000000] pcpu-alloc: [3] 03 07 11 15 19 23 27 31 35 39 43 47 51 55 59 63 
[  +0.000000] Built 4 zonelists in Node order, mobility grouping on.  Total pages: 132109066


Legend: s: static size, r: reserved size, d: dynamic size, u: unit size


I did another round of measurements with different parameters and saw 
some unexpected things:

 - flock01: The number of child processes is close to the number
   of cores (e.g. 128 processes on 64 cores) and the number of test
   iterations is relatively low (32 iterations). In this configuration
   the results are not stable:

	while true; do 
		rm -rf /tmp/a; flock01 -n 128 -l 32 /tmp/a; 
	done

	38.392769508
	1.054781151
	113.731122000
	66.362571593
	97.581588309
	0.015311589
	117.311633231
	0.015412247
	0.014909320
	0.015469361
	0.015481439
	38.779573512
	101.239880635
	0.822888216
	...

   I see this on 4.0.0-rc5 with or without the lglock-v0 patch.
   It looks as if, when we are lucky, the children of flock01 get
   along without interfering, and that results in the low numbers.

   If the system is not idle (kernel build in background
   'make -j200') then the numbers get more consistent:

	0.034442009
	0.035964874
	0.026305154
	0.030738657
	0.024400840
	0.028685513
	0.025869458
	0.027475024
	0.023971313
	0.026113323
	0.033676295
	....

 - I also played with lockdep detection. With lglock-v0 applied
   some tests like flock02 and posix02 get considerably worse
   results. The difference between flock01 and flock02 is that
   the children of flock01 fight over one file lock, whereas
   the children of flock02 each lock and unlock their own lock.
   My best guess is that the lockdep tracing adds far more
   overhead in the per-child lock configuration. I didn't find
   any other explanation than this, although I have to admit
   I can't find a good argument why this makes a difference
   between arch_spinlock_t and spinlock_t.
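
To illustrate what the two tests do differently, here is a minimal
sketch (not the actual flock01/flock02 sources; SHARED=1 mimics the
flock01 pattern, SHARED=0 the flock02 pattern):

	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <sys/file.h>
	#include <sys/wait.h>
	#include <unistd.h>

	#define SHARED 1

	int main(void)
	{
		for (int c = 0; c < 128; c++) {
			if (fork() == 0) {
				char path[64];

				/* all children share one file, or each
				 * child gets its own */
				snprintf(path, sizeof(path), "/tmp/lock-%d",
					 SHARED ? 0 : c);
				int fd = open(path, O_RDWR | O_CREAT, 0644);

				for (int i = 0; i < 10000; i++) {
					flock(fd, LOCK_EX);
					flock(fd, LOCK_UN);
				}
				exit(0);
			}
		}
		while (wait(NULL) > 0)
			;
		return 0;
	}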


With lockdep enabled:

flock01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   339.6094  2174.4070    46.6305   446.6720   219.1196
           4.0.0-rc5-lglock-v0   499.1753  8123.7092    90.1316   666.3474   315.5089


flock02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   201.2253    64.5758     8.0359   219.6059   179.1278
           4.0.0-rc5-lglock-v0   785.1701  1049.7690    32.4001   844.4298   715.6951


lease01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     8.6606     4.2222     2.0548    13.3409     4.2273
           4.0.0-rc5-lglock-v0    12.1898     3.5871     1.8940    16.5744     9.2004


lease02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5    42.0945     1.2928     1.1370    44.8503    37.0932
           4.0.0-rc5-lglock-v0    38.5887     0.4077     0.6385    39.8308    36.3703


posix01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   407.8706  3005.7657    54.8249   581.1921   293.4723
           4.0.0-rc5-lglock-v0   613.6238  5604.3537    74.8622   781.7903   487.7466


posix02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   340.7774   186.4059    13.6531   365.8146   315.1692
           4.0.0-rc5-lglock-v0  1319.7676   726.9997    26.9629  1377.5981  1242.2350


posix03
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     0.9615     0.0040     0.0629     1.1086     0.8280
           4.0.0-rc5-lglock-v0     1.2682     0.0009     0.0299     1.3415     1.1902


posix04
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     0.0527     0.0003     0.0172     0.1156     0.0237
           4.0.0-rc5-lglock-v0     0.0365     0.0001     0.0101     0.0887     0.0249




Without lockdep:

flock01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   448.0287 15417.8359   124.1686   527.8083     0.0081
           4.0.0-rc5-lglock-v0   395.1646 28713.4347   169.4504   520.7507     0.0075


flock02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     6.9185     0.8830     0.9397    10.6138     4.9707
           4.0.0-rc5-lglock-v0     6.2474     0.9234     0.9610     9.5478     4.3703


lease01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     7.7040     0.3930     0.6269     9.1874     5.4179
           4.0.0-rc5-lglock-v0     7.6862     0.7794     0.8828     9.0623     1.3639


lease02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5    16.3074     0.1418     0.3766    17.1600    15.0240
           4.0.0-rc5-lglock-v0    16.2698     0.2772     0.5265    17.2221    13.4127


posix01
                                     mean   variance      sigma        max        min
                     4.0.0-rc5   531.5151   181.3078    13.4651   596.5883   501.2940
           4.0.0-rc5-lglock-v0   531.3600   209.0023    14.4569   600.7317   507.1767


posix02
                                     mean   variance      sigma        max        min
                     4.0.0-rc5    13.8395     2.9768     1.7253    22.0783     9.9136
           4.0.0-rc5-lglock-v0    12.6822     3.1645     1.7789    18.1258     9.0030


posix03
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     0.8939     0.0006     0.0242     0.9392     0.8360
           4.0.0-rc5-lglock-v0     0.9050     0.0006     0.0254     0.9617     0.8454


posix04
                                     mean   variance      sigma        max        min
                     4.0.0-rc5     0.0122     0.0000     0.0023     0.0227     0.0083
           4.0.0-rc5-lglock-v0     0.0115     0.0000     0.0016     0.0165     0.0091

> You had mentioned at one point that lglocks didn't play well with the
> -rt kernels. What's the actual problem there?

-rt kernels like to preempt everything possible. One means to achieve
this is to exchange the normal spinlock_t with an rt_mutex. arch_spinlock_t
does not get this treatment automatically via the lock framework.
For this, the following patch is carried around:

https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit/?h=v3.14-rt-rebase&id=da1cbed0dcf6ab22a4b50b0c5606271067749aef

 struct lglock {
+#ifndef CONFIG_PREEMPT_RT_FULL
        arch_spinlock_t __percpu *lock;
+#else
+       struct rt_mutex __percpu *lock;
+#endif
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
        struct lock_class_key lock_key;
        struct lockdep_map    lock_dep_map;
 #endif
 };

[...]


I have a modified version of the above patch on top of lglock-v0
which drops all the ifdeffery around arch_spinlock_t and rt_mutex. The
results are identical.

If there aren't any objections, I'll send the lglock-v0 patch with a
proper commit message.

cheers,
daniel


* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
  2015-03-26 10:11       ` Daniel Wagner
@ 2015-03-26 13:17         ` Jeff Layton
  2015-03-26 13:55           ` Daniel Wagner
  0 siblings, 1 reply; 11+ messages in thread
From: Jeff Layton @ 2015-03-26 13:17 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

On Thu, 26 Mar 2015 11:11:19 +0100
Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:

> Hi Jeff,
> 
> Sorry for the long delay. I was a week on holiday and the testing
> took a bit longer than I expected.
> 
> On 03/14/2015 01:40 PM, Jeff Layton wrote:
> > On Tue, 10 Mar 2015 09:20:24 +0100
> > Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
> >> On 03/07/2015 03:00 PM, Jeff Layton wrote:
> >>> On Fri,  6 Mar 2015 08:53:30 +0100
> >>> Daniel Wagner <daniel.wagner@bmw-carit.de> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> Finally, I got a bigger machine and did a quick test round. I expected
> >>>> to see some improvements but the results do not show any real gain. So
> >>>> they are merely refactoring patches.
> >>>>
> >>>
> >>> Ok, in that case is there any point in merging these? I'm all for
> >>> breaking up global locks when it makes sense, but if you can't
> >>> demonstrate a clear benefit then I'm less inclined to take the churn.
> >>>
> >>> Perhaps we should wait to see if a benefit emerges when/if you convert
> >>> the lglock code to use normal spinlocks (like Andi suggested)? That
> >>> seems like a rather simple conversion, and I don't think it's
> >>> "cheating" in any sense of the word.
> >>>
> >>> I do however wonder why Nick used arch_spinlock_t there when he wrote
> >>> the lglock code instead of normal spinlocks. Was it simply memory usage
> >>> considerations or something else?
> >>
> >> I did a complete test run with 4.0.0-rc3, the two patches from this
> >> thread (fs-locks-v10), the spinlock_t conversion (lglock-v0)
> >> and fs-locks-v10 and lglock-v0 combined. Some of the tests take rather
> >> long on my machine (12 minutes per run) so I tweaked it a bit to get
> >> more samples. Each test was run 100 times.
> >>
> >> The raw data and scripts are here: http://www.monom.org/lglock/data/
> >>
> >> [...]
> >>
> >> Hmm, not really convincing numbers. I hoped to see scaling effects but nope, no fun.
> >>
> > 
> > Yeah...
> > 
> > That said, the lglock-v0 numbers do look a little better. Perhaps it
> > would make sense to go ahead and consider that change? It's not clear
> > to me why the lglock code uses arch_spinlock_t. Was it just the extra
> > memory usage or was there some other reason?
> 
> If my reading is correct, the main difference between spinlock_t 
> and arch_spinlock_t is the avoidance of the trylock path:
> 
>  	spin_lock(&lock)
> 	  raw_spin_lock(&lock)
> 	    _raw_spin_lock(&lock)
> 	      __raw_spin_lock(&lock)
> 
> 	static inline void __raw_spin_lock(raw_spinlock_t *lock)
> 	{
> 		preempt_disable();
> 		spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
> 		LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
> 	}
> 
> 	static inline int do_raw_spin_trylock(raw_spinlock_t *lock)
> 	{
> 		return arch_spin_trylock(&(lock)->raw_lock);
> 	}
> 	
> 	static inline void do_raw_spin_lock(raw_spinlock_t *lock) __acquires(lock)
> 	{
> 		__acquire(lock);
> 		arch_spin_lock(&lock->raw_lock);
> 	}
> 	
> 
> So by calling arch_spin_lock() directly, lglock shortcuts this
> path slightly.
> 
> Memory consumption is even lower when using spinlock_t (lglock.o
> shrinks a bit; the per-CPU layout is unchanged):
> 
> 4.0.0-rc5
> 
>    text    data     bss     dec     hex filename
>   19941    2409    1088   23438    5b8e fs/locks.o
>     822       0       0     822     336 kernel/locking/lglock.o
> 
> [    0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:4
> [    0.000000] PERCPU: Embedded 32 pages/cpu @ffff881fbfc00000 s92888 r8192 d29992 u131072
> [    0.000000] pcpu-alloc: s92888 r8192 d29992 u131072 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 00 04 08 12 16 20 24 28 32 36 40 44 48 52 56 60 
> [    0.000000] pcpu-alloc: [1] 01 05 09 13 17 21 25 29 33 37 41 45 49 53 57 61 
> [    0.000000] pcpu-alloc: [2] 02 06 10 14 18 22 26 30 34 38 42 46 50 54 58 62 
> [    0.000000] pcpu-alloc: [3] 03 07 11 15 19 23 27 31 35 39 43 47 51 55 59 63 
> [    0.000000] Built 4 zonelists in Node order, mobility grouping on.  Total pages: 132109066
> 
> 
> 4.0.0-rc5-lglock-v0
> 
>    text    data     bss     dec     hex filename
>   19941    2409    1088   23438    5b8e fs/locks.o
>     620       0       0     620     26c kernel/locking/lglock.o
> 
> [  +0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:4
> [  +0.000000] PERCPU: Embedded 32 pages/cpu @ffff881fbfc00000 s92888 r8192 d29992 u131072
> [  +0.000000] pcpu-alloc: s92888 r8192 d29992 u131072 alloc=1*2097152
> [  +0.000000] pcpu-alloc: [0] 00 04 08 12 16 20 24 28 32 36 40 44 48 52 56 60 
> [  +0.000000] pcpu-alloc: [1] 01 05 09 13 17 21 25 29 33 37 41 45 49 53 57 61 
> [  +0.000000] pcpu-alloc: [2] 02 06 10 14 18 22 26 30 34 38 42 46 50 54 58 62 
> [  +0.000000] pcpu-alloc: [3] 03 07 11 15 19 23 27 31 35 39 43 47 51 55 59 63 
> [  +0.000000] Built 4 zonelists in Node order, mobility grouping on.  Total pages: 132109066
> 
> 
> Legend: s: static size, r: reserved size, d: dynamic size, u: unit size
> 

Memory consumption here really shouldn't be much of a factor. These are
global objects after all, so we really aren't looking at much memory in
the big scheme of things.

> 
> I did another round of measurements with different parameters and saw 
> some unexpected things:
> 
>  - flock01: The number of child processes is close to the number
>    of cores (e.g. 128 processes on 64 cores) and the number of test
>    iterations is relatively low (32 iterations). In this configuration
>    the results are not stable:
> 
> 	while true; do 
> 		rm -rf /tmp/a; flock01 -n 128 -l 32 /tmp/a; 
> 	done
> 
> 	38.392769508
> 	1.054781151
> 	113.731122000
> 	66.362571593
> 	97.581588309
> 	0.015311589
> 	117.311633231
> 	0.015412247
> 	0.014909320
> 	0.015469361
> 	0.015481439
> 	38.779573512
> 	101.239880635
> 	0.822888216
> 	...
> 
>    I see this on 4.0.0-rc5 with or without the lglock-v0 patch.
>    It looks as if, when we are lucky, the children of flock01 get
>    along without interfering, and that results in the low numbers.
> 

Yep. That test is more or less at the mercy of the scheduler.

>    If the system is not idle (kernel build in background
>    'make -j200') then the numbers get more consistent:
> 
> 	0.034442009
> 	0.035964874
> 	0.026305154
> 	0.030738657
> 	0.024400840
> 	0.028685513
> 	0.025869458
> 	0.027475024
> 	0.023971313
> 	0.026113323
> 	0.033676295
> 	....
> 

Interesting.

>  - I also played with lockdep detection. With lglock-v0 applied
>    some tests like flock02 and posix02 get considerably worse
>    results. The difference between flock01 and flock02 is that
>    the children of flock01 fight over one file lock, whereas
>    the children of flock02 each lock and unlock their own lock.
>    My best guess is that the lockdep tracing adds far more
>    overhead in the per-child lock configuration. I didn't find
>    any other explanation than this, although I have to admit
>    I can't find a good argument why this makes a difference
>    between arch_spinlock_t and spinlock_t.
> 
> 
> [...]
> 

lockdep has overhead, and when you move from arch_spinlock_t to
"normal" spinlock_t's you end up with per-spinlock lockdep structures.
I wouldn't worry much about performance with lockdep enabled.
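
For reference, this is roughly what every spinlock_t drags around
once lockdep is configured in (condensed from
include/linux/spinlock_types.h); a bare arch_spinlock_t is just the
arch lock word with none of it:

	typedef struct raw_spinlock {
		arch_spinlock_t raw_lock;	/* the actual lock word */
	#ifdef CONFIG_DEBUG_SPINLOCK
		unsigned int magic, owner_cpu;
		void *owner;
	#endif
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		struct lockdep_map dep_map;	/* per-instance lockdep state */
	#endif
	} raw_spinlock_t;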

> > You had mentioned at one point that lglocks didn't play well with the
> > -rt kernels. What's the actual problem there?
> 
> -rt kernels like to preempt everything possible. One means to achieve
> this is to exchange the normal spinlock_t with an rt_mutex. arch_spinlock_t
> does not get this treatment automatically via the lock framework.
> For this, the following patch is carried around:
> 
> https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit/?h=v3.14-rt-rebase&id=da1cbed0dcf6ab22a4b50b0c5606271067749aef
> 
>  struct lglock {
> +#ifndef CONFIG_PREEMPT_RT_FULL
>         arch_spinlock_t __percpu *lock;
> +#else
> +       struct rt_mutex __percpu *lock;
> +#endif
>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>         struct lock_class_key lock_key;
>         struct lockdep_map    lock_dep_map;
>  #endif
>  };
> 
> [...]
> 

Ok. Is that approach problematic in some way? I'm trying to understand
the exact problem that you're trying to solve for -rt with this effort.

> 
> I have a modified version of the above patch on top of lglock-v0
> which drops all the ifdeffery around arch_spinlock_t and rt_mutex. The
> results are identical.
> 
> If there aren't any objections, I'll send the lglock-v0 patch with a
> proper commit message.
> 

I'll be happy to take a look.

-- 
Jeff Layton <jlayton@poochiereds.net>


* Re: [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash
  2015-03-26 13:17         ` Jeff Layton
@ 2015-03-26 13:55           ` Daniel Wagner
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Wagner @ 2015-03-26 13:55 UTC (permalink / raw)
  To: Jeff Layton
  Cc: linux-fsdevel, linux-kernel, Alexander Viro, J. Bruce Fields, Andi Kleen

>>  - I also played with lockdep detection. With lglock-v0 applied
>>    some tests like flock02 and posix02 get considerably worse
>>    results. The difference between flock01 and flock02 is that
>>    the children of flock01 fight over one file lock, whereas
>>    the children of flock02 each lock and unlock their own lock.
>>    My best guess is that the lockdep tracing adds far more
>>    overhead in the per-child lock configuration. I didn't find
>>    any other explanation than this, although I have to admit
>>    I can't find a good argument why this makes a difference
>>    between arch_spinlock_t and spinlock_t.
>>

[...]

> lockdep has overhead, and when you move from arch_spinlock_t to
> "normal" spinlock_t's you end up with per-spinlock lockdep structures.
> I wouldn't worry much about performance with lockdep enabled.

That was the missing piece. Okay, that explains the performance degradation.

>>> You had mentioned at one point that lglocks didn't play well with the
>>> -rt kernels. What's the actual problem there?
>>
>> -rt kernels like to preempt everything possible. One means to achieve
>> this is to exchange the normal spinlock_t with an rt_mutex. arch_spinlock_t
>> does not get this treatment automatically via the lock framework.
>> For this, the following patch is carried around:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit/?h=v3.14-rt-rebase&id=da1cbed0dcf6ab22a4b50b0c5606271067749aef
>>
>>  struct lglock {
>> +#ifndef CONFIG_PREEMPT_RT_FULL
>>         arch_spinlock_t __percpu *lock;
>> +#else
>> +       struct rt_mutex __percpu *lock;
>> +#endif
>>  #ifdef CONFIG_DEBUG_LOCK_ALLOC
>>         struct lock_class_key lock_key;
>>         struct lockdep_map    lock_dep_map;
>>  #endif
>>  };
>>
>> [...]
>>
> 
> Ok. Is that approach problematic in some way?

I expect that mainline won't accept such a patch :).

> I'm trying to understand the exact problem that you're
> trying to solve for -rt with this effort.

My aim is to get rid of the -rt patches and mainline the features. This here
is just my small contribution to the whole -rt effort.

cheers,
daniel


end of thread, other threads:[~2015-03-26 13:55 UTC | newest]

Thread overview: 11+ messages
2015-03-06  7:53 [PATCH v3 0/2] Use blocked_lock_lock only to protect blocked_hash Daniel Wagner
2015-03-06  7:53 ` [PATCH v3 1/2] locks: Split insert/delete block functions into flock/posix parts Daniel Wagner
2015-03-06  7:53 ` [PATCH v3 2/2] locks: Use blocked_lock_lock only to protect blocked_hash Daniel Wagner
2015-03-07 14:00 ` [PATCH v3 0/2] " Jeff Layton
2015-03-07 14:00   ` Jeff Layton
2015-03-07 14:09   ` Jeff Layton
2015-03-10  8:20   ` Daniel Wagner
2015-03-14 12:40     ` Jeff Layton
2015-03-26 10:11       ` Daniel Wagner
2015-03-26 13:17         ` Jeff Layton
2015-03-26 13:55           ` Daniel Wagner
