* [PATCH 0/4] Minor cleanups in locking helpers
@ 2019-09-24 17:33 David Sterba
  2019-09-24 17:33 ` [PATCH 1/4] btrfs: make locking assertion helpers static inline David Sterba
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: David Sterba @ 2019-09-24 17:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Reorganizing code wrt locking helpers, minor text and stack savings.

Debugging build
~~~~~~~~~~~~~~~

   text    data     bss     dec     hex filename
1513898  146192   27496 1687586  19c022 pre/btrfs.ko
1514206  146768   27496 1688470  19c396 post/btrfs.ko
DELTA: +308
^^^^^^^^^^^ the increase is caused by inlining btrfs_assert_tree_locked
            itself; IMO acceptable for a debugging build
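
To illustrate (not part of the series; a userspace sketch with a
stand-in type and assert() instead of BUG_ON/WARN_ON): once such an
assertion is static inline, its body is duplicated at every call site,
which is where the extra text in the debugging build comes from, while
the call overhead and some caller stack usage go away.

#include <assert.h>

/* Stand-in for struct extent_buffer; only the field the assertion reads. */
struct eb_stub {
	int write_locks;
};

/*
 * static inline: each caller gets its own copy of the check, so a
 * debugging build grows a bit, but no call is emitted and the callers
 * need less stack.
 */
static inline void assert_tree_locked(struct eb_stub *eb)
{
	assert(eb->write_locks);
}

int main(void)
{
	struct eb_stub eb = { .write_locks = 1 };

	assert_tree_locked(&eb);
	return 0;
}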

Stack report:

btrfs_clean_tree_block                                             -8 (24 -> 16)
btrfs_drop_subtree                                                 +8 (104 -> 112)
btrfs_mark_buffer_dirty                                            -8 (32 -> 24)
insert_ptr                                                         -8 (80 -> 72)

LOST (64):
        btrfs_assert_tree_read_locked                               8
        btrfs_assert_spinning_writers_get                          16
        btrfs_assert_spinning_readers_put                          16
        btrfs_assert_spinning_writers_put                          16
        btrfs_assert_tree_locked                                    8

NEW (0):
LOST/NEW DELTA:      -64
PRE/POST DELTA:      -80


Release build
~~~~~~~~~~~~~

   text    data     bss     dec     hex filename
1079643   17304   14912 1111859  10f733 pre/btrfs.ko
1079468   17316   14912 1111696  10f690 post/btrfs.ko
DELTA: -175
^^^^^^^^^^^ that's what counts

Stack report:

btrfs_drop_subtree                                                 -8 (80 -> 72)
btrfs_clean_tree_block                                             -8 (24 -> 16)
btrfs_mark_buffer_dirty                                            -8 (32 -> 24)
insert_ptr                                                         -8 (80 -> 72)

LOST (8):
        btrfs_assert_tree_locked                                    8

NEW (0):
LOST/NEW DELTA:       -8
PRE/POST DELTA:      -4

David Sterba (4):
  btrfs: make locking assertion helpers static inline
  btrfs: make btrfs_assert_tree_locked static inline
  btrfs: move btrfs_set_path_blocking to other locking functions
  btrfs: move btrfs_unlock_up_safe to other locking functions

 fs/btrfs/ctree.c         | 51 --------------------------
 fs/btrfs/ctree.h         |  1 -
 fs/btrfs/delayed-inode.c |  1 +
 fs/btrfs/locking.c       | 78 +++++++++++++++++++++++++++++++---------
 fs/btrfs/locking.h       | 13 ++++++-
 5 files changed, 75 insertions(+), 69 deletions(-)

-- 
2.23.0



* [PATCH 1/4] btrfs: make locking assertion helpers static inline
  2019-09-24 17:33 [PATCH 0/4] Minor cleanups in locking helpers David Sterba
@ 2019-09-24 17:33 ` David Sterba
  2019-09-24 17:33 ` [PATCH 2/4] btrfs: make btrfs_assert_tree_locked " David Sterba
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2019-09-24 17:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

I've noticed that none of the btrfs_assert_*lock* debugging helpers is
inlined, even though they're short and mostly a single value update.
Making them inline shaves 67 bytes from the text size, reduces stack
consumption and perhaps also slightly improves performance by avoiding
unnecessary calls.

Signed-off-by: David Sterba <dsterba@suse.com>
---
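(Illustration only, not part of the patch; userspace stand-ins for
struct extent_buffer and WARN_ON.)  Each helper is essentially one
sanity check plus one counter update, so the call/return overhead
dominates when it is not inlined:

#include <assert.h>

/* Stand-in for struct extent_buffer; only the counter the helpers touch. */
struct eb_stub {
	int spinning_writers;
};

static inline void assert_spinning_writers_get(struct eb_stub *eb)
{
	assert(eb->spinning_writers == 0);	/* stand-in for WARN_ON */
	eb->spinning_writers++;
}

static inline void assert_spinning_writers_put(struct eb_stub *eb)
{
	assert(eb->spinning_writers == 1);
	eb->spinning_writers--;
}

int main(void)
{
	struct eb_stub eb = { 0 };

	assert_spinning_writers_get(&eb);
	assert_spinning_writers_put(&eb);
	return 0;
}
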
 fs/btrfs/locking.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 7f9a578a1a20..409c5a865079 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -14,55 +14,55 @@
 #include "locking.h"
 
 #ifdef CONFIG_BTRFS_DEBUG
-static void btrfs_assert_spinning_writers_get(struct extent_buffer *eb)
+static inline void btrfs_assert_spinning_writers_get(struct extent_buffer *eb)
 {
 	WARN_ON(eb->spinning_writers);
 	eb->spinning_writers++;
 }
 
-static void btrfs_assert_spinning_writers_put(struct extent_buffer *eb)
+static inline void btrfs_assert_spinning_writers_put(struct extent_buffer *eb)
 {
 	WARN_ON(eb->spinning_writers != 1);
 	eb->spinning_writers--;
 }
 
-static void btrfs_assert_no_spinning_writers(struct extent_buffer *eb)
+static inline void btrfs_assert_no_spinning_writers(struct extent_buffer *eb)
 {
 	WARN_ON(eb->spinning_writers);
 }
 
-static void btrfs_assert_spinning_readers_get(struct extent_buffer *eb)
+static inline void btrfs_assert_spinning_readers_get(struct extent_buffer *eb)
 {
 	atomic_inc(&eb->spinning_readers);
 }
 
-static void btrfs_assert_spinning_readers_put(struct extent_buffer *eb)
+static inline void btrfs_assert_spinning_readers_put(struct extent_buffer *eb)
 {
 	WARN_ON(atomic_read(&eb->spinning_readers) == 0);
 	atomic_dec(&eb->spinning_readers);
 }
 
-static void btrfs_assert_tree_read_locks_get(struct extent_buffer *eb)
+static inline void btrfs_assert_tree_read_locks_get(struct extent_buffer *eb)
 {
 	atomic_inc(&eb->read_locks);
 }
 
-static void btrfs_assert_tree_read_locks_put(struct extent_buffer *eb)
+static inline void btrfs_assert_tree_read_locks_put(struct extent_buffer *eb)
 {
 	atomic_dec(&eb->read_locks);
 }
 
-static void btrfs_assert_tree_read_locked(struct extent_buffer *eb)
+static inline void btrfs_assert_tree_read_locked(struct extent_buffer *eb)
 {
 	BUG_ON(!atomic_read(&eb->read_locks));
 }
 
-static void btrfs_assert_tree_write_locks_get(struct extent_buffer *eb)
+static inline void btrfs_assert_tree_write_locks_get(struct extent_buffer *eb)
 {
 	eb->write_locks++;
 }
 
-static void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb)
+static inline void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb)
 {
 	eb->write_locks--;
 }
-- 
2.23.0



* [PATCH 2/4] btrfs: make btrfs_assert_tree_locked static inline
  2019-09-24 17:33 [PATCH 0/4] Minor cleanups in locking helpers David Sterba
  2019-09-24 17:33 ` [PATCH 1/4] btrfs: make locking assertion helpers static inline David Sterba
@ 2019-09-24 17:33 ` David Sterba
  2019-09-24 17:33 ` [PATCH 3/4] btrfs: move btrfs_set_path_blocking to other locking functions David Sterba
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2019-09-24 17:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The function btrfs_assert_tree_locked is used outside of the locking
code, so it is exported; however, we can make it static inline as it's
fairly trivial.

This is the only locking assertion used in release builds; inlining it
shrinks the text size by 174 bytes and reduces stack consumption in the
callers.

Signed-off-by: David Sterba <dsterba@suse.com>
---
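(Not part of the patch, just an illustration: a userspace sketch with a
stand-in type, assert() instead of BUG_ON, and DEBUG_BUILD standing in
for CONFIG_BTRFS_DEBUG.)  The point of defining the stub in the header:
in a release build the empty static inline compiles away at every call
site, whereas previously each caller still emitted a call to an empty
exported function.

#include <assert.h>

/* Stand-in for struct extent_buffer. */
struct eb_stub {
	int write_locks;
};

#ifdef DEBUG_BUILD
static inline void assert_tree_locked(struct eb_stub *eb)
{
	assert(eb->write_locks);
}
#else
/* Release build: an empty inline stub, callers emit no code at all. */
static inline void assert_tree_locked(struct eb_stub *eb) { }
#endif

int main(void)
{
	struct eb_stub eb = { .write_locks = 1 };

	assert_tree_locked(&eb);
	return 0;
}
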
 fs/btrfs/locking.c |  6 ------
 fs/btrfs/locking.h | 10 +++++++++-
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 409c5a865079..028513153ac4 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -67,11 +67,6 @@ static inline void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb)
 	eb->write_locks--;
 }
 
-void btrfs_assert_tree_locked(struct extent_buffer *eb)
-{
-	BUG_ON(!eb->write_locks);
-}
-
 #else
 static void btrfs_assert_spinning_writers_get(struct extent_buffer *eb) { }
 static void btrfs_assert_spinning_writers_put(struct extent_buffer *eb) { }
@@ -81,7 +76,6 @@ static void btrfs_assert_spinning_readers_get(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_read_locked(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_read_locks_get(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_read_locks_put(struct extent_buffer *eb) { }
-void btrfs_assert_tree_locked(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_write_locks_get(struct extent_buffer *eb) { }
 static void btrfs_assert_tree_write_locks_put(struct extent_buffer *eb) { }
 #endif
diff --git a/fs/btrfs/locking.h b/fs/btrfs/locking.h
index b775a4207ed9..ab4020de25e7 100644
--- a/fs/btrfs/locking.h
+++ b/fs/btrfs/locking.h
@@ -6,6 +6,8 @@
 #ifndef BTRFS_LOCKING_H
 #define BTRFS_LOCKING_H
 
+#include "extent_io.h"
+
 #define BTRFS_WRITE_LOCK 1
 #define BTRFS_READ_LOCK 2
 #define BTRFS_WRITE_LOCK_BLOCKING 3
@@ -19,11 +21,17 @@ void btrfs_tree_read_unlock(struct extent_buffer *eb);
 void btrfs_tree_read_unlock_blocking(struct extent_buffer *eb);
 void btrfs_set_lock_blocking_read(struct extent_buffer *eb);
 void btrfs_set_lock_blocking_write(struct extent_buffer *eb);
-void btrfs_assert_tree_locked(struct extent_buffer *eb);
 int btrfs_try_tree_read_lock(struct extent_buffer *eb);
 int btrfs_try_tree_write_lock(struct extent_buffer *eb);
 int btrfs_tree_read_lock_atomic(struct extent_buffer *eb);
 
+#ifdef CONFIG_BTRFS_DEBUG
+static inline void btrfs_assert_tree_locked(struct extent_buffer *eb) {
+	BUG_ON(!eb->write_locks);
+}
+#else
+static inline void btrfs_assert_tree_locked(struct extent_buffer *eb) { }
+#endif
 
 static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw)
 {
-- 
2.23.0



* [PATCH 3/4] btrfs: move btrfs_set_path_blocking to other locking functions
  2019-09-24 17:33 [PATCH 0/4] Minor cleanups in locking helpers David Sterba
  2019-09-24 17:33 ` [PATCH 1/4] btrfs: make locking assertion helpers static inline David Sterba
  2019-09-24 17:33 ` [PATCH 2/4] btrfs: make btrfs_assert_tree_locked " David Sterba
@ 2019-09-24 17:33 ` David Sterba
  2019-09-24 17:33 ` [PATCH 4/4] btrfs: move btrfs_unlock_up_safe " David Sterba
  2019-09-24 20:55 ` [PATCH 0/4] Minor cleanups in locking helpers Josef Bacik
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2019-09-24 17:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The function belongs to the family of locking functions, so move it
there. The 'noinline' keyword is dropped as it's now an exported
function that does not need it.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.c   | 25 -------------------------
 fs/btrfs/locking.c | 26 ++++++++++++++++++++++++++
 fs/btrfs/locking.h |  2 ++
 3 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 0231141de289..a55d55e5c913 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -56,31 +56,6 @@ struct btrfs_path *btrfs_alloc_path(void)
 	return kmem_cache_zalloc(btrfs_path_cachep, GFP_NOFS);
 }
 
-/*
- * set all locked nodes in the path to blocking locks.  This should
- * be done before scheduling
- */
-noinline void btrfs_set_path_blocking(struct btrfs_path *p)
-{
-	int i;
-	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
-		if (!p->nodes[i] || !p->locks[i])
-			continue;
-		/*
-		 * If we currently have a spinning reader or writer lock this
-		 * will bump the count of blocking holders and drop the
-		 * spinlock.
-		 */
-		if (p->locks[i] == BTRFS_READ_LOCK) {
-			btrfs_set_lock_blocking_read(p->nodes[i]);
-			p->locks[i] = BTRFS_READ_LOCK_BLOCKING;
-		} else if (p->locks[i] == BTRFS_WRITE_LOCK) {
-			btrfs_set_lock_blocking_write(p->nodes[i]);
-			p->locks[i] = BTRFS_WRITE_LOCK_BLOCKING;
-		}
-	}
-}
-
 /* this also releases the path */
 void btrfs_free_path(struct btrfs_path *p)
 {
diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index 028513153ac4..f58606887859 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -316,3 +316,29 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
 		write_unlock(&eb->lock);
 	}
 }
+
+/*
+ * Set all locked nodes in the path to blocking locks.  This should be done
+ * before scheduling
+ */
+void btrfs_set_path_blocking(struct btrfs_path *p)
+{
+	int i;
+
+	for (i = 0; i < BTRFS_MAX_LEVEL; i++) {
+		if (!p->nodes[i] || !p->locks[i])
+			continue;
+		/*
+		 * If we currently have a spinning reader or writer lock this
+		 * will bump the count of blocking holders and drop the
+		 * spinlock.
+		 */
+		if (p->locks[i] == BTRFS_READ_LOCK) {
+			btrfs_set_lock_blocking_read(p->nodes[i]);
+			p->locks[i] = BTRFS_READ_LOCK_BLOCKING;
+		} else if (p->locks[i] == BTRFS_WRITE_LOCK) {
+			btrfs_set_lock_blocking_write(p->nodes[i]);
+			p->locks[i] = BTRFS_WRITE_LOCK_BLOCKING;
+		}
+	}
+}
diff --git a/fs/btrfs/locking.h b/fs/btrfs/locking.h
index ab4020de25e7..98c92222eaf0 100644
--- a/fs/btrfs/locking.h
+++ b/fs/btrfs/locking.h
@@ -33,6 +33,8 @@ static inline void btrfs_assert_tree_locked(struct extent_buffer *eb) {
 static inline void btrfs_assert_tree_locked(struct extent_buffer *eb) { }
 #endif
 
+void btrfs_set_path_blocking(struct btrfs_path *p);
+
 static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw)
 {
 	if (rw == BTRFS_WRITE_LOCK || rw == BTRFS_WRITE_LOCK_BLOCKING)
-- 
2.23.0



* [PATCH 4/4] btrfs: move btrfs_unlock_up_safe to other locking functions
  2019-09-24 17:33 [PATCH 0/4] Minor cleanups in locking helpers David Sterba
                   ` (2 preceding siblings ...)
  2019-09-24 17:33 ` [PATCH 3/4] btrfs: move btrfs_set_path_blocking to other locking functions David Sterba
@ 2019-09-24 17:33 ` David Sterba
  2019-09-24 20:55 ` [PATCH 0/4] Minor cleanups in locking helpers Josef Bacik
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2019-09-24 17:33 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The function belongs to the family of locking functions, so move it
there. The 'noinline' keyword is dropped as it's now an exported
function that does not need it.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.c         | 26 --------------------------
 fs/btrfs/ctree.h         |  1 -
 fs/btrfs/delayed-inode.c |  1 +
 fs/btrfs/locking.c       | 26 ++++++++++++++++++++++++++
 fs/btrfs/locking.h       |  1 +
 5 files changed, 28 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index a55d55e5c913..f2f9cf1149a4 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2353,32 +2353,6 @@ static noinline void unlock_up(struct btrfs_path *path, int level,
 	}
 }
 
-/*
- * This releases any locks held in the path starting at level and
- * going all the way up to the root.
- *
- * btrfs_search_slot will keep the lock held on higher nodes in a few
- * corner cases, such as COW of the block at slot zero in the node.  This
- * ignores those rules, and it should only be called when there are no
- * more updates to be done higher up in the tree.
- */
-noinline void btrfs_unlock_up_safe(struct btrfs_path *path, int level)
-{
-	int i;
-
-	if (path->keep_locks)
-		return;
-
-	for (i = level; i < BTRFS_MAX_LEVEL; i++) {
-		if (!path->nodes[i])
-			continue;
-		if (!path->locks[i])
-			continue;
-		btrfs_tree_unlock_rw(path->nodes[i], path->locks[i]);
-		path->locks[i] = 0;
-	}
-}
-
 /*
  * helper function for btrfs_search_slot.  The goal is to find a block
  * in cache without setting the path to blocking.  If we find the block
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 5e7ff169683c..4bf0433b1179 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -2571,7 +2571,6 @@ int btrfs_realloc_node(struct btrfs_trans_handle *trans,
 void btrfs_release_path(struct btrfs_path *p);
 struct btrfs_path *btrfs_alloc_path(void);
 void btrfs_free_path(struct btrfs_path *p);
-void btrfs_set_path_blocking(struct btrfs_path *p);
 void btrfs_unlock_up_safe(struct btrfs_path *p, int level);
 
 int btrfs_del_items(struct btrfs_trans_handle *trans, struct btrfs_root *root,
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 49ec3402886a..942cc2a59164 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -12,6 +12,7 @@
 #include "transaction.h"
 #include "ctree.h"
 #include "qgroup.h"
+#include "locking.h"
 
 #define BTRFS_DELAYED_WRITEBACK		512
 #define BTRFS_DELAYED_BACKGROUND	128
diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index f58606887859..93146b495276 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -342,3 +342,29 @@ void btrfs_set_path_blocking(struct btrfs_path *p)
 		}
 	}
 }
+
+/*
+ * This releases any locks held in the path starting at level and going all the
+ * way up to the root.
+ *
+ * btrfs_search_slot will keep the lock held on higher nodes in a few corner
+ * cases, such as COW of the block at slot zero in the node.  This ignores
+ * those rules, and it should only be called when there are no more updates to
+ * be done higher up in the tree.
+ */
+void btrfs_unlock_up_safe(struct btrfs_path *path, int level)
+{
+	int i;
+
+	if (path->keep_locks)
+		return;
+
+	for (i = level; i < BTRFS_MAX_LEVEL; i++) {
+		if (!path->nodes[i])
+			continue;
+		if (!path->locks[i])
+			continue;
+		btrfs_tree_unlock_rw(path->nodes[i], path->locks[i]);
+		path->locks[i] = 0;
+	}
+}
diff --git a/fs/btrfs/locking.h b/fs/btrfs/locking.h
index 98c92222eaf0..21a285883e89 100644
--- a/fs/btrfs/locking.h
+++ b/fs/btrfs/locking.h
@@ -34,6 +34,7 @@ static inline void btrfs_assert_tree_locked(struct extent_buffer *eb) { }
 #endif
 
 void btrfs_set_path_blocking(struct btrfs_path *p);
+void btrfs_unlock_up_safe(struct btrfs_path *path, int level);
 
 static inline void btrfs_tree_unlock_rw(struct extent_buffer *eb, int rw)
 {
-- 
2.23.0



* Re: [PATCH 0/4] Minor cleanups in locking helpers
  2019-09-24 17:33 [PATCH 0/4] Minor cleanups in locking helpers David Sterba
                   ` (3 preceding siblings ...)
  2019-09-24 17:33 ` [PATCH 4/4] btrfs: move btrfs_unlock_up_safe " David Sterba
@ 2019-09-24 20:55 ` Josef Bacik
  4 siblings, 0 replies; 6+ messages in thread
From: Josef Bacik @ 2019-09-24 20:55 UTC (permalink / raw)
  To: David Sterba; +Cc: linux-btrfs

On Tue, Sep 24, 2019 at 07:33:16PM +0200, David Sterba wrote:
> Reorganizing code wrt locking helpers, minor text and stack savings.
> 

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

