* [PATCH v2 00/12] btrfs: add zstd compression level support
@ 2019-02-04 20:19 Dennis Zhou
  2019-02-04 20:19 ` [PATCH 01/12] btrfs: add helpers for compression type and level Dennis Zhou
                   ` (13 more replies)
  0 siblings, 14 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:19 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Hi everyone,

V2 contains only a handful of changes beyond addressing minor feedback.
0001:
- use functions over macros
0003:
- BTRFS_NR_WORKSPACE_MANAGERS is added instead of overriding
  BTRFS_COMPRESS_TYPES
0011 (new):
- address monotonic memory requirement for zstd workspaces
0012:
- increase reclaim timer to 307s from 67s
- move from keeping track of time in ns to jiffies
- remove timer in cleanup code
- use min_t() instead of if statements in .set_level()
- add header text to describe how workspaces are managed
- nofs_flag type -> unsigned long to unsigned int

From v1:
This is a respin of [1], which aims to add zstd compression level
support. V3 moves away from using set_level() to resize workspaces in
favor of just allocating a workspace of the appropriate level and using
a timer to reclaim unused workspaces.

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow for each
compression type to manage their workspaces independently. This patch
uses this indirection to implement compression level support for zstd.

As mentioned above, a requirement that distinguishes zstd from zlib is
that higher levels of compression require more memory. To manage this,
each compression level has its own queue of workspaces. A global LRU is
used to help with reclaim. To guarantee forward progress, a max level
workspace is preallocated and hidden from the LRU.
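
To make this concrete, the manager introduced in 0012 takes roughly the
following shape (a sketch only; the field names here are illustrative
rather than necessarily the final ones):

  struct zstd_workspace_manager {
          spinlock_t lock;
          /* one idle list per compression level */
          struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
          /* bitmap of levels with at least one idle workspace */
          unsigned long active_map;
          /* global LRU of idle workspaces, most recently used first */
          struct list_head lru_list;
          /* waiters for the preallocated max level workspace */
          wait_queue_head_t wait;
          /* timer to reclaim inactive/improperly sized workspaces */
          struct timer_list timer;
  };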

When getting a workspace, the manager uses a bitmap to identify which
levels are populated and scans upward from the requested level. If it
finds a workspace of a higher level than requested, it uses that
workspace, but does not update its last_used time or its position in
the LRU. This provides a mechanism to decrease memory utilization, as
we only keep around workspaces that are sized appropriately for the
compression levels in use.
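
A minimal sketch of that scan, assuming the active_map bitmap from the
structure above (helper name hypothetical; the LRU bookkeeping is
omitted for brevity):

  /* find an idle workspace of at least @level; caller holds the lock */
  static struct list_head *find_idle_ws(struct zstd_workspace_manager *wsm,
                                        unsigned int level)
  {
          struct list_head *ws;
          /* levels are 1-based, the bitmap is 0-based */
          int i = find_next_bit(&wsm->active_map, ZSTD_BTRFS_MAX_LEVEL,
                                level - 1);

          if (i >= ZSTD_BTRFS_MAX_LEVEL)
                  return NULL;

          ws = wsm->idle_ws[i].next;
          list_del(ws);
          /* level i + 1 may now be drained of idle workspaces */
          if (list_empty(&wsm->idle_ws[i]))
                  __clear_bit(i, &wsm->active_map);

          return ws;
  }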

By knowing which compression levels have available workspaces, we can
recycle workspaces rather than always create new ones, as well as take
advantage of the preallocated max level workspace for forward progress.
If we hit memory pressure, we sleep on the max level workspace. We
continue to rescan in case we can use a smaller workspace, but should
eventually be able to obtain the max level workspace, or allocate one
again should memory pressure subside. The memory requirement for
decompression is the same as for level 1, so decompression can use any
available workspace.
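
Put together, the get path looks roughly like this (helper names again
hypothetical; locking and the nofs handling are elided):

  static struct list_head *zstd_get_ws(struct zstd_workspace_manager *wsm,
                                       unsigned int level)
  {
          struct list_head *ws;
          DEFINE_WAIT(wait);

          for (;;) {
                  /* reuse an idle workspace of at least @level, if any */
                  ws = find_idle_ws(wsm, level);
                  if (ws)
                          return ws;

                  /* none idle: try to allocate, may fail under pressure */
                  ws = alloc_zstd_ws(level);
                  if (!IS_ERR(ws))
                          return ws;

                  /* sleep until the max level workspace is put back */
                  prepare_to_wait(&wsm->wait, &wait, TASK_UNINTERRUPTIBLE);
                  ws = find_idle_ws(wsm, level);
                  if (!ws)
                          schedule();
                  finish_wait(&wsm->wait, &wait);
                  if (ws)
                          return ws;
          }
  }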

The number of workspaces is bounded by the workqueue's concurrency
limit, which is currently 2 per cpu. Second, a reclaim timer is used to
free inactive/improperly sized workspaces. The reclaim timer is set to
67s to avoid colliding with transaction commit (every 30s) and attempts
to reclaim any unused workspace older than 45s.
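
A sketch of the reclaim callback, walking the LRU from its oldest entry
(names hypothetical; this assumes the workspace struct carries list,
lru_list, and last_used members, and omits the per-level bitmap
bookkeeping):

  static void zstd_reclaim_timer_fn(struct timer_list *timer)
  {
          struct zstd_workspace_manager *wsm =
                  container_of(timer, struct zstd_workspace_manager, timer);
          unsigned long threshold = jiffies - 45 * HZ;
          struct workspace *victim, *tmp;

          spin_lock_bh(&wsm->lock);
          list_for_each_entry_safe_reverse(victim, tmp, &wsm->lru_list,
                                           lru_list) {
                  /* the LRU is ordered, everything else is newer */
                  if (time_after(victim->last_used, threshold))
                          break;
                  list_del(&victim->lru_list);
                  list_del(&victim->list);
                  zstd_free_workspace(&victim->list);
          }
          /* rearm only while idle workspaces remain */
          if (!list_empty(&wsm->lru_list))
                  mod_timer(&wsm->timer, jiffies + 67 * HZ);
          spin_unlock_bh(&wsm->lock);
  }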

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)
1       2.658        438.47                910.51
2       2.744        364.86                886.55
3       2.801        336.33                828.41
4       2.858        286.71                886.55
5       2.916        212.77                556.84
6       2.936        119.82                990.85
7       3.000        154.06                849.30
8       3.011        159.54                875.03
9       3.025        100.51                940.15
10      3.033        118.97                616.26
11      3.036         94.19                802.11
12      3.037         73.45                931.49
13      3.041         55.17                835.26
14      3.087         44.70                716.78
15      3.126         37.30                878.84

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

This patchset contains the following 12 patches:
  0001-btrfs-add-helpers-for-compression-type-and-level.patch
  0002-btrfs-rename-workspaces_list-to-workspace_manager.patch
  0003-btrfs-manage-heuristic-workspace-as-index-0.patch
  0004-btrfs-unify-compression-ops-with-workspace_manager.patch
  0005-btrfs-add-helper-methods-for-workspace-manager-init-.patch
  0006-btrfs-add-compression-interface-in-get-put-_workspac.patch
  0007-btrfs-move-to-fn-pointers-for-get-put-workspaces.patch
  0008-btrfs-plumb-level-through-the-compression-interface.patch
  0009-btrfs-change-set_level-to-bound-the-level-passed-in.patch
  0010-btrfs-zstd-use-the-passed-through-level-instead-of-d.patch
  0011-btrfs-make-zstd-memory-requirements-monotonic.patch
  0012-btrfs-add-zstd-compression-level-support.patch

0001 adds helper functions for type_level conversion. 0002 renames
workspaces_list to workspace_manager. 0003 moves back to managing the
heuristic workspaces as the index 0 compression level. 0004-0007 unify
operations with the workspace_manager, with 0007 moving to compression
types owning their workspace_manager. 0008-0010 plumb level through the
compression workspace getting interface and convert set_level() to be a
bounding function rather than one that sets the level on a workspace.
0011 makes zstd compression level memory monotonic. 0012 adds zstd
compression level support.

This patchset is on top of kdave#master d73aba1115cf.

diffstats below:

Dennis Zhou (12):
  btrfs: add helpers for compression type and level
  btrfs: rename workspaces_list to workspace_manager
  btrfs: manage heuristic workspace as index 0
  btrfs: unify compression ops with workspace_manager
  btrfs: add helper methods for workspace manager init and cleanup
  btrfs: add compression interface in (get/put)_workspace()
  btrfs: move to fn pointers for get/put workspaces
  btrfs: plumb level through the compression interface
  btrfs: change set_level() to bound the level passed in
  btrfs: zstd use the passed through level instead of default
  btrfs: make zstd memory requirements monotonic
  btrfs: add zstd compression level support

 fs/btrfs/compression.c | 249 +++++++++++++++------------------
 fs/btrfs/compression.h |  53 ++++++-
 fs/btrfs/lzo.c         |  31 +++-
 fs/btrfs/super.c       |  10 +-
 fs/btrfs/zlib.c        |  45 ++++--
 fs/btrfs/zstd.c        | 311 +++++++++++++++++++++++++++++++++++++++--
 6 files changed, 539 insertions(+), 160 deletions(-)

Thanks,
Dennis

* [PATCH 01/12] btrfs: add helpers for compression type and level
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
@ 2019-02-04 20:19 ` Dennis Zhou
  2019-02-04 20:19 ` [PATCH 02/12] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:19 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

It is very easy to miss places that rely on a certain bitshifting for
deciphering the type_level overloading. Add helpers to do this instead.
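
For illustration, the overloading packs the type into the low four bits
and the level into the next four, so the helpers decode something like
the following (example values only):

  unsigned int type_level = BTRFS_COMPRESS_ZSTD | (3 << 4);

  btrfs_compress_type(type_level);   /* BTRFS_COMPRESS_ZSTD */
  btrfs_compress_level(type_level);  /* 3 */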

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Cc: Omar Sandoval <osandov@osandov.com>
---
 fs/btrfs/compression.c |  2 +-
 fs/btrfs/compression.h | 10 ++++++++++
 fs/btrfs/zlib.c        |  2 +-
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 548057630b69..94a0b0a3a301 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1036,9 +1036,9 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 			 unsigned long *total_in,
 			 unsigned long *total_out)
 {
+	int type = btrfs_compress_type(type_level);
 	struct list_head *workspace;
 	int ret;
-	int type = type_level & 0xF;
 
 	workspace = find_workspace(type);
 
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index ddda9b80bf20..004db0b3111b 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -64,6 +64,16 @@ struct compressed_bio {
 	u32 sums;
 };
 
+static inline unsigned int btrfs_compress_type(unsigned int type_level)
+{
+	return (type_level & 0xF);
+}
+
+static inline unsigned int btrfs_compress_level(unsigned int type_level)
+{
+	return ((type_level & 0xF0) >> 4);
+}
+
 void __init btrfs_init_compress(void);
 void __cold btrfs_exit_compress(void);
 
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 970ff3e35bb3..2bd655c4f8b4 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -393,7 +393,7 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 static void zlib_set_level(struct list_head *ws, unsigned int type)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	unsigned level = (type & 0xF0) >> 4;
+	unsigned int level = btrfs_compress_level(type);
 
 	if (level > 9)
 		level = 9;
-- 
2.17.1


* [PATCH 02/12] btrfs: rename workspaces_list to workspace_manager
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
  2019-02-04 20:19 ` [PATCH 01/12] btrfs: add helpers for compression type and level Dennis Zhou
@ 2019-02-04 20:19 ` Dennis Zhou
  2019-02-04 20:19 ` [PATCH 03/12] btrfs: manage heuristic workspace as index 0 Dennis Zhou
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:19 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

This is in preparation for zstd compression levels. As each level will
require a different sized workspace, workspaces_list is no longer a
fitting name.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 46 +++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 94a0b0a3a301..d098df768b67 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -769,7 +769,7 @@ static struct list_head *alloc_heuristic_ws(void)
 	return ERR_PTR(-ENOMEM);
 }
 
-struct workspaces_list {
+struct workspace_manager {
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
 	/* Number of free workspaces */
@@ -780,9 +780,9 @@ struct workspaces_list {
 	wait_queue_head_t ws_wait;
 };
 
-static struct workspaces_list btrfs_comp_ws[BTRFS_COMPRESS_TYPES];
+static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
 
-static struct workspaces_list btrfs_heuristic_ws;
+static struct workspace_manager btrfs_heuristic_ws;
 
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zlib_compress,
@@ -811,10 +811,10 @@ void __init btrfs_init_compress(void)
 	}
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		INIT_LIST_HEAD(&btrfs_comp_ws[i].idle_ws);
-		spin_lock_init(&btrfs_comp_ws[i].ws_lock);
-		atomic_set(&btrfs_comp_ws[i].total_ws, 0);
-		init_waitqueue_head(&btrfs_comp_ws[i].ws_wait);
+		INIT_LIST_HEAD(&wsm[i].idle_ws);
+		spin_lock_init(&wsm[i].ws_lock);
+		atomic_set(&wsm[i].total_ws, 0);
+		init_waitqueue_head(&wsm[i].ws_wait);
 
 		/*
 		 * Preallocate one workspace for each compression type so
@@ -824,9 +824,9 @@ void __init btrfs_init_compress(void)
 		if (IS_ERR(workspace)) {
 			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 		} else {
-			atomic_set(&btrfs_comp_ws[i].total_ws, 1);
-			btrfs_comp_ws[i].free_ws = 1;
-			list_add(workspace, &btrfs_comp_ws[i].idle_ws);
+			atomic_set(&wsm[i].total_ws, 1);
+			wsm[i].free_ws = 1;
+			list_add(workspace, &wsm[i].idle_ws);
 		}
 	}
 }
@@ -856,11 +856,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
 		free_ws	 = &btrfs_heuristic_ws.free_ws;
 	} else {
-		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
-		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
-		total_ws = &btrfs_comp_ws[idx].total_ws;
-		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
-		free_ws	 = &btrfs_comp_ws[idx].free_ws;
+		idle_ws	 = &wsm[idx].idle_ws;
+		ws_lock	 = &wsm[idx].ws_lock;
+		total_ws = &wsm[idx].total_ws;
+		ws_wait	 = &wsm[idx].ws_wait;
+		free_ws	 = &wsm[idx].free_ws;
 	}
 
 again:
@@ -952,11 +952,11 @@ static void __free_workspace(int type, struct list_head *workspace,
 		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
 		free_ws	 = &btrfs_heuristic_ws.free_ws;
 	} else {
-		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
-		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
-		total_ws = &btrfs_comp_ws[idx].total_ws;
-		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
-		free_ws	 = &btrfs_comp_ws[idx].free_ws;
+		idle_ws	 = &wsm[idx].idle_ws;
+		ws_lock	 = &wsm[idx].ws_lock;
+		total_ws = &wsm[idx].total_ws;
+		ws_wait	 = &wsm[idx].ws_wait;
+		free_ws	 = &wsm[idx].free_ws;
 	}
 
 	spin_lock(ws_lock);
@@ -998,11 +998,11 @@ static void free_workspaces(void)
 	}
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		while (!list_empty(&btrfs_comp_ws[i].idle_ws)) {
-			workspace = btrfs_comp_ws[i].idle_ws.next;
+		while (!list_empty(&wsm[i].idle_ws)) {
+			workspace = wsm[i].idle_ws.next;
 			list_del(workspace);
 			btrfs_compress_op[i]->free_workspace(workspace);
-			atomic_dec(&btrfs_comp_ws[i].total_ws);
+			atomic_dec(&wsm[i].total_ws);
 		}
 	}
 }
-- 
2.17.1


* [PATCH 03/12] btrfs: manage heuristic workspace as index 0
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
  2019-02-04 20:19 ` [PATCH 01/12] btrfs: add helpers for compression type and level Dennis Zhou
  2019-02-04 20:19 ` [PATCH 02/12] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
@ 2019-02-04 20:19 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 04/12] btrfs: unify compression ops with workspace_manager Dennis Zhou
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:19 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

While the heuristic workspaces aren't really compression workspaces,
they use the same interface for managing them. So rather than branching,
let's just handle them once again as the index 0 compression type.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
v2:
- added BTRFS_NR_WORKSPACE_MANAGERS instead of overriding
  BTRFS_COMPRESS_TYPES

 fs/btrfs/compression.c | 111 +++++++++++------------------------------
 fs/btrfs/compression.h |   4 ++
 2 files changed, 33 insertions(+), 82 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index d098df768b67..5b508cb3b236 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -769,6 +769,11 @@ static struct list_head *alloc_heuristic_ws(void)
 	return ERR_PTR(-ENOMEM);
 }
 
+const struct btrfs_compress_op btrfs_heuristic_compress = {
+	.alloc_workspace = alloc_heuristic_ws,
+	.free_workspace = free_heuristic_ws,
+};
+
 struct workspace_manager {
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
@@ -780,11 +785,10 @@ struct workspace_manager {
 	wait_queue_head_t ws_wait;
 };
 
-static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
-
-static struct workspace_manager btrfs_heuristic_ws;
+static struct workspace_manager wsm[BTRFS_NR_WORKSPACE_MANAGERS];
 
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
+	&btrfs_heuristic_compress,
 	&btrfs_zlib_compress,
 	&btrfs_lzo_compress,
 	&btrfs_zstd_compress,
@@ -795,22 +799,7 @@ void __init btrfs_init_compress(void)
 	struct list_head *workspace;
 	int i;
 
-	INIT_LIST_HEAD(&btrfs_heuristic_ws.idle_ws);
-	spin_lock_init(&btrfs_heuristic_ws.ws_lock);
-	atomic_set(&btrfs_heuristic_ws.total_ws, 0);
-	init_waitqueue_head(&btrfs_heuristic_ws.ws_wait);
-
-	workspace = alloc_heuristic_ws();
-	if (IS_ERR(workspace)) {
-		pr_warn(
-	"BTRFS: cannot preallocate heuristic workspace, will try later\n");
-	} else {
-		atomic_set(&btrfs_heuristic_ws.total_ws, 1);
-		btrfs_heuristic_ws.free_ws = 1;
-		list_add(workspace, &btrfs_heuristic_ws.idle_ws);
-	}
-
-	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
+	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++) {
 		INIT_LIST_HEAD(&wsm[i].idle_ws);
 		spin_lock_init(&wsm[i].ws_lock);
 		atomic_set(&wsm[i].total_ws, 0);
@@ -837,11 +826,10 @@ void __init btrfs_init_compress(void)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *__find_workspace(int type, bool heuristic)
+static struct list_head *find_workspace(int type)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
-	int idx = type - 1;
 	unsigned nofs_flag;
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -849,19 +837,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	if (heuristic) {
-		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
-		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
-		total_ws = &btrfs_heuristic_ws.total_ws;
-		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
-		free_ws	 = &btrfs_heuristic_ws.free_ws;
-	} else {
-		idle_ws	 = &wsm[idx].idle_ws;
-		ws_lock	 = &wsm[idx].ws_lock;
-		total_ws = &wsm[idx].total_ws;
-		ws_wait	 = &wsm[idx].ws_wait;
-		free_ws	 = &wsm[idx].free_ws;
-	}
+	idle_ws	 = &wsm[type].idle_ws;
+	ws_lock	 = &wsm[type].ws_lock;
+	total_ws = &wsm[type].total_ws;
+	ws_wait	 = &wsm[type].ws_wait;
+	free_ws	 = &wsm[type].free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -892,10 +872,7 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	if (heuristic)
-		workspace = alloc_heuristic_ws();
-	else
-		workspace = btrfs_compress_op[idx]->alloc_workspace();
+	workspace = btrfs_compress_op[type]->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -926,38 +903,23 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	return workspace;
 }
 
-static struct list_head *find_workspace(int type)
-{
-	return __find_workspace(type, false);
-}
-
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void __free_workspace(int type, struct list_head *workspace,
-			     bool heuristic)
+static void free_workspace(int type, struct list_head *workspace)
 {
-	int idx = type - 1;
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
 	atomic_t *total_ws;
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	if (heuristic) {
-		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
-		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
-		total_ws = &btrfs_heuristic_ws.total_ws;
-		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
-		free_ws	 = &btrfs_heuristic_ws.free_ws;
-	} else {
-		idle_ws	 = &wsm[idx].idle_ws;
-		ws_lock	 = &wsm[idx].ws_lock;
-		total_ws = &wsm[idx].total_ws;
-		ws_wait	 = &wsm[idx].ws_wait;
-		free_ws	 = &wsm[idx].free_ws;
-	}
+	idle_ws	 = &wsm[type].idle_ws;
+	ws_lock	 = &wsm[type].ws_lock;
+	total_ws = &wsm[type].total_ws;
+	ws_wait	 = &wsm[type].ws_wait;
+	free_ws	 = &wsm[type].free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
@@ -968,20 +930,12 @@ static void __free_workspace(int type, struct list_head *workspace,
 	}
 	spin_unlock(ws_lock);
 
-	if (heuristic)
-		free_heuristic_ws(workspace);
-	else
-		btrfs_compress_op[idx]->free_workspace(workspace);
+	btrfs_compress_op[type]->free_workspace(workspace);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
 }
 
-static void free_workspace(int type, struct list_head *ws)
-{
-	return __free_workspace(type, ws, false);
-}
-
 /*
  * cleanup function for module exit
  */
@@ -990,14 +944,7 @@ static void free_workspaces(void)
 	struct list_head *workspace;
 	int i;
 
-	while (!list_empty(&btrfs_heuristic_ws.idle_ws)) {
-		workspace = btrfs_heuristic_ws.idle_ws.next;
-		list_del(workspace);
-		free_heuristic_ws(workspace);
-		atomic_dec(&btrfs_heuristic_ws.total_ws);
-	}
-
-	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
+	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++) {
 		while (!list_empty(&wsm[i].idle_ws)) {
 			workspace = wsm[i].idle_ws.next;
 			list_del(workspace);
@@ -1042,8 +989,8 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 
 	workspace = find_workspace(type);
 
-	btrfs_compress_op[type - 1]->set_level(workspace, type_level);
-	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
+	btrfs_compress_op[type]->set_level(workspace, type_level);
+	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
 						      total_in, total_out);
@@ -1072,7 +1019,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int type = cb->compress_type;
 
 	workspace = find_workspace(type);
-	ret = btrfs_compress_op[type - 1]->decompress_bio(workspace, cb);
+	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
 	free_workspace(type, workspace);
 
 	return ret;
@@ -1091,7 +1038,7 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 
 	workspace = find_workspace(type);
 
-	ret = btrfs_compress_op[type-1]->decompress(workspace, data_in,
+	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
 
@@ -1512,7 +1459,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = __find_workspace(0, true);
+	struct list_head *ws_list = find_workspace(0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
@@ -1581,7 +1528,7 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	}
 
 out:
-	__free_workspace(0, ws_list, true);
+	free_workspace(0, ws_list);
 	return ret;
 }
 
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 004db0b3111b..7ef62caa60ab 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -132,6 +132,10 @@ struct btrfs_compress_op {
 	void (*set_level)(struct list_head *ws, unsigned int type);
 };
 
+/* the heuristic workspaces are managed via the 0th workspace manager */
+#define BTRFS_NR_WORKSPACE_MANAGERS	(BTRFS_COMPRESS_TYPES + 1)
+
+extern const struct btrfs_compress_op btrfs_heuristic_compress;
 extern const struct btrfs_compress_op btrfs_zlib_compress;
 extern const struct btrfs_compress_op btrfs_lzo_compress;
 extern const struct btrfs_compress_op btrfs_zstd_compress;
-- 
2.17.1


* [PATCH 04/12] btrfs: unify compression ops with workspace_manager
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (2 preceding siblings ...)
  2019-02-04 20:19 ` [PATCH 03/12] btrfs: manage heuristic workspace as index 0 Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 05/12] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Make the workspace_manager own the interface operations rather than
managing index-paired arrays for the workspace_manager and compression
operations.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 5b508cb3b236..ef560b47b9c5 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -775,6 +775,7 @@ const struct btrfs_compress_op btrfs_heuristic_compress = {
 };
 
 struct workspace_manager {
+	const struct btrfs_compress_op *ops;
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
 	/* Number of free workspaces */
@@ -800,6 +801,8 @@ void __init btrfs_init_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++) {
+		wsm[i].ops = btrfs_compress_op[i];
+
 		INIT_LIST_HEAD(&wsm[i].idle_ws);
 		spin_lock_init(&wsm[i].ws_lock);
 		atomic_set(&wsm[i].total_ws, 0);
@@ -809,7 +812,7 @@ void __init btrfs_init_compress(void)
 		 * Preallocate one workspace for each compression type so
 		 * we can guarantee forward progress in the worst case
 		 */
-		workspace = btrfs_compress_op[i]->alloc_workspace();
+		workspace = wsm[i].ops->alloc_workspace();
 		if (IS_ERR(workspace)) {
 			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 		} else {
@@ -872,7 +875,7 @@ static struct list_head *find_workspace(int type)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = btrfs_compress_op[type]->alloc_workspace();
+	workspace = wsm[type].ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -930,7 +933,7 @@ static void free_workspace(int type, struct list_head *workspace)
 	}
 	spin_unlock(ws_lock);
 
-	btrfs_compress_op[type]->free_workspace(workspace);
+	wsm[type].ops->free_workspace(workspace);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
@@ -948,7 +951,7 @@ static void free_workspaces(void)
 		while (!list_empty(&wsm[i].idle_ws)) {
 			workspace = wsm[i].idle_ws.next;
 			list_del(workspace);
-			btrfs_compress_op[i]->free_workspace(workspace);
+			wsm[i].ops->free_workspace(workspace);
 			atomic_dec(&wsm[i].total_ws);
 		}
 	}
-- 
2.17.1


* [PATCH 05/12] btrfs: add helper methods for workspace manager init and cleanup
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (3 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 04/12] btrfs: unify compression ops with workspace_manager Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 06/12] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Workspace manager init and cleanup code is open coded inside a for loop
over the compression types. This forces each compression type to rely on
the same workspace manager implementation. This patch creates helper
methods that will be the generic implementation for btrfs workspace
management.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 81 ++++++++++++++++++++++--------------------
 1 file changed, 42 insertions(+), 39 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index ef560b47b9c5..b213d1efb586 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -795,31 +795,41 @@ static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zstd_compress,
 };
 
-void __init btrfs_init_compress(void)
+static void btrfs_init_workspace_manager(int type)
 {
+	struct workspace_manager *wsman = &wsm[type];
 	struct list_head *workspace;
-	int i;
 
-	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++) {
-		wsm[i].ops = btrfs_compress_op[i];
+	wsman->ops = btrfs_compress_op[type];
 
-		INIT_LIST_HEAD(&wsm[i].idle_ws);
-		spin_lock_init(&wsm[i].ws_lock);
-		atomic_set(&wsm[i].total_ws, 0);
-		init_waitqueue_head(&wsm[i].ws_wait);
+	INIT_LIST_HEAD(&wsman->idle_ws);
+	spin_lock_init(&wsman->ws_lock);
+	atomic_set(&wsman->total_ws, 0);
+	init_waitqueue_head(&wsman->ws_wait);
 
-		/*
-		 * Preallocate one workspace for each compression type so
-		 * we can guarantee forward progress in the worst case
-		 */
-		workspace = wsm[i].ops->alloc_workspace();
-		if (IS_ERR(workspace)) {
-			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
-		} else {
-			atomic_set(&wsm[i].total_ws, 1);
-			wsm[i].free_ws = 1;
-			list_add(workspace, &wsm[i].idle_ws);
-		}
+	/*
+	 * Preallocate one workspace for each compression type so
+	 * we can guarantee forward progress in the worst case
+	 */
+	workspace = wsman->ops->alloc_workspace();
+	if (IS_ERR(workspace)) {
+		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
+	} else {
+		atomic_set(&wsman->total_ws, 1);
+		wsman->free_ws = 1;
+		list_add(workspace, &wsman->idle_ws);
+	}
+}
+
+static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
+{
+	struct list_head *ws;
+
+	while (!list_empty(&wsman->idle_ws)) {
+		ws = wsman->idle_ws.next;
+		list_del(ws);
+		wsman->ops->free_workspace(ws);
+		atomic_dec(&wsman->total_ws);
 	}
 }
 
@@ -939,24 +949,6 @@ static void free_workspace(int type, struct list_head *workspace)
 	cond_wake_up(ws_wait);
 }
 
-/*
- * cleanup function for module exit
- */
-static void free_workspaces(void)
-{
-	struct list_head *workspace;
-	int i;
-
-	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++) {
-		while (!list_empty(&wsm[i].idle_ws)) {
-			workspace = wsm[i].idle_ws.next;
-			list_del(workspace);
-			wsm[i].ops->free_workspace(workspace);
-			atomic_dec(&wsm[i].total_ws);
-		}
-	}
-}
-
 /*
  * Given an address space and start and length, compress the bytes into @pages
  * that are allocated on demand.
@@ -1049,9 +1041,20 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	return ret;
 }
 
+void __init btrfs_init_compress(void)
+{
+	int i;
+
+	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++)
+		btrfs_init_workspace_manager(i);
+}
+
 void __cold btrfs_exit_compress(void)
 {
-	free_workspaces();
+	int i;
+
+	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++)
+		btrfs_cleanup_workspace_manager(&wsm[i]);
 }
 
 /*
-- 
2.17.1


* [PATCH 06/12] btrfs: add compression interface in (get/put)_workspace()
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (4 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 05/12] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 07/12] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

There are two levels of workspace management. First, alloc()/free(),
which are responsible for actually creating and destroying workspaces.
Second, at a higher level, get()/put(), which is the compression code
asking for a workspace from a workspace_manager.

The compression code shouldn't really care how it gets a workspace,
only that it gets one. This adds get_workspace() and put_workspace() as
the higher level interface responsible for indexing into the
appropriate compression type. It also introduces btrfs_get_workspace()
and btrfs_put_workspace() as the generic implementations of the higher
interface.
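
The resulting layering, sketched as a call flow (with this patch the
generic helpers are still called directly; a later patch moves them
behind fn pointers):

  get_workspace(type)
      -> btrfs_get_workspace(&wsm[type])        /* reuse idle ws or alloc */
             -> wsm[type].ops->alloc_workspace()  /* only if none idle */

  put_workspace(type, ws)
      -> btrfs_put_workspace(&wsm[type], ws)    /* park ws on idle list */
             -> wsm[type].ops->free_workspace()   /* only if over limit */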

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 57 +++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index b213d1efb586..98e1b222f78c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -839,7 +839,7 @@ static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *find_workspace(int type)
+static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -850,11 +850,11 @@ static struct list_head *find_workspace(int type)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsm[type].idle_ws;
-	ws_lock	 = &wsm[type].ws_lock;
-	total_ws = &wsm[type].total_ws;
-	ws_wait	 = &wsm[type].ws_wait;
-	free_ws	 = &wsm[type].free_ws;
+	idle_ws	 = &wsman->idle_ws;
+	ws_lock	 = &wsman->ws_lock;
+	total_ws = &wsman->total_ws;
+	ws_wait	 = &wsman->ws_wait;
+	free_ws	 = &wsman->free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -885,7 +885,7 @@ static struct list_head *find_workspace(int type)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsm[type].ops->alloc_workspace();
+	workspace = wsman->ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -916,11 +916,17 @@ static struct list_head *find_workspace(int type)
 	return workspace;
 }
 
+static struct list_head *get_workspace(int type)
+{
+	return btrfs_get_workspace(&wsm[type]);
+}
+
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void free_workspace(int type, struct list_head *workspace)
+static void btrfs_put_workspace(struct workspace_manager *wsman,
+				struct list_head *ws)
 {
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -928,27 +934,32 @@ static void free_workspace(int type, struct list_head *workspace)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsm[type].idle_ws;
-	ws_lock	 = &wsm[type].ws_lock;
-	total_ws = &wsm[type].total_ws;
-	ws_wait	 = &wsm[type].ws_wait;
-	free_ws	 = &wsm[type].free_ws;
+	idle_ws	 = &wsman->idle_ws;
+	ws_lock	 = &wsman->ws_lock;
+	total_ws = &wsman->total_ws;
+	ws_wait	 = &wsman->ws_wait;
+	free_ws	 = &wsman->free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
-		list_add(workspace, idle_ws);
+		list_add(ws, idle_ws);
 		(*free_ws)++;
 		spin_unlock(ws_lock);
 		goto wake;
 	}
 	spin_unlock(ws_lock);
 
-	wsm[type].ops->free_workspace(workspace);
+	wsman->ops->free_workspace(ws);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
 }
 
+static void put_workspace(int type, struct list_head *ws)
+{
+	return btrfs_put_workspace(&wsm[type], ws);
+}
+
 /*
  * Given an address space and start and length, compress the bytes into @pages
  * that are allocated on demand.
@@ -982,14 +993,14 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 
 	btrfs_compress_op[type]->set_level(workspace, type_level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
 						      total_in, total_out);
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 	return ret;
 }
 
@@ -1013,9 +1024,9 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int ret;
 	int type = cb->compress_type;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 
 	return ret;
 }
@@ -1031,13 +1042,13 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 
 	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
 
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 	return ret;
 }
 
@@ -1465,7 +1476,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = find_workspace(0);
+	struct list_head *ws_list = get_workspace(0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
@@ -1534,7 +1545,7 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	}
 
 out:
-	free_workspace(0, ws_list);
+	put_workspace(0, ws_list);
 	return ret;
 }
 
-- 
2.17.1


* [PATCH 07/12] btrfs: move to fn pointers for get/put workspaces
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (5 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 06/12] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 08/12] btrfs: plumb level through the compression interface Dennis Zhou
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

The previous patch added generic helpers for get_workspace() and
put_workspace(). Now we can migrate ownership of the workspace_manager
into the compression type code, as the compression code itself doesn't
care beyond being able to get a workspace. The init/cleanup and get/put
methods are abstracted so each compression algorithm can decide how it
wants to manage its workspaces.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 101 +++++++++++++++++++++++------------------
 fs/btrfs/compression.h |  26 +++++++++++
 fs/btrfs/lzo.c         |  26 +++++++++++
 fs/btrfs/zlib.c        |  26 +++++++++++
 fs/btrfs/zstd.c        |  26 +++++++++++
 5 files changed, 160 insertions(+), 45 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 98e1b222f78c..3d42ab71455d 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -730,6 +730,28 @@ struct heuristic_ws {
 	struct list_head list;
 };
 
+static struct workspace_manager heuristic_wsm;
+
+static void heuristic_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&heuristic_wsm, &btrfs_heuristic_compress);
+}
+
+static void heuristic_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&heuristic_wsm);
+}
+
+static struct list_head *heuristic_get_workspace(void)
+{
+	return btrfs_get_workspace(&heuristic_wsm);
+}
+
+static void heuristic_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&heuristic_wsm, ws);
+}
+
 static void free_heuristic_ws(struct list_head *ws)
 {
 	struct heuristic_ws *workspace;
@@ -770,24 +792,14 @@ static struct list_head *alloc_heuristic_ws(void)
 }
 
 const struct btrfs_compress_op btrfs_heuristic_compress = {
+	.init_workspace_manager = heuristic_init_workspace_manager,
+	.cleanup_workspace_manager = heuristic_cleanup_workspace_manager,
+	.get_workspace = heuristic_get_workspace,
+	.put_workspace = heuristic_put_workspace,
 	.alloc_workspace = alloc_heuristic_ws,
 	.free_workspace = free_heuristic_ws,
 };
 
-struct workspace_manager {
-	const struct btrfs_compress_op *ops;
-	struct list_head idle_ws;
-	spinlock_t ws_lock;
-	/* Number of free workspaces */
-	int free_ws;
-	/* Total number of allocated workspaces */
-	atomic_t total_ws;
-	/* Waiters for a free workspace */
-	wait_queue_head_t ws_wait;
-};
-
-static struct workspace_manager wsm[BTRFS_NR_WORKSPACE_MANAGERS];
-
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_heuristic_compress,
 	&btrfs_zlib_compress,
@@ -795,33 +807,33 @@ static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zstd_compress,
 };
 
-static void btrfs_init_workspace_manager(int type)
+void btrfs_init_workspace_manager(struct workspace_manager *wsm,
+				  const struct btrfs_compress_op *ops)
 {
-	struct workspace_manager *wsman = &wsm[type];
 	struct list_head *workspace;
 
-	wsman->ops = btrfs_compress_op[type];
+	wsm->ops = ops;
 
-	INIT_LIST_HEAD(&wsman->idle_ws);
-	spin_lock_init(&wsman->ws_lock);
-	atomic_set(&wsman->total_ws, 0);
-	init_waitqueue_head(&wsman->ws_wait);
+	INIT_LIST_HEAD(&wsm->idle_ws);
+	spin_lock_init(&wsm->ws_lock);
+	atomic_set(&wsm->total_ws, 0);
+	init_waitqueue_head(&wsm->ws_wait);
 
 	/*
 	 * Preallocate one workspace for each compression type so
 	 * we can guarantee forward progress in the worst case
 	 */
-	workspace = wsman->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace();
 	if (IS_ERR(workspace)) {
 		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 	} else {
-		atomic_set(&wsman->total_ws, 1);
-		wsman->free_ws = 1;
-		list_add(workspace, &wsman->idle_ws);
+		atomic_set(&wsm->total_ws, 1);
+		wsm->free_ws = 1;
+		list_add(workspace, &wsm->idle_ws);
 	}
 }
 
-static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
+void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
 {
 	struct list_head *ws;
 
@@ -839,7 +851,7 @@ static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -850,11 +862,11 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsman->idle_ws;
-	ws_lock	 = &wsman->ws_lock;
-	total_ws = &wsman->total_ws;
-	ws_wait	 = &wsman->ws_wait;
-	free_ws	 = &wsman->free_ws;
+	idle_ws	 = &wsm->idle_ws;
+	ws_lock	 = &wsm->ws_lock;
+	total_ws = &wsm->total_ws;
+	ws_wait	 = &wsm->ws_wait;
+	free_ws	 = &wsm->free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -885,7 +897,7 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsman->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -918,15 +930,14 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 
 static struct list_head *get_workspace(int type)
 {
-	return btrfs_get_workspace(&wsm[type]);
+	return btrfs_compress_op[type]->get_workspace();
 }
 
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void btrfs_put_workspace(struct workspace_manager *wsman,
-				struct list_head *ws)
+void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws)
 {
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -934,11 +945,11 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsman->idle_ws;
-	ws_lock	 = &wsman->ws_lock;
-	total_ws = &wsman->total_ws;
-	ws_wait	 = &wsman->ws_wait;
-	free_ws	 = &wsman->free_ws;
+	idle_ws	 = &wsm->idle_ws;
+	ws_lock	 = &wsm->ws_lock;
+	total_ws = &wsm->total_ws;
+	ws_wait	 = &wsm->ws_wait;
+	free_ws	 = &wsm->free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
@@ -949,7 +960,7 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 	}
 	spin_unlock(ws_lock);
 
-	wsman->ops->free_workspace(ws);
+	wsm->ops->free_workspace(ws);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
@@ -957,7 +968,7 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 
 static void put_workspace(int type, struct list_head *ws)
 {
-	return btrfs_put_workspace(&wsm[type], ws);
+	return btrfs_compress_op[type]->put_workspace(ws);
 }
 
 /*
@@ -1057,7 +1068,7 @@ void __init btrfs_init_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++)
-		btrfs_init_workspace_manager(i);
+		btrfs_compress_op[i]->init_workspace_manager();
 }
 
 void __cold btrfs_exit_compress(void)
@@ -1065,7 +1076,7 @@ void __cold btrfs_exit_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_NR_WORKSPACE_MANAGERS; i++)
-		btrfs_cleanup_workspace_manager(&wsm[i]);
+		btrfs_compress_op[i]->cleanup_workspace_manager();
 }
 
 /*
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 7ef62caa60ab..8422e6ce4838 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -107,7 +107,33 @@ enum btrfs_compression_type {
 	BTRFS_COMPRESS_TYPES = 3,
 };
 
+struct workspace_manager {
+	const struct btrfs_compress_op *ops;
+	struct list_head idle_ws;
+	spinlock_t ws_lock;
+	/* Number of free workspaces */
+	int free_ws;
+	/* Total number of allocated workspaces */
+	atomic_t total_ws;
+	/* Waiters for a free workspace */
+	wait_queue_head_t ws_wait;
+};
+
+void btrfs_init_workspace_manager(struct workspace_manager *wsm,
+				  const struct btrfs_compress_op *ops);
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm);
+void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws);
+void btrfs_cleanup_workspace_manager(struct workspace_manager *wsm);
+
 struct btrfs_compress_op {
+	void (*init_workspace_manager)(void);
+
+	void (*cleanup_workspace_manager)(void);
+
+	struct list_head *(*get_workspace)(void);
+
+	void (*put_workspace)(struct list_head *ws);
+
 	struct list_head *(*alloc_workspace)(void);
 
 	void (*free_workspace)(struct list_head *workspace);
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 90639140439f..f0837b2c8e94 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -61,6 +61,28 @@ struct workspace {
 	struct list_head list;
 };
 
+static struct workspace_manager wsm;
+
+static void lzo_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_lzo_compress);
+}
+
+static void lzo_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *lzo_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void lzo_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void lzo_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -490,6 +512,10 @@ static void lzo_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_lzo_compress = {
+	.init_workspace_manager	= lzo_init_workspace_manager,
+	.cleanup_workspace_manager = lzo_cleanup_workspace_manager,
+	.get_workspace		= lzo_get_workspace,
+	.put_workspace		= lzo_put_workspace,
 	.alloc_workspace	= lzo_alloc_workspace,
 	.free_workspace		= lzo_free_workspace,
 	.compress_pages		= lzo_compress_pages,
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 2bd655c4f8b4..773d1d70ceec 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -27,6 +27,28 @@ struct workspace {
 	int level;
 };
 
+static struct workspace_manager wsm;
+
+static void zlib_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_zlib_compress);
+}
+
+static void zlib_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *zlib_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void zlib_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void zlib_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -402,6 +424,10 @@ static void zlib_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_zlib_compress = {
+	.init_workspace_manager	= zlib_init_workspace_manager,
+	.cleanup_workspace_manager = zlib_cleanup_workspace_manager,
+	.get_workspace		= zlib_get_workspace,
+	.put_workspace		= zlib_put_workspace,
 	.alloc_workspace	= zlib_alloc_workspace,
 	.free_workspace		= zlib_free_workspace,
 	.compress_pages		= zlib_compress_pages,
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index af6ec59972f5..b06eaf171be7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -41,6 +41,28 @@ struct workspace {
 	ZSTD_outBuffer out_buf;
 };
 
+static struct workspace_manager wsm;
+
+static void zstd_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+}
+
+static void zstd_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *zstd_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void zstd_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void zstd_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -424,6 +446,10 @@ static void zstd_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
+	.init_workspace_manager = zstd_init_workspace_manager,
+	.cleanup_workspace_manager = zstd_cleanup_workspace_manager,
+	.get_workspace = zstd_get_workspace,
+	.put_workspace = zstd_put_workspace,
 	.alloc_workspace = zstd_alloc_workspace,
 	.free_workspace = zstd_free_workspace,
 	.compress_pages = zstd_compress_pages,
-- 
2.17.1


* [PATCH 08/12] btrfs: plumb level through the compression interface
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (6 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 07/12] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 09/12] btrfs: change set_level() to bound the level passed in Dennis Zhou
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Zlib compression supports multiple levels, but doesn't require changes
in how a workspace itself is created and managed. Zstd introduces a
different memory requirement such that higher levels of compression
require more memory. This requires changes in how the alloc()/get()
methods work for zstd. This patch plumbs compression level through the
interface as a parameter in preparation for zstd compression levels.
This gives the compression types the opportunity to create/manage their
workspaces based on the compression level.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/compression.c | 31 ++++++++++++++++---------------
 fs/btrfs/compression.h |  7 ++++---
 fs/btrfs/lzo.c         |  6 +++---
 fs/btrfs/zlib.c        |  7 ++++---
 fs/btrfs/zstd.c        |  6 +++---
 5 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 3d42ab71455d..a54b210739bc 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -742,9 +742,9 @@ static void heuristic_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&heuristic_wsm);
 }
 
-static struct list_head *heuristic_get_workspace(void)
+static struct list_head *heuristic_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&heuristic_wsm);
+	return btrfs_get_workspace(&heuristic_wsm, level);
 }
 
 static void heuristic_put_workspace(struct list_head *ws)
@@ -764,7 +764,7 @@ static void free_heuristic_ws(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *alloc_heuristic_ws(void)
+static struct list_head *alloc_heuristic_ws(unsigned int level)
 {
 	struct heuristic_ws *ws;
 
@@ -823,7 +823,7 @@ void btrfs_init_workspace_manager(struct workspace_manager *wsm,
 	 * Preallocate one workspace for each compression type so
 	 * we can guarantee forward progress in the worst case
 	 */
-	workspace = wsm->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace(0);
 	if (IS_ERR(workspace)) {
 		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 	} else {
@@ -851,7 +851,8 @@ void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
+				      unsigned int level)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -897,7 +898,7 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsm->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace(level);
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -928,9 +929,9 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 	return workspace;
 }
 
-static struct list_head *get_workspace(int type)
+static struct list_head *get_workspace(int type, int level)
 {
-	return btrfs_compress_op[type]->get_workspace();
+	return btrfs_compress_op[type]->get_workspace(level);
 }
 
 /*
@@ -1001,12 +1002,13 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 			 unsigned long *total_out)
 {
 	int type = btrfs_compress_type(type_level);
+	int level = btrfs_compress_level(type_level);
 	struct list_head *workspace;
 	int ret;
 
-	workspace = get_workspace(type);
+	workspace = get_workspace(type, level);
 
-	btrfs_compress_op[type]->set_level(workspace, type_level);
+	btrfs_compress_op[type]->set_level(workspace, level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
@@ -1035,7 +1037,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int ret;
 	int type = cb->compress_type;
 
-	workspace = get_workspace(type);
+	workspace = get_workspace(type, 0);
 	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
 	put_workspace(type, workspace);
 
@@ -1053,13 +1055,12 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = get_workspace(type);
-
+	workspace = get_workspace(type, 0);
 	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
-
 	put_workspace(type, workspace);
+
 	return ret;
 }
 
@@ -1487,7 +1488,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = get_workspace(0);
+	struct list_head *ws_list = get_workspace(0, 0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 8422e6ce4838..32ba40ebae1d 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -121,7 +121,8 @@ struct workspace_manager {
 
 void btrfs_init_workspace_manager(struct workspace_manager *wsm,
 				  const struct btrfs_compress_op *ops);
-struct list_head *btrfs_get_workspace(struct workspace_manager *wsm);
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
+				      unsigned int level);
 void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws);
 void btrfs_cleanup_workspace_manager(struct workspace_manager *wsm);
 
@@ -130,11 +131,11 @@ struct btrfs_compress_op {
 
 	void (*cleanup_workspace_manager)(void);
 
-	struct list_head *(*get_workspace)(void);
+	struct list_head *(*get_workspace)(unsigned int level);
 
 	void (*put_workspace)(struct list_head *ws);
 
-	struct list_head *(*alloc_workspace)(void);
+	struct list_head *(*alloc_workspace)(unsigned int level);
 
 	void (*free_workspace)(struct list_head *workspace);
 
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index f0837b2c8e94..f132af45a924 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -73,9 +73,9 @@ static void lzo_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *lzo_get_workspace(void)
+static struct list_head *lzo_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void lzo_put_workspace(struct list_head *ws)
@@ -93,7 +93,7 @@ static void lzo_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *lzo_alloc_workspace(void)
+static struct list_head *lzo_alloc_workspace(unsigned int level)
 {
 	struct workspace *workspace;
 
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 773d1d70ceec..c0ef09bef3b3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -39,9 +39,9 @@ static void zlib_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *zlib_get_workspace(void)
+static struct list_head *zlib_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void zlib_put_workspace(struct list_head *ws)
@@ -58,7 +58,7 @@ static void zlib_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *zlib_alloc_workspace(void)
+static struct list_head *zlib_alloc_workspace(unsigned int level)
 {
 	struct workspace *workspace;
 	int workspacesize;
@@ -71,6 +71,7 @@ static struct list_head *zlib_alloc_workspace(void)
 			zlib_inflate_workspacesize());
 	workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
 	if (!workspace->strm.workspace || !workspace->buf)
 		goto fail;
 
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index b06eaf171be7..404101864220 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -53,9 +53,9 @@ static void zstd_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *zstd_get_workspace(void)
+static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void zstd_put_workspace(struct list_head *ws)
@@ -72,7 +72,7 @@ static void zstd_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *zstd_alloc_workspace(void)
+static struct list_head *zstd_alloc_workspace(unsigned int level)
 {
 	ZSTD_parameters params =
 			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (7 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 08/12] btrfs: plumb level through the compression interface Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-05 19:06   ` David Sterba
  2019-02-06 16:48   ` [PATCH v2 " Dennis Zhou
  2019-02-04 20:20 ` [PATCH 10/12] btrfs: zstd use the passed through level instead of default Dennis Zhou
                   ` (4 subsequent siblings)
  13 siblings, 2 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Currently, the only user of set_level() is zlib, which sets an internal
workspace parameter. As level is now plumbed into get_workspace(), this
can be handled there rather than separately.

This repurposes set_level() to bound the level passed in so it can be
used both when setting the mount's compression level and when verifying
the level before getting a workspace. The other benefit is that this
separates the meaning of compress(0) and get_workspace(0): the former
means we want to use the default compression level of the compression
type, while the latter means we can use any available workspace.
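
As a rough illustration of the new contract (a userspace toy, not part
of the patch), using zlib's bounds as an example:

#include <stdio.h>

#define BTRFS_ZLIB_DEFAULT_LEVEL 3

/* Toy model: 0 maps to the default, everything else is clamped to 9. */
static unsigned int zlib_set_level(unsigned int level)
{
	if (!level)
		return BTRFS_ZLIB_DEFAULT_LEVEL;
	return level < 9 ? level : 9;
}

int main(void)
{
	/* prints "3 5 9": default, in range, clamped */
	printf("%u %u %u\n", zlib_set_level(0), zlib_set_level(5),
	       zlib_set_level(42));
	return 0;
}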

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 21 +++++++++++++--------
 fs/btrfs/compression.h | 10 ++++++++--
 fs/btrfs/lzo.c         |  3 ++-
 fs/btrfs/super.c       |  4 +++-
 fs/btrfs/zlib.c        | 18 ++++++++++--------
 fs/btrfs/zstd.c        |  3 ++-
 6 files changed, 38 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index a54b210739bc..dc4afd7d9027 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1007,8 +1007,6 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	int ret;
 
 	workspace = get_workspace(type, level);
-
-	btrfs_compress_op[type]->set_level(workspace, level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
@@ -1561,14 +1559,21 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	return ret;
 }
 
-unsigned int btrfs_compress_str2level(const char *str)
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
 {
-	if (strncmp(str, "zlib", 4) != 0)
+	unsigned int level = 0;
+	int ret;
+
+	if (!type)
 		return 0;
 
-	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
-	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
-		return str[5] - '0';
+	if (str[0] == ':') {
+		ret = kstrtouint(str + 1, 10, &level);
+		if (ret)
+			level = 0;
+	}
+
+	level = btrfs_compress_op[type]->set_level(level);
 
-	return BTRFS_ZLIB_DEFAULT_LEVEL;
+	return level;
 }
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 32ba40ebae1d..3f2d36b0aabe 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -97,7 +97,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
 blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 				 int mirror_num, unsigned long bio_flags);
 
-unsigned btrfs_compress_str2level(const char *str);
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str);
 
 enum btrfs_compression_type {
 	BTRFS_COMPRESS_NONE  = 0,
@@ -156,7 +156,13 @@ struct btrfs_compress_op {
 			  unsigned long start_byte,
 			  size_t srclen, size_t destlen);
 
-	void (*set_level)(struct list_head *ws, unsigned int type);
+	/*
+	 * This bounds the level set by the user to be within range of
+	 * a particular compression type.  It returns the level that
+	 * will be used if the level is out of bounds or the default
+	 * if 0 is passed in.
+	 */
+	unsigned int (*set_level)(unsigned int level);
 };
 
 /* the heuristic workspaces are managed via the 0th workspace manager */
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index f132af45a924..579d53ae256f 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -507,8 +507,9 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void lzo_set_level(struct list_head *ws, unsigned int type)
+static unsigned int lzo_set_level(unsigned int level)
 {
+	return 0;
 }
 
 const struct btrfs_compress_op btrfs_lzo_compress = {
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index c5586ffd1426..b28dff207383 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -529,7 +529,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				if (token != Opt_compress &&
 				    token != Opt_compress_force)
 					info->compress_level =
-					  btrfs_compress_str2level(args[0].from);
+					  btrfs_compress_str2level(
+							BTRFS_COMPRESS_ZLIB,
+							args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index c0ef09bef3b3..77363f663868 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -41,7 +41,12 @@ static void zlib_cleanup_workspace_manager(void)
 
 static struct list_head *zlib_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm, level);
+	struct list_head *ws = btrfs_get_workspace(&wsm, level);
+	struct workspace *workspace = list_entry(ws, struct workspace, list);
+
+	workspace->level = level;
+
+	return ws;
 }
 
 static void zlib_put_workspace(struct list_head *ws)
@@ -413,15 +418,12 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zlib_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zlib_set_level(unsigned int level)
 {
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	unsigned int level = btrfs_compress_level(type);
-
-	if (level > 9)
-		level = 9;
+	if (!level)
+		return BTRFS_ZLIB_DEFAULT_LEVEL;
 
-	workspace->level = level > 0 ? level : 3;
+	return min_t(unsigned int, level, 9);
 }
 
 const struct btrfs_compress_op btrfs_zlib_compress = {
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 404101864220..43f3be755b8c 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -441,8 +441,9 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zstd_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zstd_set_level(unsigned int level)
 {
+	return ZSTD_BTRFS_DEFAULT_LEVEL;
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 10/12] btrfs: zstd use the passed through level instead of default
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (8 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 09/12] btrfs: change set_level() to bound the level passed in Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 11/12] btrfs: make zstd memory requirements monotonic Dennis Zhou
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Zstd currently only supports the default level of compression. This
patch switches to using the level passed in for btrfs zstd
configuration.

Zstd workspaces now keep track of the requested level, as this can
differ from the level the workspace was actually allocated for (and
therefore from its size).
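
A rough sketch of the distinction (illustrative only, not the patch
itself):

#include <stdio.h>

/*
 * A workspace sized for level 15 can serve a level-3 request, so the
 * compression parameters must come from the requested level rather
 * than the level the workspace was allocated for.
 */
struct workspace {
	unsigned int level;	/* level the buffers were sized for */
	unsigned int req_level;	/* level the current caller asked for */
};

int main(void)
{
	struct workspace ws = { .level = 15, .req_level = 0 };

	ws.req_level = 3;	/* reuse the larger workspace for level 3 */
	printf("compress at level %u in a level-%u sized workspace\n",
	       ws.req_level, ws.level);
	return 0;
}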

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/zstd.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 43f3be755b8c..a951d4fe77f7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -21,10 +21,10 @@
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
 
-static ZSTD_parameters zstd_get_btrfs_parameters(size_t src_len)
+static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
+						 size_t src_len)
 {
-	ZSTD_parameters params = ZSTD_getParams(ZSTD_BTRFS_DEFAULT_LEVEL,
-						src_len, 0);
+	ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);
 
 	if (params.cParams.windowLog > ZSTD_BTRFS_MAX_WINDOWLOG)
 		params.cParams.windowLog = ZSTD_BTRFS_MAX_WINDOWLOG;
@@ -36,6 +36,7 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int req_level;
 	struct list_head list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
@@ -55,7 +56,12 @@ static void zstd_cleanup_workspace_manager(void)
 
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm, level);
+	struct list_head *ws = btrfs_get_workspace(&wsm, level);
+	struct workspace *workspace = list_entry(ws, struct workspace, list);
+
+	workspace->req_level = level;
+
+	return ws;
 }
 
 static void zstd_put_workspace(struct list_head *ws)
@@ -75,7 +81,7 @@ static void zstd_free_workspace(struct list_head *ws)
 static struct list_head *zstd_alloc_workspace(unsigned int level)
 {
 	ZSTD_parameters params =
-			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
+			zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT);
 	struct workspace *workspace;
 
 	workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
@@ -117,7 +123,8 @@ static int zstd_compress_pages(struct list_head *ws,
 	unsigned long len = *total_out;
 	const unsigned long nr_dest_pages = *out_pages;
 	unsigned long max_out = nr_dest_pages * PAGE_SIZE;
-	ZSTD_parameters params = zstd_get_btrfs_parameters(len);
+	ZSTD_parameters params = zstd_get_btrfs_parameters(workspace->req_level,
+							   len);
 
 	*out_pages = 0;
 	*total_out = 0;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 11/12] btrfs: make zstd memory requirements monotonic
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (9 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 10/12] btrfs: zstd use the passed through level instead of default Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-04 20:20 ` [PATCH 12/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

It is possible, based on the level configurations, that a higher level
workspace uses less memory than a lower level one. In order to reuse
workspaces, the relationship between level and required memory must be
made monotonic. This precomputes the required memory for each level and
enforces that monotonicity. This is also done in upstream zstd in [1].

[1] https://github.com/facebook/zstd/commit/a68b76afefec6876f8e8a538155109a5aeac0143
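
The enforcement is just a running maximum over the per-level
requirements; a self-contained sketch with made-up sizes:

#include <stdio.h>

int main(void)
{
	/* hypothetical raw per-level needs; level 3 dips below level 2 */
	size_t raw[5] = { 700, 1004, 900, 1260, 1100 };
	size_t bound[5], max = 0;
	int i;

	for (i = 0; i < 5; i++) {
		if (raw[i] > max)
			max = raw[i];
		bound[i] = max;	/* a higher level never needs less memory */
		printf("level %d: raw %zu -> bound %zu\n", i + 1, raw[i],
		       bound[i]);
	}
	return 0;
}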

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Nick Terrell <terrelln@fb.com>
---
 fs/btrfs/zstd.c | 38 +++++++++++++++++++++++++++++++++-----
 1 file changed, 33 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index a951d4fe77f7..2231123fedbe 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -20,6 +20,7 @@
 #define ZSTD_BTRFS_MAX_WINDOWLOG 17
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
+#define ZSTD_BTRFS_MAX_LEVEL 15
 
 static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
 						 size_t src_len)
@@ -44,8 +45,39 @@ struct workspace {
 
 static struct workspace_manager wsm;
 
+static size_t zstd_ws_mem_sizes[ZSTD_BTRFS_MAX_LEVEL];
+
+/*
+ * zstd_calc_ws_mem_sizes - calculate monotonic memory bounds
+ *
+ * It is possible based on the level configurations that a higher level
+ * workspace uses less memory than a lower level workspace.  In order to
+ * reuse workspaces, this must be made a monotonic relationship.  This
+ * precomputes the required memory for each level and enforces the
+ * monotonicity between level and memory required.
+ */
+static void zstd_calc_ws_mem_sizes(void)
+{
+	size_t max_size = 0;
+	unsigned int level;
+
+	for (level = 1; level <= ZSTD_BTRFS_MAX_LEVEL; level++) {
+		ZSTD_parameters params =
+			zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT);
+		size_t level_size =
+			max_t(size_t,
+			      ZSTD_CStreamWorkspaceBound(params.cParams),
+			      ZSTD_DStreamWorkspaceBound(ZSTD_BTRFS_MAX_INPUT));
+
+		max_size = max_t(size_t, max_size, level_size);
+		zstd_ws_mem_sizes[level - 1] = max_size;
+	}
+}
+
 static void zstd_init_workspace_manager(void)
 {
+	zstd_calc_ws_mem_sizes();
+
 	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
 }
 
@@ -80,17 +112,13 @@ static void zstd_free_workspace(struct list_head *ws)
 
 static struct list_head *zstd_alloc_workspace(unsigned int level)
 {
-	ZSTD_parameters params =
-			zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT);
 	struct workspace *workspace;
 
 	workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
 	if (!workspace)
 		return ERR_PTR(-ENOMEM);
 
-	workspace->size = max_t(size_t,
-			ZSTD_CStreamWorkspaceBound(params.cParams),
-			ZSTD_DStreamWorkspaceBound(ZSTD_BTRFS_MAX_INPUT));
+	workspace->size = zstd_ws_mem_sizes[level - 1];
 	workspace->mem = kvmalloc(workspace->size, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
 	if (!workspace->mem || !workspace->buf)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 12/12] btrfs: add zstd compression level support
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (10 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 11/12] btrfs: make zstd memory requirements monotonic Dennis Zhou
@ 2019-02-04 20:20 ` Dennis Zhou
  2019-02-05 19:26   ` [PATCH v2 " Dennis Zhou
  2019-02-06 16:47   ` [PATCH v3 " Dennis Zhou
  2019-02-05 14:57 ` [PATCH v2 00/12] " David Sterba
  2019-02-07 16:59 ` David Sterba
  13 siblings, 2 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-04 20:20 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow for each
compression type to manage its workspaces independently. This patch
uses this indirection to implement compression level support for zstd.

To manage the additional memory required, each compression level has its
own queue of workspaces. A global LRU is used to help with reclaim.
Reclaim is done via a timer which provides a mechanism to decrease
memory utilization by keeping only workspaces around that are sized
appropriately. Forward progress is guaranteed by a preallocated max
workspace hidden from the LRU.

When getting a workspace, a bitmap is used to identify the populated
levels, scanning upward from the requested level. If a workspace of a
higher level is found, it is used, but its last_used time and place in
the LRU are not updated. If we hit memory pressure, we sleep on the max
level workspace. We continue to rescan in case a smaller workspace
becomes available, but eventually we should be able to obtain the max
level workspace or allocate one again once memory pressure subsides.

The memory requirement for decompression is the same as for level 1, so
decompression can use any available workspace.

The number of workspaces is bounded by the workqueue limit, which is
currently 2 (the percpu limit). The reclaim timer frees
inactive/improperly sized workspaces and is set to 307s to avoid
colliding with transaction commit (every 30s).
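
The scan-up over the level bitmap amounts to finding the first
populated level at or above the requested one. A minimal standalone
sketch of that idea (names are hypothetical, not the patch's code):

#include <stdio.h>

#define MAX_LEVEL 15

/* Return the first populated level >= the requested one, or -1. */
static int find_level(unsigned long active_map, int level)
{
	int i;

	for (i = level; i <= MAX_LEVEL; i++)
		if (active_map & (1UL << (i - 1)))
			return i;
	return -1;	/* nothing idle at or above; allocate instead */
}

int main(void)
{
	/* levels 3 and 15 have idle workspaces */
	unsigned long map = (1UL << 2) | (1UL << 14);

	printf("want 1 -> use %d\n", find_level(map, 1));	/* 3 */
	printf("want 7 -> use %d\n", find_level(map, 7));	/* 15 */
	return 0;
}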

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)  Memory (KB)
1       2.658        438.47                910.51            780
2       2.744        364.86                886.55           1004
3       2.801        336.33                828.41           1260
4       2.858        286.71                886.55           1260
5       2.916        212.77                556.84           1388
6       2.363        119.82                990.85           1516
7       3.000        154.06                849.30           1516
8       3.011        159.54                875.03           1772
9       3.025        100.51                940.15           1772
10      3.033        118.97                616.26           1772
11      3.036         94.19                802.11           1772
12      3.037         73.45                931.49           1772
13      3.041         55.17                835.26           2284
14      3.087         44.70                716.78           2547
15      3.126         37.30                878.84           2547

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Omar Sandoval <osandov@osandov.com>
---
v2:
- increase reclaim timer to 307s from 67s
- move from keeping track of time in ns to jiffies
- remove timer in cleanup code
- use min_t() instead of if statements in .set_level()
- add header text to describe how workspaces are managed
- nofs_flag type -> unsigned long to unsigned int

 fs/btrfs/super.c |   6 +-
 fs/btrfs/zstd.c  | 243 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 240 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index b28dff207383..0ecc513cb56c 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
 				btrfs_set_fs_incompat(info, COMPRESS_LZO);
 				no_compress = 0;
-			} else if (strcmp(args[0].from, "zstd") == 0) {
+			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
 				compress_type = "zstd";
 				info->compress_type = BTRFS_COMPRESS_ZSTD;
+				info->compress_level =
+					btrfs_compress_str2level(
+							 BTRFS_COMPRESS_ZSTD,
+							 args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 2231123fedbe..c1de3649a7ae 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -3,24 +3,45 @@
  * Copyright (c) 2016-present, Facebook, Inc.
  * All rights reserved.
  *
+ * Zstd Workspace Management:
+ * Zstd workspaces have different memory requirements depending on the level.
+ * The zstd workspaces are managed by having individual lists for each level
+ * and a global lru.  Forward progress is maintained by protecting a max level
+ * workspace.
+ *
+ * Getting a workspace is done by using the bitmap to identify the levels that
+ * have available workspaces and scans up.  This lets us recycle higher level
+ * workspaces because of the monotonic memory guarantee.  A workspace's
+ * last_used is only updated if it is being used by the corresponding memory
+ * level.  Putting a workspace involves adding it back to the appropriate places
+ * and adding it back to the lru if necessary.
+ *
+ * A timer is used to reclaim workspaces if they have not been used for
+ * ZSTD_BTRFS_RECLAIM_JIFFIES.  This helps keep only active workspaces around.
+ * The upper bound is provided by the workqueue limit which is 2 (percpu limit).
  */
 
 #include <linux/bio.h>
+#include <linux/bitmap.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/sched/mm.h>
 #include <linux/pagemap.h>
 #include <linux/refcount.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/zstd.h>
 #include "compression.h"
+#include "ctree.h"
 
 #define ZSTD_BTRFS_MAX_WINDOWLOG 17
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
 #define ZSTD_BTRFS_MAX_LEVEL 15
+/* 307s to avoid pathologically clashing with transaction commit */
+#define ZSTD_BTRFS_RECLAIM_JIFFIES (307 * HZ)
 
 static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
 						 size_t src_len)
@@ -37,16 +58,81 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int level;
 	unsigned int req_level;
+	unsigned long last_used; /* jiffies */
 	struct list_head list;
+	struct list_head lru_list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
 };
 
-static struct workspace_manager wsm;
+struct zstd_workspace_manager {
+	const struct btrfs_compress_op *ops;
+	spinlock_t lock;
+	struct list_head lru_list;
+	struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
+	unsigned long active_map;
+	wait_queue_head_t wait;
+	struct timer_list timer;
+};
+
+static struct zstd_workspace_manager wsm;
 
 static size_t zstd_ws_mem_sizes[ZSTD_BTRFS_MAX_LEVEL];
 
+static inline struct workspace *list_to_workspace(struct list_head *list)
+{
+	return container_of(list, struct workspace, list);
+}
+
+/*
+ * zstd_reclaim_timer_fn - reclaim timer
+ * @t: timer
+ *
+ * This scans the lru_list and attempts to reclaim any workspace that hasn't
+ * been used for ZSTD_BTRFS_RECLAIM_JIFFIES.
+ */
+static void zstd_reclaim_timer_fn(struct timer_list *t)
+{
+	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
+	struct list_head *pos, *next;
+
+	spin_lock(&wsm.lock);
+
+	if (list_empty(&wsm.lru_list)) {
+		spin_unlock(&wsm.lock);
+		return;
+	}
+
+	list_for_each_prev_safe(pos, next, &wsm.lru_list) {
+		struct workspace *victim = container_of(pos, struct workspace,
+							lru_list);
+		unsigned int level;
+
+		if (time_after(victim->last_used, reclaim_threshold))
+			break;
+
+		/* workspace is in use */
+		if (victim->req_level)
+			continue;
+
+		level = victim->level;
+		list_del(&victim->lru_list);
+		list_del(&victim->list);
+		wsm.ops->free_workspace(&victim->list);
+
+		if (list_empty(&wsm.idle_ws[level - 1]))
+			clear_bit(level - 1, &wsm.active_map);
+
+	}
+
+	if (!list_empty(&wsm.lru_list))
+		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+
+	spin_unlock(&wsm.lock);
+}
+
 /*
  * zstd_calc_ws_mem_sizes - calculate monotonic memory bounds
  *
@@ -76,29 +162,163 @@ static void zstd_calc_ws_mem_sizes(void)
 
 static void zstd_init_workspace_manager(void)
 {
+	struct list_head *ws;
+	int i;
+
 	zstd_calc_ws_mem_sizes();
 
-	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+	wsm.ops = &btrfs_zstd_compress;
+	spin_lock_init(&wsm.lock);
+	init_waitqueue_head(&wsm.wait);
+	timer_setup(&wsm.timer, zstd_reclaim_timer_fn, 0);
+
+	INIT_LIST_HEAD(&wsm.lru_list);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++)
+		INIT_LIST_HEAD(&wsm.idle_ws[i]);
+
+	ws = wsm.ops->alloc_workspace(ZSTD_BTRFS_MAX_LEVEL);
+	if (IS_ERR(ws)) {
+		pr_warn("BTRFS: cannot preallocate zstd compression workspace\n");
+	} else {
+		set_bit(ZSTD_BTRFS_MAX_LEVEL - 1, &wsm.active_map);
+		list_add(ws, &wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1]);
+	}
 }
 
 static void zstd_cleanup_workspace_manager(void)
 {
-	btrfs_cleanup_workspace_manager(&wsm);
+	struct workspace *workspace;
+	int i;
+
+	del_timer(&wsm.timer);
+
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++) {
+		while (!list_empty(&wsm.idle_ws[i])) {
+			workspace = container_of(wsm.idle_ws[i].next,
+						 struct workspace, list);
+			list_del(&workspace->list);
+			list_del(&workspace->lru_list);
+			wsm.ops->free_workspace(&workspace->list);
+		}
+	}
+}
+
+/*
+ * zstd_find_workspace - find workspace
+ * @level: compression level
+ *
+ * This iterates over the set bits in the active_map beginning at the requested
+ * compression level.  This lets us utilize already allocated workspaces before
+ * allocating a new one.  If the workspace is of a larger size, it is used, but
+ * the place in the lru_list and last_used times are not updated.  This is to
+ * offer the opportunity to reclaim the workspace in favor of allocating an
+ * appropriately sized one in the future.
+ */
+static struct list_head *zstd_find_workspace(unsigned int level)
+{
+	struct list_head *ws;
+	struct workspace *workspace;
+	int i = level - 1;
+
+	spin_lock(&wsm.lock);
+	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
+		if (!list_empty(&wsm.idle_ws[i])) {
+			ws = wsm.idle_ws[i].next;
+			workspace = list_to_workspace(ws);
+			list_del_init(ws);
+			/* keep its place if it's a lower level using this */
+			workspace->req_level = level;
+			if (level == workspace->level)
+				list_del(&workspace->lru_list);
+			if (list_empty(&wsm.idle_ws[i]))
+				clear_bit(i, &wsm.active_map);
+			spin_unlock(&wsm.lock);
+			return ws;
+		}
+	}
+	spin_unlock(&wsm.lock);
+
+	return NULL;
 }
 
+/*
+ * zstd_get_workspace - zstd's get_workspace
+ * @level: compression level
+ *
+ * If @level is 0, then any compression level can be used.  Therefore, we begin
+ * scanning from 1.  We first scan through possible workspaces and then after
+ * attempt to allocate a new workspace.  If we fail to allocate one due to
+ * memory pressure, go to sleep waiting for the max level workspace to free up.
+ */
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	struct list_head *ws = btrfs_get_workspace(&wsm, level);
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
+	struct list_head *ws;
+	unsigned int nofs_flag;
 
-	workspace->req_level = level;
+	/* level == 0 means we can use any workspace */
+	if (!level)
+		level = 1;
+
+again:
+	ws = zstd_find_workspace(level);
+	if (ws)
+		return ws;
+
+	nofs_flag = memalloc_nofs_save();
+	ws = wsm.ops->alloc_workspace(level);
+	memalloc_nofs_restore(nofs_flag);
+
+	if (IS_ERR(ws)) {
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&wsm.wait, &wait, TASK_UNINTERRUPTIBLE);
+		schedule();
+		finish_wait(&wsm.wait, &wait);
+
+		goto again;
+	}
 
 	return ws;
 }
 
+/*
+ * zstd_put_workspace - zstd put_workspace
+ * @ws: list_head for the workspace
+ *
+ * When putting back a workspace, we only need to update the LRU if we are of
+ * the requested compression level.  Here is where we continue to protect the
+ * max level workspace or update last_used accordingly.  If the reclaim timer
+ * isn't set, it is also set here.  Only the max level workspace tries and wakes
+ * up waiting workspaces.
+ */
 static void zstd_put_workspace(struct list_head *ws)
 {
-	btrfs_put_workspace(&wsm, ws);
+	struct workspace *workspace = list_to_workspace(ws);
+
+	spin_lock(&wsm.lock);
+
+	/* a node is only taken off the lru if we are the corresponding level */
+	if (workspace->req_level == workspace->level) {
+		/* hide a max level workspace from reclaim */
+		if (list_empty(&wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1])) {
+			INIT_LIST_HEAD(&workspace->lru_list);
+		} else {
+			workspace->last_used = jiffies;
+			list_add(&workspace->lru_list, &wsm.lru_list);
+			if (!timer_pending(&wsm.timer))
+				mod_timer(&wsm.timer,
+					  jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+		}
+	}
+
+	set_bit(workspace->level - 1, &wsm.active_map);
+	list_add(&workspace->list, &wsm.idle_ws[workspace->level - 1]);
+	workspace->req_level = 0;
+
+	spin_unlock(&wsm.lock);
+
+	if (workspace->level == ZSTD_BTRFS_MAX_LEVEL)
+		cond_wake_up(&wsm.wait);
 }
 
 static void zstd_free_workspace(struct list_head *ws)
@@ -121,10 +341,14 @@ static struct list_head *zstd_alloc_workspace(unsigned int level)
 	workspace->size = zstd_ws_mem_sizes[level - 1];
 	workspace->mem = kvmalloc(workspace->size, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
+	workspace->req_level = level;
+	workspace->last_used = jiffies;
 	if (!workspace->mem || !workspace->buf)
 		goto fail;
 
 	INIT_LIST_HEAD(&workspace->list);
+	INIT_LIST_HEAD(&workspace->lru_list);
 
 	return &workspace->list;
 fail:
@@ -478,7 +702,10 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 
 static unsigned int zstd_set_level(unsigned int level)
 {
-	return ZSTD_BTRFS_DEFAULT_LEVEL;
+	if (!level)
+		return ZSTD_BTRFS_DEFAULT_LEVEL;
+
+	return min_t(unsigned int, level, ZSTD_BTRFS_MAX_LEVEL);
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (11 preceding siblings ...)
  2019-02-04 20:20 ` [PATCH 12/12] btrfs: add zstd compression level support Dennis Zhou
@ 2019-02-05 14:57 ` David Sterba
  2019-02-05 16:03   ` Dennis Zhou
  2019-02-07 16:59 ` David Sterba
  13 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 14:57 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> Hi everyone,
> 
> V2 had only a handful of changes outside of minor feedback.
> 0001:
> - use functions over macros
> 0003:
> - BTRFS_NR_WORKSPACE_MANAGERS is added instead of overriding
>   BTRFS_COMPRESS_TYPES
> 0011 (new):
> - address monotonic memory requirement for zstd workspaces
> 0012:
> - increase reclaim timer to 307s from 67s
> - move from keeping track of time in ns to jiffies
> - remove timer in cleanup code
> - use min_t() instead of if statements in .set_level()
> - add header text to describe how workspaces are managed
> - nofs_flag type -> unsigned long to unsigned int

Something is wrong, the patchset on top of 5.0-rc5 hangs in test
btrfs/007, without a stacktrace. V1 was fine and I double checked that
rc5 itself is fine.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 14:57 ` [PATCH v2 00/12] " David Sterba
@ 2019-02-05 16:03   ` Dennis Zhou
  2019-02-05 16:27     ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 16:03 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 03:57:53PM +0100, David Sterba wrote:
> On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> > Hi everyone,
> > 
> > V2 had only a handful of changes outside of minor feedback.
> > 0001:
> > - use functions over macros
> > 0003:
> > - BTRFS_NR_WORKSPACE_MANAGERS is added instead of overriding
> >   BTRFS_COMPRESS_TYPES
> > 0011 (new):
> > - address monotonic memory requirement for zstd workspaces
> > 0012:
> > - increase reclaim timer to 307s from 67s
> > - move from keeping track of time in ns to jiffies
> > - remove timer in cleanup code
> > - use min_t() instead of if statements in .set_level()
> > - add header text to describe how workspaces are managed
> > - nofs_flag type -> unsigned long to unsigned int
> 
> Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> btrfs/007, without a stacktrace. V1 was fine and I double checked that
> rc5 itself is fine.

Hmmm, well that's awkward. I ran xfstests and that test passed on my
machine. I'll figure out the delta and submit a v3. Thanks David!

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 16:03   ` Dennis Zhou
@ 2019-02-05 16:27     ` David Sterba
  2019-02-05 16:30       ` Dennis Zhou
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 16:27 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Feb 05, 2019 at 11:03:02AM -0500, Dennis Zhou wrote:
> On Tue, Feb 05, 2019 at 03:57:53PM +0100, David Sterba wrote:
> > On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> > > Hi everyone,
> > > 
> > > V2 had only a handful of changes outside of minor feedback.
> > > 0001:
> > > - use functions over macros
> > > 0003:
> > > - BTRFS_NR_WORKSPACE_MANAGERS is added instead of overriding
> > >   BTRFS_COMPRESS_TYPES
> > > 0011 (new):
> > > - address monotonic memory requirement for zstd workspaces
> > > 0012:
> > > - increase reclaim timer to 307s from 67s
> > > - move from keeping track of time in ns to jiffies
> > > - remove timer in cleanup code
> > > - use min_t() instead of if statements in .set_level()
> > > - add header text to describe how workspaces are managed
> > > - nofs_flag type -> unsigned long to unsigned int
> > 
> > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > rc5 itself is fine.
> 
> Hmmm, well that's awkward. I ran xfstests and that test passed on my
> machine. I'll figure out the delta and submit a v3. Thanks David!

It failed on a VM and 2 other physical machines, so it's not something
related to the testing setup.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 16:27     ` David Sterba
@ 2019-02-05 16:30       ` Dennis Zhou
  2019-02-05 16:51         ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 16:30 UTC (permalink / raw)
  To: David Sterba
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Feb 05, 2019 at 05:27:34PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 11:03:02AM -0500, Dennis Zhou wrote:
> > On Tue, Feb 05, 2019 at 03:57:53PM +0100, David Sterba wrote:
> > > On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> > > > Hi everyone,
> > > > 
> > > > V2 had only a handful of changes outside of minor feedback.
> > > > 0001:
> > > > - use functions over macros
> > > > 0003:
> > > > - BTRFS_NR_WORKSPACE_MANAGERS is added instead of overriding
> > > >   BTRFS_COMPRESS_TYPES
> > > > 0011 (new):
> > > > - address monotonic memory requirement for zstd workspaces
> > > > 0012:
> > > > - increase reclaim timer to 307s from 67s
> > > > - move from keeping track of time in ns to jiffies
> > > > - remove timer in cleanup code
> > > > - use min_t() instead of if statements in .set_level()
> > > > - add header text to describe how workspaces are managed
> > > > - nofs_flag type -> unsigned long to unsigned int
> > > 
> > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > rc5 itself is fine.
> > 
> > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > machine. I'll figure out the delta and submit a v3. Thanks David!
> 
> It failed on a VM and 2 other physical machines, so it's not something
> related to the testing setup.

Oh, I wasn't alluding to that. I'm more than sure it's my fault
somewhere along the line.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 16:30       ` Dennis Zhou
@ 2019-02-05 16:51         ` David Sterba
  2019-02-05 17:07           ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 16:51 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Feb 05, 2019 at 11:30:12AM -0500, Dennis Zhou wrote:
> > > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > > rc5 itself is fine.
> > > 
> > > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > > machine. I'll figure out the delta and submit a v3. Thanks David!
> > 
> > It failed on a VM and 2 other physical machines, so it's not something
> > related to the testing setup.
> 
> Oh, I wasn't alluding to that. I'm more than sure it's my fault
> somewhere along the line.

Hang on, it's probably caused by fstests, as fssum is busy waiting
somewhere, so it's not even kernel code.

fstests a4235f29f780b15d9e87c0f2446a53b9e69c5e2d -- ok
fstests 6d17c9076fe9d97d2411e5909e6b255e5e721fc3 (master) -- hangs

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 16:51         ` David Sterba
@ 2019-02-05 17:07           ` David Sterba
  2019-02-05 18:27             ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 17:07 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 05:51:13PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 11:30:12AM -0500, Dennis Zhou wrote:
> > > > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > > > rc5 itself is fine.
> > > > 
> > > > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > > > machine. I'll figure out the delta and submit a v3. Thanks David!
> > > 
> > > It failed on a VM and 2 other physical machines, so it's not something
> > > related to the testing setup.
> > 
> > Oh, I wasn't alluding to that. I'm more than sure it's my fault
> > somewhere along the line.
> 
> Hang on, it's probably caused by fstests, as fssum is busy waiting
> somewhere, so it's not even kernel code.
> 
> fstests a4235f29f780b15d9e87c0f2446a53b9e69c5e2d -- ok
> fstests 6d17c9076fe9d97d2411e5909e6b255e5e721fc3 (master) -- hangs

Git bisect points to

commit 04b405fa1f8d0cacde489f3d1c8771dfe7a34dc0
Author: Zorro Lang <zlang@redhat.com>
Date:   Wed Jan 23 15:34:55 2019 +0800

    common/dump: disable splice from FSSTRESS_AVOID

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 17:07           ` David Sterba
@ 2019-02-05 18:27             ` David Sterba
  2019-02-05 18:30               ` Dennis Zhou
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 18:27 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 06:07:49PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 05:51:13PM +0100, David Sterba wrote:
> > On Tue, Feb 05, 2019 at 11:30:12AM -0500, Dennis Zhou wrote:
> > > > > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > > > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > > > > rc5 itself is fine.
> > > > > 
> > > > > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > > > > machine. I'll figure out the delta and submit a v3. Thanks David!
> > > > 
> > > > It failed on a VM and 2 other physical machines, so it's not something
> > > > related to the testing setup.
> > > 
> > > Oh, I wasn't alluding to that. I'm more than sure it's my fault
> > > somewhere along the line.
> > 
> > Hang on, it's probably caused by fstests, as fssum is busy waiting
> > somewhere, so it's not even kernel code.
> > 
> > fstests a4235f29f780b15d9e87c0f2446a53b9e69c5e2d -- ok
> > fstests 6d17c9076fe9d97d2411e5909e6b255e5e721fc3 (master) -- hangs
> 
> Git bisect points to
> 
> commit 04b405fa1f8d0cacde489f3d1c8771dfe7a34dc0
> Author: Zorro Lang <zlang@redhat.com>
> Date:   Wed Jan 23 15:34:55 2019 +0800
> 
>     common/dump: disable splice from FSSTRESS_AVOID

So with this commit reverted, test btrfs/007 does not hang anymore.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 18:27             ` David Sterba
@ 2019-02-05 18:30               ` Dennis Zhou
  2019-02-05 20:48                 ` Dennis Zhou
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 18:30 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 07:27:49PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 06:07:49PM +0100, David Sterba wrote:
> > On Tue, Feb 05, 2019 at 05:51:13PM +0100, David Sterba wrote:
> > > On Tue, Feb 05, 2019 at 11:30:12AM -0500, Dennis Zhou wrote:
> > > > > > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > > > > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > > > > > rc5 itself is fine.
> > > > > > 
> > > > > > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > > > > > machine. I'll figure out the delta and submit a v3. Thanks David!
> > > > > 
> > > > > It failed on a VM and 2 other physical machines, so it's not something
> > > > > related to the testing setup.
> > > > 
> > > > Oh, I wasn't alluding to that. I'm more than sure it's my fault
> > > > somewhere along the line.
> > > 
> > > Hang on, it's probably caused by fstests, as fssum is busy waiting
> > > somewhere, so it's not even kernel code.
> > > 
> > > fstests a4235f29f780b15d9e87c0f2446a53b9e69c5e2d -- ok
> > > fstests 6d17c9076fe9d97d2411e5909e6b255e5e721fc3 (master) -- hangs
> > 
> > Git bisect points to
> > 
> > commit 04b405fa1f8d0cacde489f3d1c8771dfe7a34dc0
> > Author: Zorro Lang <zlang@redhat.com>
> > Date:   Wed Jan 23 15:34:55 2019 +0800
> > 
> >     common/dump: disable splice from FSSTRESS_AVOID
> 
> So with this commit reverted, test btrfs/007 does not hang anymore.

Ok great! I'm going to add a v2 for 0012 that takes the spin_lock in
the cleanup path, just to be safe. I should have that ready in a few
minutes.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-04 20:20 ` [PATCH 09/12] btrfs: change set_level() to bound the level passed in Dennis Zhou
@ 2019-02-05 19:06   ` David Sterba
  2019-02-05 19:32     ` Dennis Zhou
  2019-02-06 16:48   ` [PATCH v2 " Dennis Zhou
  1 sibling, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 19:06 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Mon, Feb 04, 2019 at 03:20:05PM -0500, Dennis Zhou wrote:
> -unsigned int btrfs_compress_str2level(const char *str)
> +unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
>  {
> -	if (strncmp(str, "zlib", 4) != 0)
> +	unsigned int level = 0;
> +	int ret;
> +
> +	if (!type)
>  		return 0;
>  
> -	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
> -	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
> -		return str[5] - '0';
> +	if (str[0] == ':') {
> +		ret = kstrtouint(str + 1, 10, &level);

The kstrtouint() docs say that an initial '+' is also accepted; I'd
rather keep the level specification strict, i.e. no "zlib:+3" and no
garbage after the number.

The validation is currently missing, but I think we should catch levels
out of range during mount/remount. The fallback to the default is a
safety net, but a wrong specification should be communicated to the
user early.
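
For example, a strict parser along these lines (a hypothetical sketch,
not a concrete proposal) would reject both a leading '+' and trailing
garbage:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

/* Accept only "<digits>": no sign, no whitespace, no trailing junk. */
static int parse_level_strict(const char *s, unsigned int *level)
{
	unsigned int val = 0;

	if (*s == '\0')
		return -EINVAL;
	for (; *s; s++) {
		if (*s < '0' || *s > '9')
			return -EINVAL;	/* rejects "+3", "3x", " 3", ... */
		if (val > (UINT_MAX - (unsigned int)(*s - '0')) / 10)
			return -ERANGE;
		val = val * 10 + (unsigned int)(*s - '0');
	}
	*level = val;
	return 0;
}

int main(void)
{
	unsigned int lvl;

	/* "3" parses (returns 0); "+3" is rejected (returns -EINVAL) */
	printf("%d %d\n", parse_level_strict("3", &lvl),
	       parse_level_strict("+3", &lvl));
	return 0;
}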

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v2 12/12] btrfs: add zstd compression level support
  2019-02-04 20:20 ` [PATCH 12/12] btrfs: add zstd compression level support Dennis Zhou
@ 2019-02-05 19:26   ` Dennis Zhou
  2019-02-06 16:47   ` [PATCH v3 " Dennis Zhou
  1 sibling, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 19:26 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

From 0d8684d1d7b18dfa9d5bc9c74033c6c3b6fecd92 Mon Sep 17 00:00:00 2001
From: Dennis Zhou <dennis@kernel.org>
Date: Sat, 19 Jan 2019 18:51:39 -0800

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow for each
compression type to manage its workspaces independently. This patch
uses this indirection to implement compression level support for zstd.

To manage the additional memory required, each compression level has its
own queue of workspaces. A global LRU is used to help with reclaim.
Reclaim is done via a timer which provides a mechanism to decrease
memory utilization by keeping only workspaces around that are sized
appropriately. Forward progress is guaranteed by a preallocated max
workspace hidden from the LRU.

When getting a workspace, a bitmap is used to identify the populated
levels, scanning upward from the requested level. If a workspace of a
higher level is found, it is used, but its last_used time and place in
the LRU are not updated. If we hit memory pressure, we sleep on the max
level workspace. We continue to rescan in case a smaller workspace
becomes available, but eventually we should be able to obtain the max
level workspace or allocate one again once memory pressure subsides.

The memory requirement for decompression is the same as for level 1, so
decompression can use any available workspace.

The number of workspaces is bounded by the workqueue limit, which is
currently 2 (the percpu limit). The reclaim timer frees
inactive/improperly sized workspaces and is set to 307s to avoid
colliding with transaction commit (every 30s).

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)  Memory (KB)
1       2.658        438.47                910.51            780
2       2.744        364.86                886.55           1004
3       2.801        336.33                828.41           1260
4       2.858        286.71                886.55           1260
5       2.916        212.77                556.84           1388
6       2.363        119.82                990.85           1516
7       3.000        154.06                849.30           1516
8       3.011        159.54                875.03           1772
9       3.025        100.51                940.15           1772
10      3.033        118.97                616.26           1772
11      3.036         94.19                802.11           1772
12      3.037         73.45                931.49           1772
13      3.041         55.17                835.26           2284
14      3.087         44.70                716.78           2547
15      3.126         37.30                878.84           2547

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Omar Sandoval <osandov@osandov.com>
---
v2:
- increase reclaim timer to 307s from 67s
- move from keeping track of time in ns to jiffies
- remove timer in cleanup code (use del_timer_sync)
- use min_t() instead of if statements in .set_level()
- add header text to describe how workspaces are managed
- nofs_flag type -> unsigned long to unsigned int

 fs/btrfs/super.c |   6 +-
 fs/btrfs/zstd.c  | 245 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 242 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index b28dff207383..0ecc513cb56c 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
 				btrfs_set_fs_incompat(info, COMPRESS_LZO);
 				no_compress = 0;
-			} else if (strcmp(args[0].from, "zstd") == 0) {
+			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
 				compress_type = "zstd";
 				info->compress_type = BTRFS_COMPRESS_ZSTD;
+				info->compress_level =
+					btrfs_compress_str2level(
+							 BTRFS_COMPRESS_ZSTD,
+							 args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 2231123fedbe..54219c84320c 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -3,24 +3,45 @@
  * Copyright (c) 2016-present, Facebook, Inc.
  * All rights reserved.
  *
+ * Zstd Workspace Management:
+ * Zstd workspaces have different memory requirements depending on the level.
+ * The zstd workspaces are managed by having individual lists for each level
+ * and a global lru.  Forward progress is maintained by protecting a max level
+ * workspace.
+ *
+ * Getting a workspace is done by using the bitmap to identify the levels that
+ * have available workspaces and scans up.  This lets us recycle higher level
+ * workspaces because of the monotonic memory guarantee.  A workspace's
+ * last_used is only updated if it is being used by the corresponding memory
+ * level.  Putting a workspace involves adding it back to the appropriate places
+ * and adding it back to the lru if necessary.
+ *
+ * A timer is used to reclaim workspaces if they have not been used for
+ * ZSTD_BTRFS_RECLAIM_JIFFIES.  This helps keep only active workspaces around.
+ * The upper bound is provided by the workqueue limit which is 2 (percpu limit).
  */
 
 #include <linux/bio.h>
+#include <linux/bitmap.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/sched/mm.h>
 #include <linux/pagemap.h>
 #include <linux/refcount.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/zstd.h>
 #include "compression.h"
+#include "ctree.h"
 
 #define ZSTD_BTRFS_MAX_WINDOWLOG 17
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
 #define ZSTD_BTRFS_MAX_LEVEL 15
+/* 307s to avoid pathologically clashing with transaction commit */
+#define ZSTD_BTRFS_RECLAIM_JIFFIES (307 * HZ)
 
 static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
 						 size_t src_len)
@@ -37,16 +58,81 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int level;
 	unsigned int req_level;
+	unsigned long last_used; /* jiffies */
 	struct list_head list;
+	struct list_head lru_list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
 };
 
-static struct workspace_manager wsm;
+struct zstd_workspace_manager {
+	const struct btrfs_compress_op *ops;
+	spinlock_t lock;
+	struct list_head lru_list;
+	struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
+	unsigned long active_map;
+	wait_queue_head_t wait;
+	struct timer_list timer;
+};
+
+static struct zstd_workspace_manager wsm;
 
 static size_t zstd_ws_mem_sizes[ZSTD_BTRFS_MAX_LEVEL];
 
+static inline struct workspace *list_to_workspace(struct list_head *list)
+{
+	return container_of(list, struct workspace, list);
+}
+
+/*
+ * zstd_reclaim_timer_fn - reclaim timer
+ * @t: timer
+ *
+ * This scans the lru_list and attempts to reclaim any workspace that hasn't
+ * been used for ZSTD_BTRFS_RECLAIM_JIFFIES.
+ */
+static void zstd_reclaim_timer_fn(struct timer_list *t)
+{
+	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
+	struct list_head *pos, *next;
+
+	spin_lock(&wsm.lock);
+
+	if (list_empty(&wsm.lru_list)) {
+		spin_unlock(&wsm.lock);
+		return;
+	}
+
+	list_for_each_prev_safe(pos, next, &wsm.lru_list) {
+		struct workspace *victim = container_of(pos, struct workspace,
+							lru_list);
+		unsigned int level;
+
+		if (time_after(victim->last_used, reclaim_threshold))
+			break;
+
+		/* workspace is in use */
+		if (victim->req_level)
+			continue;
+
+		level = victim->level;
+		list_del(&victim->lru_list);
+		list_del(&victim->list);
+		wsm.ops->free_workspace(&victim->list);
+
+		if (list_empty(&wsm.idle_ws[level - 1]))
+			clear_bit(level - 1, &wsm.active_map);
+
+	}
+
+	if (!list_empty(&wsm.lru_list))
+		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+
+	spin_unlock(&wsm.lock);
+}
+
 /*
  * zstd_calc_ws_mem_sizes - calculate monotonic memory bounds
  *
@@ -76,29 +162,165 @@ static void zstd_calc_ws_mem_sizes(void)
 
 static void zstd_init_workspace_manager(void)
 {
+	struct list_head *ws;
+	int i;
+
 	zstd_calc_ws_mem_sizes();
 
-	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+	wsm.ops = &btrfs_zstd_compress;
+	spin_lock_init(&wsm.lock);
+	init_waitqueue_head(&wsm.wait);
+	timer_setup(&wsm.timer, zstd_reclaim_timer_fn, 0);
+
+	INIT_LIST_HEAD(&wsm.lru_list);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++)
+		INIT_LIST_HEAD(&wsm.idle_ws[i]);
+
+	ws = wsm.ops->alloc_workspace(ZSTD_BTRFS_MAX_LEVEL);
+	if (IS_ERR(ws)) {
+		pr_warn("BTRFS: cannot preallocate zstd compression workspace\n");
+	} else {
+		set_bit(ZSTD_BTRFS_MAX_LEVEL - 1, &wsm.active_map);
+		list_add(ws, &wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1]);
+	}
 }
 
 static void zstd_cleanup_workspace_manager(void)
 {
-	btrfs_cleanup_workspace_manager(&wsm);
+	struct workspace *workspace;
+	int i;
+
+	del_timer_sync(&wsm.timer);
+
+	spin_lock(&wsm.lock);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++) {
+		while (!list_empty(&wsm.idle_ws[i])) {
+			workspace = container_of(wsm.idle_ws[i].next,
+						 struct workspace, list);
+			list_del(&workspace->list);
+			list_del(&workspace->lru_list);
+			wsm.ops->free_workspace(&workspace->list);
+		}
+	}
+	spin_unlock(&wsm.lock);
+}
+
+/*
+ * zstd_find_workspace - find workspace
+ * @level: compression level
+ *
+ * This iterates over the set bits in the active_map beginning at the requested
+ * compression level.  This lets us utilize already allocated workspaces before
+ * allocating a new one.  If the workspace is of a larger size, it is used, but
+ * the place in the lru_list and last_used times are not updated.  This is to
+ * offer the opportunity to reclaim the workspace in favor of allocating an
+ * appropriately sized one in the future.
+ */
+static struct list_head *zstd_find_workspace(unsigned int level)
+{
+	struct list_head *ws;
+	struct workspace *workspace;
+	int i = level - 1;
+
+	spin_lock(&wsm.lock);
+	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
+		if (!list_empty(&wsm.idle_ws[i])) {
+			ws = wsm.idle_ws[i].next;
+			workspace = list_to_workspace(ws);
+			list_del_init(ws);
+			/* keep its place on the lru if a lower level is using it */
+			workspace->req_level = level;
+			if (level == workspace->level)
+				list_del(&workspace->lru_list);
+			if (list_empty(&wsm.idle_ws[i]))
+				clear_bit(i, &wsm.active_map);
+			spin_unlock(&wsm.lock);
+			return ws;
+		}
+	}
+	spin_unlock(&wsm.lock);
+
+	return NULL;
 }
 
+/*
+ * zstd_get_workspace - zstd's get_workspace
+ * @level: compression level
+ *
+ * If @level is 0, then any compression level can be used.  Therefore, we begin
+ * scanning from 1.  We first scan through possible workspaces and only then
+ * attempt to allocate a new workspace.  If we fail to allocate one due to
+ * memory pressure, go to sleep waiting for the max level workspace to free up.
+ */
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	struct list_head *ws = btrfs_get_workspace(&wsm, level);
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
+	struct list_head *ws;
+	unsigned int nofs_flag;
 
-	workspace->req_level = level;
+	/* level == 0 means we can use any workspace */
+	if (!level)
+		level = 1;
+
+again:
+	ws = zstd_find_workspace(level);
+	if (ws)
+		return ws;
+
+	nofs_flag = memalloc_nofs_save();
+	ws = wsm.ops->alloc_workspace(level);
+	memalloc_nofs_restore(nofs_flag);
+
+	if (IS_ERR(ws)) {
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&wsm.wait, &wait, TASK_UNINTERRUPTIBLE);
+		schedule();
+		finish_wait(&wsm.wait, &wait);
+
+		goto again;
+	}
 
 	return ws;
 }
 
+/*
+ * zstd_put_workspace - zstd put_workspace
+ * @ws: list_head for the workspace
+ *
+ * When putting back a workspace, we only need to update the LRU if the
+ * workspace is of the requested compression level.  Here is where we continue
+ * to protect the max level workspace or update last_used accordingly.  If the
+ * reclaim timer isn't set, it is also set here.  Only the max level workspace
+ * tries to wake up waiters.
+ */
 static void zstd_put_workspace(struct list_head *ws)
 {
-	btrfs_put_workspace(&wsm, ws);
+	struct workspace *workspace = list_to_workspace(ws);
+
+	spin_lock(&wsm.lock);
+
+	/* a node is only taken off the lru if we are the corresponding level */
+	if (workspace->req_level == workspace->level) {
+		/* hide a max level workspace from reclaim */
+		if (list_empty(&wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1])) {
+			INIT_LIST_HEAD(&workspace->lru_list);
+		} else {
+			workspace->last_used = jiffies;
+			list_add(&workspace->lru_list, &wsm.lru_list);
+			if (!timer_pending(&wsm.timer))
+				mod_timer(&wsm.timer,
+					  jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+		}
+	}
+
+	set_bit(workspace->level - 1, &wsm.active_map);
+	list_add(&workspace->list, &wsm.idle_ws[workspace->level - 1]);
+	workspace->req_level = 0;
+
+	spin_unlock(&wsm.lock);
+
+	if (workspace->level == ZSTD_BTRFS_MAX_LEVEL)
+		cond_wake_up(&wsm.wait);
 }
 
 static void zstd_free_workspace(struct list_head *ws)
@@ -121,10 +343,14 @@ static struct list_head *zstd_alloc_workspace(unsigned int level)
 	workspace->size = zstd_ws_mem_sizes[level - 1];
 	workspace->mem = kvmalloc(workspace->size, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
+	workspace->req_level = level;
+	workspace->last_used = jiffies;
 	if (!workspace->mem || !workspace->buf)
 		goto fail;
 
 	INIT_LIST_HEAD(&workspace->list);
+	INIT_LIST_HEAD(&workspace->lru_list);
 
 	return &workspace->list;
 fail:
@@ -478,7 +704,10 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 
 static unsigned int zstd_set_level(unsigned int level)
 {
-	return ZSTD_BTRFS_DEFAULT_LEVEL;
+	if (!level)
+		return ZSTD_BTRFS_DEFAULT_LEVEL;
+
+	return min_t(unsigned int, level, ZSTD_BTRFS_MAX_LEVEL);
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1

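For illustration, the level scan-up in zstd_find_workspace() generalizes to
any small fixed set of levels: because workspace memory is monotonic in
level, any idle workspace at or above the requested level is usable. A
minimal userspace sketch of that lookup (types and names are illustrative,
not the btrfs code):

#include <stdio.h>

#define MAX_LEVEL 15

/* one idle slot per level; non-NULL means a workspace is parked there */
static void *idle_ws[MAX_LEVEL];
static unsigned long active_map;	/* bit i set => idle_ws[i] occupied */

/*
 * Scan the set bits in active_map starting at the requested level.  A hit
 * at a higher level is still usable thanks to the monotonic sizing.
 */
static void *find_workspace(unsigned int level)
{
	for (unsigned int i = level - 1; i < MAX_LEVEL; i++) {
		if (!(active_map & (1UL << i)))
			continue;
		void *ws = idle_ws[i];

		idle_ws[i] = NULL;
		active_map &= ~(1UL << i);
		return ws;
	}
	return NULL;	/* caller falls back to allocating a new workspace */
}

int main(void)
{
	static int dummy;

	idle_ws[14] = &dummy;		/* pretend a level 15 workspace is idle */
	active_map |= 1UL << 14;
	printf("%p\n", find_workspace(3));	/* reuses the level 15 one */
	printf("%p\n", find_workspace(3));	/* NULL: nothing idle now */
	return 0;
}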

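The sleep-and-retry in zstd_get_workspace() is essentially a condition
variable loop: when allocation fails under memory pressure, the caller
sleeps until the preallocated max level workspace comes back. A rough
pthread model of just that path, simplified in that the kernel rescans all
levels after waking and wakes every waiter rather than one (illustrative,
not the kernel API):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t max_ws_freed = PTHREAD_COND_INITIALIZER;
static void *max_ws;	/* preallocated max level workspace, NULL while in use */

/* Try to allocate; on failure, sleep until the max workspace is returned. */
static void *get_workspace(size_t size)
{
	void *ws = malloc(size);	/* stands in for alloc_workspace() */

	if (ws)
		return ws;

	pthread_mutex_lock(&lock);
	while (!max_ws)
		pthread_cond_wait(&max_ws_freed, &lock);
	ws = max_ws;
	max_ws = NULL;
	pthread_mutex_unlock(&lock);
	return ws;
}

/* Returning the max workspace wakes a waiter, mirroring cond_wake_up(). */
static void put_workspace(void *ws, int is_max)
{
	if (!is_max) {
		free(ws);
		return;
	}
	pthread_mutex_lock(&lock);
	max_ws = ws;
	pthread_cond_signal(&max_ws_freed);
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	max_ws = malloc(64);		/* stand-in for the preallocation */
	void *ws = get_workspace(64);

	put_workspace(ws, 0);
	return 0;
}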

* Re: [PATCH 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-05 19:06   ` David Sterba
@ 2019-02-05 19:32     ` Dennis Zhou
  2019-02-05 19:54       ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 19:32 UTC (permalink / raw)
  To: David Sterba
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Feb 05, 2019 at 08:06:37PM +0100, David Sterba wrote:
> On Mon, Feb 04, 2019 at 03:20:05PM -0500, Dennis Zhou wrote:
> > -unsigned int btrfs_compress_str2level(const char *str)
> > +unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
> >  {
> > -	if (strncmp(str, "zlib", 4) != 0)
> > +	unsigned int level;
> > +	int ret;
> > +
> > +	if (!type)
> >  		return 0;
> >  
> > -	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
> > -	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
> > -		return str[5] - '0';
> > +	if (str[0] == ':') {
> > +		ret = kstrtouint(str + 1, 10, &level);
> 
> The docs of kstrtouint say that an initial + is also accepted. I'd rather
> keep the level specification strict, ie. no "zlib:+3" and no garbage
> after the number.
> 
> The validation is currently missing but I think we should catch levels
> out of range during mount/remount. The fallback to default is a safety
> net, but a wrong specification should be communicated to the user early.

Ok. To make sure I understand properly: for an improper level (ie. "30",
"+3", "+3d"), set the level to the default (already done) and pr_warn
about the invalid level?
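Something like this minimal userspace sketch, with strtoul standing in for
kstrtouint (illustrative only, not the btrfs code):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Return the parsed level, or 0 for malformed input ("+3", "3d", ""). */
static unsigned int parse_level(const char *s)
{
	char *end;
	unsigned long v;

	if (!isdigit((unsigned char)s[0]))	/* rejects '+', '-', empty */
		return 0;
	v = strtoul(s, &end, 10);
	if (*end != '\0')			/* rejects trailing garbage */
		return 0;
	return (unsigned int)v;
}

int main(void)
{
	/* prints "9 0 0": only the plain number survives */
	printf("%u %u %u\n", parse_level("9"), parse_level("+3"),
	       parse_level("3d"));
	return 0;
}

Range checking against the per-type maximum would still happen in
set_level() afterwards.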

Thanks,
Dennis


* Re: [PATCH 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-05 19:32     ` Dennis Zhou
@ 2019-02-05 19:54       ` David Sterba
  2019-02-05 19:59         ` Dennis Zhou
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-05 19:54 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 02:32:54PM -0500, Dennis Zhou wrote:
> On Tue, Feb 05, 2019 at 08:06:37PM +0100, David Sterba wrote:
> > On Mon, Feb 04, 2019 at 03:20:05PM -0500, Dennis Zhou wrote:
> > > -unsigned int btrfs_compress_str2level(const char *str)
> > > +unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
> > >  {
> > > -	if (strncmp(str, "zlib", 4) != 0)
> > > +	unsigned int level;
> > > +	int ret;
> > > +
> > > +	if (!type)
> > >  		return 0;
> > >  
> > > -	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
> > > -	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
> > > -		return str[5] - '0';
> > > +	if (str[0] == ':') {
> > > +		ret = kstrtouint(str + 1, 10, &level);
> > 
> > The docs of kstrtouint say that an initial + is also accepted. I'd rather
> > keep the level specification strict, ie. no "zlib:+3" and no garbage
> > after the number.
> > 
> > The validation is currently missing but I think we should catch levels
> > out of range during mount/remount. The fallback to default is a safety
> > net, but a wrong specification should be communicated to the user early.
> 
> Ok. To make sure I understand properly: for an improper level (ie. "30",
> "+3", "+3d"), set the level to the default (already done) and pr_warn
> about the invalid level?

So we have (at least) two ways to handle this:

- warn and fail the mount -- catches typos and the like; continuing with
  the default could cause problems later, though not severe ones as the
  compression would merely be different than expected, but this could be
  considered a usability bug

- only warn and continue with the default -- not mounting a root
  filesystem just because of a typo can be worse than mounting with
  the wrong level; as long as there's a warning in the log, the user still
  has a working system and can fix it manually

Both have some pros and cons and initially I was more inclined to pick
the 1st option, but now that I'm thinking about that again, the other
has some merit.

Given that zlib now falls back to default for unrecognized level, we
should probably stick to that for zstd too.


* Re: [PATCH 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-05 19:54       ` David Sterba
@ 2019-02-05 19:59         ` Dennis Zhou
  0 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 19:59 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 08:54:15PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 02:32:54PM -0500, Dennis Zhou wrote:
> > On Tue, Feb 05, 2019 at 08:06:37PM +0100, David Sterba wrote:
> > > On Mon, Feb 04, 2019 at 03:20:05PM -0500, Dennis Zhou wrote:
> > > > -unsigned int btrfs_compress_str2level(const char *str)
> > > > +unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
> > > >  {
> > > > -	if (strncmp(str, "zlib", 4) != 0)
> > > > +	unsigned int level;
> > > > +	int ret;
> > > > +
> > > > +	if (!type)
> > > >  		return 0;
> > > >  
> > > > -	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
> > > > -	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
> > > > -		return str[5] - '0';
> > > > +	if (str[0] == ':') {
> > > > +		ret = kstrtouint(str + 1, 10, &level);
> > > 
> > > The docs of kstrtouint say that an initial + is also accepted. I'd rather
> > > keep the level specification strict, ie. no "zlib:+3" and no garbage
> > > after the number.
> > > 
> > > The validation is currently missing but I think we should catch levels
> > > out of range during mount/remount. The fallback to default is a safety
> > > net, but a wrong specification should be communicated to the user early.
> > 
> > Ok. To make sure I understand properly: for an improper level (ie. "30",
> > "+3", "+3d"), set the level to the default (already done) and pr_warn
> > about the invalid level?
> 
> So we have (at least) two ways to handle this:
> 
> - warn and fail the mount -- catches typos and the like; continuing with
>   the default could cause problems later, though not severe ones as the
>   compression would merely be different than expected, but this could be
>   considered a usability bug
> 
> - only warn and continue with the default -- not mounting a root
>   filesystem just because of a typo can be worse than mounting with
>   the wrong level; as long as there's a warning in the log, the user still
>   has a working system and can fix it manually
> 
> Both have some pros and cons and initially I was more inclined to pick
> the 1st option, but now that I'm thinking about that again, the other
> has some merit.
> 
> Given that zlib now falls back to default for unrecognized level, we
> should probably stick to that for zstd too.

Ok. With that I'll run a v3 adding a warning for a '+' or invalid input.

I think moving compression to being a property rather than a mount
option would be ideal. That would enable multiple compression levels per
mount and prevent remounting with incorrect mount options.

Thanks,
Dennis


* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 18:30               ` Dennis Zhou
@ 2019-02-05 20:48                 ` Dennis Zhou
  2019-02-06 15:15                   ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-05 20:48 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 01:30:27PM -0500, Dennis Zhou wrote:
> On Tue, Feb 05, 2019 at 07:27:49PM +0100, David Sterba wrote:
> > On Tue, Feb 05, 2019 at 06:07:49PM +0100, David Sterba wrote:
> > > On Tue, Feb 05, 2019 at 05:51:13PM +0100, David Sterba wrote:
> > > > On Tue, Feb 05, 2019 at 11:30:12AM -0500, Dennis Zhou wrote:
> > > > > > > > Something is wrong, the patchset on top of 5.0-rc5 hangs in test
> > > > > > > > btrfs/007, without a stacktrace. V1 was fine and I double checked that
> > > > > > > > rc5 itself is fine.
> > > > > > > 
> > > > > > > Hmmm, well that's awkward. I ran xfstests and that test passed on my
> > > > > > > machine. I'll figure out the delta and submit a v3. Thanks David!
> > > > > > 
> > > > > > It failed on a VM and 2 other physical machines, so it's not something
> > > > > > related to the testing setup.
> > > > > 
> > > > > Oh, I wasn't alluding to that. I'm more than sure it's my fault
> > > > > somewhere along the line.
> > > > 
> > > > Hang on, it's probably caused by fstests, as fssum is busy waiting
> > > > somewhere, so it's not even kernel code.
> > > > 
> > > > fstests a4235f29f780b15d9e87c0f2446a53b9e69c5e2d -- ok
> > > > fstests 6d17c9076fe9d97d2411e5909e6b255e5e721fc3 (master) -- hangs
> > > 
> > > Git bisect points to
> > > 
> > > commit 04b405fa1f8d0cacde489f3d1c8771dfe7a34dc0
> > > Author: Zorro Lang <zlang@redhat.com>
> > > Date:   Wed Jan 23 15:34:55 2019 +0800
> > > 
> > >     common/dump: disable splice from FSSTRESS_AVOID
> > 
> > So with this commit reverted, test btrfs/007 does not hang anymore.
> 
> Ok great! I'm going to send a v2 of 0012 that takes the spin_lock in
> cleanup just to be safe. I should have that ready in a few minutes.
> 

Before I send v3, are there any other concerns (assuming the changes I
made to address the ones mentioned make sense)?

Thanks,
Dennis


* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-05 20:48                 ` Dennis Zhou
@ 2019-02-06 15:15                   ` David Sterba
  2019-02-06 16:51                     ` Dennis Zhou
  0 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-06 15:15 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Tue, Feb 05, 2019 at 03:48:48PM -0500, Dennis Zhou wrote:
> > Ok great! I'm going to send a v2 of 0012 that takes the spin_lock in
> > cleanup just to be safe. I should have that ready in a few minutes.
> > 
> 
> Before I send v3, are there any other concerns (assuming the changes I
> made to address the ones mentioned make sense)?

You don't need to do a full v3 resend; either send new versions of the
individual patches or incremental changes that can be squashed into the
patch. I'm about to move the patchset from the topic branch to misc-next,
so incrementals work better now as no big updates are expected.


* Re: [PATCH v3 12/12] btrfs: add zstd compression level support
  2019-02-04 20:20 ` [PATCH 12/12] btrfs: add zstd compression level support Dennis Zhou
  2019-02-05 19:26   ` [PATCH v2 " Dennis Zhou
@ 2019-02-06 16:47   ` Dennis Zhou
  1 sibling, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-06 16:47 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

From 16b7c3fe05984a95436da1e9e01c80de1fdbba25 Mon Sep 17 00:00:00 2001
From: Dennis Zhou <dennis@kernel.org>
Date: Sat, 19 Jan 2019 18:51:39 -0800

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow for each
compression type to manage their workspaces independently. This patch
uses this indirection to implement compression level support for zstd.

To manage the additional memory required, each compression level has its
own queue of workspaces. A global LRU is used to help with reclaim.
Reclaim is done via a timer which provides a mechanism to decrease
memory utilization by keeping around only workspaces that are sized
appropriately. Forward progress is guaranteed by a preallocated max
workspace hidden from the LRU.

When getting a workspace, it uses a bitmap to identify the levels that
are populated and scans up. If it finds a workspace of a higher level than
requested, it uses it, but does not update the last_used time or the
corresponding place in the LRU. If we hit memory pressure, we sleep on
the max level workspace. We continue to rescan in case we can use a
smaller workspace, but eventually should be able to obtain the max level
workspace or allocate one again should memory pressure subside.

The memory requirement for decompression is the same as level 1, and
therefore decompression can use any available workspace.

The number of workspaces is bounded above by the workqueue limit, which
is currently 2 (percpu limit). The reclaim timer is used to
free inactive/improperly sized workspaces and is set to 307s to avoid
colliding with transaction commit (every 30s).

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)  Memory (KB)
1       2.658        438.47                910.51            780
2       2.744        364.86                886.55           1004
3       2.801        336.33                828.41           1260
4       2.858        286.71                886.55           1260
5       2.916        212.77                556.84           1388
6       2.363        119.82                990.85           1516
7       3.000        154.06                849.30           1516
8       3.011        159.54                875.03           1772
9       3.025        100.51                940.15           1772
10      3.033        118.97                616.26           1772
11      3.036         94.19                802.11           1772
12      3.037         73.45                931.49           1772
13      3.041         55.17                835.26           2284
14      3.087         44.70                716.78           2547
15      3.126         37.30                878.84           2547

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Omar Sandoval <osandov@osandov.com>
---
v3:
- warn on level out of bounds

v2:
- increase reclaim timer to 307s from 67s
- move from keeping track of time in ns to jiffies
- remove timer in cleanup code (use del_timer_sync)
- use min_t() instead of if statements in .set_level()
- add header text to describe how workspaces are managed
- nofs_flag type changed from unsigned long to unsigned int



 fs/btrfs/super.c |   6 +-
 fs/btrfs/zstd.c  | 250 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 247 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index b28dff207383..0ecc513cb56c 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
 				btrfs_set_fs_incompat(info, COMPRESS_LZO);
 				no_compress = 0;
-			} else if (strcmp(args[0].from, "zstd") == 0) {
+			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
 				compress_type = "zstd";
 				info->compress_type = BTRFS_COMPRESS_ZSTD;
+				info->compress_level =
+					btrfs_compress_str2level(
+							 BTRFS_COMPRESS_ZSTD,
+							 args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 2231123fedbe..9946cbea624a 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -3,24 +3,45 @@
  * Copyright (c) 2016-present, Facebook, Inc.
  * All rights reserved.
  *
+ * Zstd Workspace Management:
+ * Zstd workspaces have different memory requirements depending on the level.
+ * The zstd workspaces are managed by having individual lists for each level
+ * and a global lru.  Forward progress is maintained by protecting a max level
+ * workspace.
+ *
+ * Getting a workspace is done by using the bitmap to identify the levels that
+ * have available workspaces and scanning up.  This lets us recycle higher level
+ * workspaces because of the monotonic memory guarantee.  A workspace's
+ * last_used is only updated if it is being used by the corresponding
+ * compression level.  Putting a workspace involves adding it back to the
+ * appropriate places and adding it back to the lru if necessary.
+ *
+ * A timer is used to reclaim workspaces if they have not been used for
+ * ZSTD_BTRFS_RECLAIM_JIFFIES.  This helps keep only active workspaces around.
+ * The upper bound is provided by the workqueue limit which is 2 (percpu limit).
  */
 
 #include <linux/bio.h>
+#include <linux/bitmap.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/sched/mm.h>
 #include <linux/pagemap.h>
 #include <linux/refcount.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/zstd.h>
 #include "compression.h"
+#include "ctree.h"
 
 #define ZSTD_BTRFS_MAX_WINDOWLOG 17
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
 #define ZSTD_BTRFS_MAX_LEVEL 15
+/* 307s to avoid pathologically clashing with transaction commit */
+#define ZSTD_BTRFS_RECLAIM_JIFFIES (307 * HZ)
 
 static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
 						 size_t src_len)
@@ -37,16 +58,81 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int level;
 	unsigned int req_level;
+	unsigned long last_used; /* jiffies */
 	struct list_head list;
+	struct list_head lru_list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
 };
 
-static struct workspace_manager wsm;
+struct zstd_workspace_manager {
+	const struct btrfs_compress_op *ops;
+	spinlock_t lock;
+	struct list_head lru_list;
+	struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
+	unsigned long active_map;
+	wait_queue_head_t wait;
+	struct timer_list timer;
+};
+
+static struct zstd_workspace_manager wsm;
 
 static size_t zstd_ws_mem_sizes[ZSTD_BTRFS_MAX_LEVEL];
 
+static inline struct workspace *list_to_workspace(struct list_head *list)
+{
+	return container_of(list, struct workspace, list);
+}
+
+/*
+ * zstd_reclaim_timer_fn - reclaim timer
+ * @t: timer
+ *
+ * This scans the lru_list and attempts to reclaim any workspace that hasn't
+ * been used for ZSTD_BTRFS_RECLAIM_JIFFIES.
+ */
+static void zstd_reclaim_timer_fn(struct timer_list *t)
+{
+	unsigned long reclaim_threshold = jiffies - ZSTD_BTRFS_RECLAIM_JIFFIES;
+	struct list_head *pos, *next;
+
+	spin_lock(&wsm.lock);
+
+	if (list_empty(&wsm.lru_list)) {
+		spin_unlock(&wsm.lock);
+		return;
+	}
+
+	list_for_each_prev_safe(pos, next, &wsm.lru_list) {
+		struct workspace *victim = container_of(pos, struct workspace,
+							lru_list);
+		unsigned int level;
+
+		if (time_after(victim->last_used, reclaim_threshold))
+			break;
+
+		/* workspace is in use */
+		if (victim->req_level)
+			continue;
+
+		level = victim->level;
+		list_del(&victim->lru_list);
+		list_del(&victim->list);
+		wsm.ops->free_workspace(&victim->list);
+
+		if (list_empty(&wsm.idle_ws[level - 1]))
+			clear_bit(level - 1, &wsm.active_map);
+
+	}
+
+	if (!list_empty(&wsm.lru_list))
+		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+
+	spin_unlock(&wsm.lock);
+}
+
 /*
  * zstd_calc_ws_mem_sizes - calculate monotonic memory bounds
  *
@@ -76,29 +162,165 @@ static void zstd_calc_ws_mem_sizes(void)
 
 static void zstd_init_workspace_manager(void)
 {
+	struct list_head *ws;
+	int i;
+
 	zstd_calc_ws_mem_sizes();
 
-	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+	wsm.ops = &btrfs_zstd_compress;
+	spin_lock_init(&wsm.lock);
+	init_waitqueue_head(&wsm.wait);
+	timer_setup(&wsm.timer, zstd_reclaim_timer_fn, 0);
+
+	INIT_LIST_HEAD(&wsm.lru_list);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++)
+		INIT_LIST_HEAD(&wsm.idle_ws[i]);
+
+	ws = wsm.ops->alloc_workspace(ZSTD_BTRFS_MAX_LEVEL);
+	if (IS_ERR(ws)) {
+		pr_warn("BTRFS: cannot preallocate zstd compression workspace\n");
+	} else {
+		set_bit(ZSTD_BTRFS_MAX_LEVEL - 1, &wsm.active_map);
+		list_add(ws, &wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1]);
+	}
 }
 
 static void zstd_cleanup_workspace_manager(void)
 {
-	btrfs_cleanup_workspace_manager(&wsm);
+	struct workspace *workspace;
+	int i;
+
+	del_timer_sync(&wsm.timer);
+
+	spin_lock(&wsm.lock);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++) {
+		while (!list_empty(&wsm.idle_ws[i])) {
+			workspace = container_of(wsm.idle_ws[i].next,
+						 struct workspace, list);
+			list_del(&workspace->list);
+			list_del(&workspace->lru_list);
+			wsm.ops->free_workspace(&workspace->list);
+		}
+	}
+	spin_unlock(&wsm.lock);
 }
 
+/*
+ * zstd_find_workspace - find workspace
+ * @level: compression level
+ *
+ * This iterates over the set bits in the active_map beginning at the requested
+ * compression level.  This lets us utilize already allocated workspaces before
+ * allocating a new one.  If the workspace is of a larger size, it is used, but
+ * the place in the lru_list and last_used times are not updated.  This is to
+ * offer the opportunity to reclaim the workspace in favor of allocating an
+ * appropriately sized one in the future.
+ */
+static struct list_head *zstd_find_workspace(unsigned int level)
+{
+	struct list_head *ws;
+	struct workspace *workspace;
+	int i = level - 1;
+
+	spin_lock(&wsm.lock);
+	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
+		if (!list_empty(&wsm.idle_ws[i])) {
+			ws = wsm.idle_ws[i].next;
+			workspace = list_to_workspace(ws);
+			list_del_init(ws);
+			/* keep its place on the lru if a lower level is using it */
+			workspace->req_level = level;
+			if (level == workspace->level)
+				list_del(&workspace->lru_list);
+			if (list_empty(&wsm.idle_ws[i]))
+				clear_bit(i, &wsm.active_map);
+			spin_unlock(&wsm.lock);
+			return ws;
+		}
+	}
+	spin_unlock(&wsm.lock);
+
+	return NULL;
+}
+
+/*
+ * zstd_get_workspace - zstd's get_workspace
+ * @level: compression level
+ *
+ * If @level is 0, then any compression level can be used.  Therefore, we begin
+ * scanning from 1.  We first scan through possible workspaces and only then
+ * attempt to allocate a new workspace.  If we fail to allocate one due to
+ * memory pressure, go to sleep waiting for the max level workspace to free up.
+ */
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	struct list_head *ws = btrfs_get_workspace(&wsm, level);
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
+	struct list_head *ws;
+	unsigned int nofs_flag;
 
-	workspace->req_level = level;
+	/* level == 0 means we can use any workspace */
+	if (!level)
+		level = 1;
+
+again:
+	ws = zstd_find_workspace(level);
+	if (ws)
+		return ws;
+
+	nofs_flag = memalloc_nofs_save();
+	ws = wsm.ops->alloc_workspace(level);
+	memalloc_nofs_restore(nofs_flag);
+
+	if (IS_ERR(ws)) {
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&wsm.wait, &wait, TASK_UNINTERRUPTIBLE);
+		schedule();
+		finish_wait(&wsm.wait, &wait);
+
+		goto again;
+	}
 
 	return ws;
 }
 
+/*
+ * zstd_put_workspace - zstd put_workspace
+ * @ws: list_head for the workspace
+ *
+ * When putting back a workspace, we only need to update the LRU if the
+ * workspace is of the requested compression level.  Here is where we continue
+ * to protect the max level workspace or update last_used accordingly.  If the
+ * reclaim timer isn't set, it is also set here.  Only the max level workspace
+ * tries to wake up waiters.
+ */
 static void zstd_put_workspace(struct list_head *ws)
 {
-	btrfs_put_workspace(&wsm, ws);
+	struct workspace *workspace = list_to_workspace(ws);
+
+	spin_lock(&wsm.lock);
+
+	/* a node is only taken off the lru if we are the corresponding level */
+	if (workspace->req_level == workspace->level) {
+		/* hide a max level workspace from reclaim */
+		if (list_empty(&wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1])) {
+			INIT_LIST_HEAD(&workspace->lru_list);
+		} else {
+			workspace->last_used = jiffies;
+			list_add(&workspace->lru_list, &wsm.lru_list);
+			if (!timer_pending(&wsm.timer))
+				mod_timer(&wsm.timer,
+					  jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+		}
+	}
+
+	set_bit(workspace->level - 1, &wsm.active_map);
+	list_add(&workspace->list, &wsm.idle_ws[workspace->level - 1]);
+	workspace->req_level = 0;
+
+	spin_unlock(&wsm.lock);
+
+	if (workspace->level == ZSTD_BTRFS_MAX_LEVEL)
+		cond_wake_up(&wsm.wait);
 }
 
 static void zstd_free_workspace(struct list_head *ws)
@@ -121,10 +343,14 @@ static struct list_head *zstd_alloc_workspace(unsigned int level)
 	workspace->size = zstd_ws_mem_sizes[level - 1];
 	workspace->mem = kvmalloc(workspace->size, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
+	workspace->req_level = level;
+	workspace->last_used = jiffies;
 	if (!workspace->mem || !workspace->buf)
 		goto fail;
 
 	INIT_LIST_HEAD(&workspace->list);
+	INIT_LIST_HEAD(&workspace->lru_list);
 
 	return &workspace->list;
 fail:
@@ -478,7 +704,15 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 
 static unsigned int zstd_set_level(unsigned int level)
 {
-	return ZSTD_BTRFS_DEFAULT_LEVEL;
+	if (!level) {
+		return ZSTD_BTRFS_DEFAULT_LEVEL;
+	} else if (level > ZSTD_BTRFS_MAX_LEVEL) {
+		level = ZSTD_BTRFS_MAX_LEVEL;
+		pr_warn("BTRFS: zstd level > max level, using max level: %u\n",
+			level);
+	}
+
+	return level;
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


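The reclaim pass in zstd_reclaim_timer_fn() above is an age-ordered scan of
the LRU against a cutoff. A userspace sketch with a seconds counter standing
in for jiffies and time_after() (illustrative, not the kernel code):

#include <stdio.h>
#include <time.h>

#define RECLAIM_SECS 307

struct ws {
	struct ws *next;	/* LRU, newest first */
	time_t last_used;
	int in_use;
};

/*
 * Drop idle workspaces older than the threshold.  The kernel walks the
 * list from the oldest end and can stop at the first young entry; this
 * sketch simply checks every node.
 */
static struct ws *reclaim(struct ws *lru, time_t now)
{
	struct ws **pp = &lru;

	while (*pp) {
		struct ws *v = *pp;

		if (now - v->last_used < RECLAIM_SECS || v->in_use) {
			pp = &v->next;
			continue;
		}
		*pp = v->next;	/* unlink the victim; a real pool frees it */
	}
	return lru;
}

int main(void)
{
	time_t now = time(NULL);
	struct ws old = { NULL, now - 400, 0 };
	struct ws young = { &old, now - 10, 0 };
	struct ws *lru = reclaim(&young, now);

	printf("%s\n", lru->next ? "old survived" : "old reclaimed");
	return 0;
}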

* Re: [PATCH v2 09/12] btrfs: change set_level() to bound the level passed in
  2019-02-04 20:20 ` [PATCH 09/12] btrfs: change set_level() to bound the level passed in Dennis Zhou
  2019-02-05 19:06   ` David Sterba
@ 2019-02-06 16:48   ` Dennis Zhou
  1 sibling, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-06 16:48 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov
  Cc: kernel-team, linux-btrfs, linux-kernel

From ef64a8d1f4ec44f52bd13a825288a91667121708 Mon Sep 17 00:00:00 2001
From: Dennis Zhou <dennis@kernel.org>
Date: Thu, 17 Jan 2019 12:13:27 -0800

Currently, the only user of set_level() is zlib which sets an internal
workspace parameter. As level is now plumbed into get_workspace(), this
can be handled there rather than separately.

This repurposes set_level() to bound the level passed in so it can be
used when setting the mount's compression level as well as verifying
the level before getting a workspace. The other benefit is this divides
the meaning of compress(0) and get_workspace(0). The former means we
want to use the default compression level of the compression type. The
latter means we can use any workspace available.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
v2:
- warn on bad level format eg: zlib:c100
- warn on level out of bounds eg: zlib:10

 fs/btrfs/compression.c | 27 +++++++++++++++++++--------
 fs/btrfs/compression.h | 10 ++++++++--
 fs/btrfs/lzo.c         |  3 ++-
 fs/btrfs/super.c       |  4 +++-
 fs/btrfs/zlib.c        | 21 ++++++++++++++-------
 fs/btrfs/zstd.c        |  3 ++-
 6 files changed, 48 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index a54b210739bc..560a926c2e2f 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1007,8 +1007,6 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	int ret;
 
 	workspace = get_workspace(type, level);
-
-	btrfs_compress_op[type]->set_level(workspace, level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
@@ -1561,14 +1559,27 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	return ret;
 }
 
-unsigned int btrfs_compress_str2level(const char *str)
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
 {
-	if (strncmp(str, "zlib", 4) != 0)
+	unsigned int level = 0;
+	int ret;
+
+	if (!type)
 		return 0;
 
-	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
-	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
-		return str[5] - '0';
+	if (str[0] == ':') {
+		if (str[1] != '+') {
+			ret = kstrtouint(str + 1, 10, &level);
+			if (!ret)
+				goto set_level;
+		}
+
+		pr_warn("BTRFS: invalid compression level %s, using default\n",
+			str + 1);
+	}
+
+set_level:
+	level = btrfs_compress_op[type]->set_level(level);
 
-	return BTRFS_ZLIB_DEFAULT_LEVEL;
+	return level;
 }
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 32ba40ebae1d..3f2d36b0aabe 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -97,7 +97,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
 blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 				 int mirror_num, unsigned long bio_flags);
 
-unsigned btrfs_compress_str2level(const char *str);
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str);
 
 enum btrfs_compression_type {
 	BTRFS_COMPRESS_NONE  = 0,
@@ -156,7 +156,13 @@ struct btrfs_compress_op {
 			  unsigned long start_byte,
 			  size_t srclen, size_t destlen);
 
-	void (*set_level)(struct list_head *ws, unsigned int type);
+	/*
+	 * This bounds the level set by the user to be within range of
+	 * a particular compression type.  It returns the level that
+	 * will be used if the level is out of bounds or the default
+	 * if 0 is passed in.
+	 */
+	unsigned int (*set_level)(unsigned int level);
 };
 
 /* the heuristic workspaces are managed via the 0th workspace manager */
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index f132af45a924..579d53ae256f 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -507,8 +507,9 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void lzo_set_level(struct list_head *ws, unsigned int type)
+static unsigned int lzo_set_level(unsigned int level)
 {
+	return 0;
 }
 
 const struct btrfs_compress_op btrfs_lzo_compress = {
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index c5586ffd1426..b28dff207383 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -529,7 +529,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				if (token != Opt_compress &&
 				    token != Opt_compress_force)
 					info->compress_level =
-					  btrfs_compress_str2level(args[0].from);
+					  btrfs_compress_str2level(
+							BTRFS_COMPRESS_ZLIB,
+							args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index c0ef09bef3b3..439dad221e9a 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -41,7 +41,12 @@ static void zlib_cleanup_workspace_manager(void)
 
 static struct list_head *zlib_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm, level);
+	struct list_head *ws = btrfs_get_workspace(&wsm, level);
+	struct workspace *workspace = list_entry(ws, struct workspace, list);
+
+	workspace->level = level;
+
+	return ws;
 }
 
 static void zlib_put_workspace(struct list_head *ws)
@@ -413,15 +418,17 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zlib_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zlib_set_level(unsigned int level)
 {
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	unsigned int level = btrfs_compress_level(type);
-
-	if (level > 9)
+	if (!level) {
+		return BTRFS_ZLIB_DEFAULT_LEVEL;
+	} else if (level > 9) {
 		level = 9;
+		pr_warn("BTRFS: zlib level > max level, using max level: %u\n",
+			level);
+	}
 
-	workspace->level = level > 0 ? level : 3;
+	return level;
 }
 
 const struct btrfs_compress_op btrfs_zlib_compress = {
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 404101864220..43f3be755b8c 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -441,8 +441,9 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zstd_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zstd_set_level(unsigned int level)
 {
+	return ZSTD_BTRFS_DEFAULT_LEVEL;
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


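The bounding contract that set_level() now carries is small enough to sketch
in userspace; this mirrors the zstd callback above, with fprintf standing in
for pr_warn (illustrative only, not the btrfs code):

#include <stdio.h>

#define ZSTD_DEFAULT_LEVEL 3
#define ZSTD_MAX_LEVEL 15

/* 0 selects the default; anything above the max is clamped with a warning. */
static unsigned int set_level(unsigned int level)
{
	if (!level)
		return ZSTD_DEFAULT_LEVEL;
	if (level > ZSTD_MAX_LEVEL) {
		fprintf(stderr, "level %u > max, using %u\n",
			level, ZSTD_MAX_LEVEL);
		level = ZSTD_MAX_LEVEL;
	}
	return level;
}

int main(void)
{
	/* prints "3 9 15" */
	printf("%u %u %u\n", set_level(0), set_level(9), set_level(30));
	return 0;
}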

* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-06 15:15                   ` David Sterba
@ 2019-02-06 16:51                     ` Dennis Zhou
  0 siblings, 0 replies; 34+ messages in thread
From: Dennis Zhou @ 2019-02-06 16:51 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Wed, Feb 06, 2019 at 04:15:52PM +0100, David Sterba wrote:
> On Tue, Feb 05, 2019 at 03:48:48PM -0500, Dennis Zhou wrote:
> > > Ok great! I'm going to send a v2 of 0012 that takes the spin_lock in
> > > cleanup just to be safe. I should have that ready in a few minutes.
> > > 
> > 
> > Before I send v3, are there any other concerns (assuming the changes I
> > made to address the ones mentioned make sense)?
> 
> You don't need to do a full v3 resend; either send new versions of the
> individual patches or incremental changes that can be squashed into the
> patch. I'm about to move the patchset from the topic branch to misc-next,
> so incrementals work better now as no big updates are expected.

Ok great! I just sent a v2 of 0009 and a v3 of 0012 which include the
changes for warning on incorrect input and level out of bounds. I did
forget to take out the "Re:" though... whoops.

Thanks,
Dennis


* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
                   ` (12 preceding siblings ...)
  2019-02-05 14:57 ` [PATCH v2 00/12] " David Sterba
@ 2019-02-07 16:59 ` David Sterba
  2019-02-07 17:36   ` Dennis Zhou
  13 siblings, 1 reply; 34+ messages in thread
From: David Sterba @ 2019-02-07 16:59 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> Dennis Zhou (12):
>   btrfs: add helpers for compression type and level
>   btrfs: rename workspaces_list to workspace_manager
>   btrfs: manage heuristic workspace as index 0
>   btrfs: unify compression ops with workspace_manager
>   btrfs: add helper methods for workspace manager init and cleanup
>   btrfs: add compression interface in (get/put)_workspace()
>   btrfs: move to fn pointers for get/put workspaces
>   btrfs: plumb level through the compression interface
>   btrfs: change set_level() to bound the level passed in
>   btrfs: zstd use the passed through level instead of default
>   btrfs: make zstd memory requirements monotonic
>   btrfs: add zstd compression level support

The patchset has been added to misc-next, scheduled for 5.1. I've left
out the changes to warn on wrong level, so currently it falls back to
the default. I don't want to make a last minute change of that sort, so
this needs to go as a separate patch.

I also did a last pass through all patches and made some trivial changes
like reformatting comments or minor style updates, nothing functional.
The inter-branch diff can be found at
https://github.com/kdave/btrfs-devel/compare/ext/dzhou/compr-workspaces-v2..kdave:ext/dzhou/compr-workspaces

The feature should be ready to use, though as mentioned before some fine
tuning might be necessary to make the memory requirements more
efficient. The code could be simplified or cleaned up now that there are
3 compressors; at this point, stabilization fixes are preferred.

Thanks.


* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-07 16:59 ` David Sterba
@ 2019-02-07 17:36   ` Dennis Zhou
  2019-02-08 13:22     ` David Sterba
  0 siblings, 1 reply; 34+ messages in thread
From: Dennis Zhou @ 2019-02-07 17:36 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, Nikolay Borisov, kernel-team,
	linux-btrfs, linux-kernel

On Thu, Feb 07, 2019 at 05:59:26PM +0100, David Sterba wrote:
> On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> > Dennis Zhou (12):
> >   btrfs: add helpers for compression type and level
> >   btrfs: rename workspaces_list to workspace_manager
> >   btrfs: manage heuristic workspace as index 0
> >   btrfs: unify compression ops with workspace_manager
> >   btrfs: add helper methods for workspace manager init and cleanup
> >   btrfs: add compression interface in (get/put)_workspace()
> >   btrfs: move to fn pointers for get/put workspaces
> >   btrfs: plumb level through the compression interface
> >   btrfs: change set_level() to bound the level passed in
> >   btrfs: zstd use the passed through level instead of default
> >   btrfs: make zstd memory requirements monotonic
> >   btrfs: add zstd compression level support
> 
> The patchset has been added to misc-next, scheduled for 5.1. I've left
> out the changes to warn on wrong level, so currently it falls back to
> the default. I don't want to make a last minute change of that sort, so
> this needs to go as a separate patch.

Regarding the bug report from Dan, do you want me to send you a patch
for it on top of the topic branch, or would you mind fixing the missing:

unsigned int level = 0;

> 
> I also did a last pass through all patches and made some trivial changes
> like reformatting comments or minor style updates, nothing functional.
> The inter-branch diff can be found at
> https://github.com/kdave/btrfs-devel/compare/ext/dzhou/compr-workspaces-v2..kdave:ext/dzhou/compr-workspaces
> 
> The feature should be ready to use, though as mentioned before some fine
> tuning might be necessary to make the memory requirements more
> efficient. The code could be simplified or cleaned up now that there are
> 3 compressors, at this point stabilization fixes are preferred.
> 
> Thanks.

Thanks David!
Dennis


* Re: [PATCH v2 00/12] btrfs: add zstd compression level support
  2019-02-07 17:36   ` Dennis Zhou
@ 2019-02-08 13:22     ` David Sterba
  0 siblings, 0 replies; 34+ messages in thread
From: David Sterba @ 2019-02-08 13:22 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, Nikolay Borisov, kernel-team, linux-btrfs,
	linux-kernel

On Thu, Feb 07, 2019 at 12:36:54PM -0500, Dennis Zhou wrote:
> On Thu, Feb 07, 2019 at 05:59:26PM +0100, David Sterba wrote:
> > On Mon, Feb 04, 2019 at 03:19:56PM -0500, Dennis Zhou wrote:
> > > Dennis Zhou (12):
> > >   btrfs: add helpers for compression type and level
> > >   btrfs: rename workspaces_list to workspace_manager
> > >   btrfs: manage heuristic workspace as index 0
> > >   btrfs: unify compression ops with workspace_manager
> > >   btrfs: add helper methods for workspace manager init and cleanup
> > >   btrfs: add compression interface in (get/put)_workspace()
> > >   btrfs: move to fn pointers for get/put workspaces
> > >   btrfs: plumb level through the compression interface
> > >   btrfs: change set_level() to bound the level passed in
> > >   btrfs: zstd use the passed through level instead of default
> > >   btrfs: make zstd memory requirements monotonic
> > >   btrfs: add zstd compression level support
> > 
> > The patchset has been added to misc-next, scheduled for 5.1. I've left
> > out the changes to warn on wrong level, so currently it falls back to
> > the default. I don't want to make a last minute change of that sort, so
> > this needs to go as a separate patch.
> 
> Regarding the bug report from Dan, do you want me to send you a patch
> for it on top of the topic branch or do you mind fixing the missing:
> 
> unsigned int level = 0;

It's been fixed in the pushed branch.


end of thread

Thread overview: 34+ messages
2019-02-04 20:19 [PATCH v2 00/12] btrfs: add zstd compression level support Dennis Zhou
2019-02-04 20:19 ` [PATCH 01/12] btrfs: add helpers for compression type and level Dennis Zhou
2019-02-04 20:19 ` [PATCH 02/12] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
2019-02-04 20:19 ` [PATCH 03/12] btrfs: manage heuristic workspace as index 0 Dennis Zhou
2019-02-04 20:20 ` [PATCH 04/12] btrfs: unify compression ops with workspace_manager Dennis Zhou
2019-02-04 20:20 ` [PATCH 05/12] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
2019-02-04 20:20 ` [PATCH 06/12] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
2019-02-04 20:20 ` [PATCH 07/12] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
2019-02-04 20:20 ` [PATCH 08/12] btrfs: plumb level through the compression interface Dennis Zhou
2019-02-04 20:20 ` [PATCH 09/12] btrfs: change set_level() to bound the level passed in Dennis Zhou
2019-02-05 19:06   ` David Sterba
2019-02-05 19:32     ` Dennis Zhou
2019-02-05 19:54       ` David Sterba
2019-02-05 19:59         ` Dennis Zhou
2019-02-06 16:48   ` [PATCH v2 " Dennis Zhou
2019-02-04 20:20 ` [PATCH 10/12] btrfs: zstd use the passed through level instead of default Dennis Zhou
2019-02-04 20:20 ` [PATCH 11/12] btrfs: make zstd memory requirements monotonic Dennis Zhou
2019-02-04 20:20 ` [PATCH 12/12] btrfs: add zstd compression level support Dennis Zhou
2019-02-05 19:26   ` [PATCH v2 " Dennis Zhou
2019-02-06 16:47   ` [PATCH v3 " Dennis Zhou
2019-02-05 14:57 ` [PATCH v2 00/12] " David Sterba
2019-02-05 16:03   ` Dennis Zhou
2019-02-05 16:27     ` David Sterba
2019-02-05 16:30       ` Dennis Zhou
2019-02-05 16:51         ` David Sterba
2019-02-05 17:07           ` David Sterba
2019-02-05 18:27             ` David Sterba
2019-02-05 18:30               ` Dennis Zhou
2019-02-05 20:48                 ` Dennis Zhou
2019-02-06 15:15                   ` David Sterba
2019-02-06 16:51                     ` Dennis Zhou
2019-02-07 16:59 ` David Sterba
2019-02-07 17:36   ` Dennis Zhou
2019-02-08 13:22     ` David Sterba
