* [PATCH 00/11] btrfs: add zstd compression level support
@ 2019-01-28 21:24 Dennis Zhou
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
                   ` (11 more replies)
  0 siblings, 12 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Hi everyone,

This is a respin of [1] which aims to add zstd compression level
support. V3 moves away from using set_level() to resize workspaces in
favor of allocating a workspace of the appropriate level up front and
using a timer to reclaim unused workspaces.

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow each
compression type to manage its workspaces independently. This patch uses
that indirection to implement compression level support for zstd.

As mentioned above, a requirement that sets zstd apart from zlib is that
higher levels of compression require more memory. To manage this, each
compression level has its own queue of workspaces. A global LRU is used
to help with reclaim. To guarantee forward progress, a max level
workspace is preallocated and hidden from the LRU.
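
As a rough illustration, the per-type manager described above could look
like the following. This is only a sketch; the field and constant names
are illustrative and may differ from the actual patch:

	/* e.g. 15 levels, matching the benchmark table below */
	#define ZSTD_BTRFS_MAX_LEVEL	15

	struct zstd_workspace_manager {
		spinlock_t lock;
		/* one queue of idle workspaces per compression level */
		struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
		/* global LRU of idle workspaces for reclaim */
		struct list_head lru_list;
		/* bitmap of levels that currently have idle workspaces */
		unsigned long active_map;
		/* waiters for the preallocated max level workspace */
		wait_queue_head_t wait;
		/* reclaim timer */
		struct timer_list timer;
	};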

When getting a workspace, a bitmap is used to identify which levels have
idle workspaces, scanning up from the requested level. If a workspace of
a higher level is found, it is used, but its last_used time and position
in the LRU are not updated. This provides a mechanism to decrease memory
utilization, as only workspaces sized appropriately for the in-use
compression levels are kept around.
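
A minimal sketch of that scan, reusing the illustrative fields above
(find_next_bit() is the stock kernel bitmap helper; the caller is
assumed to hold the manager lock, and LRU bookkeeping is elided):

	/* Find the smallest populated level >= the requested level. */
	static struct list_head *scan_up(struct zstd_workspace_manager *mgr,
					 unsigned int level)
	{
		struct list_head *ws;
		unsigned int i;

		/* levels are 1-based, bits are 0-based */
		i = find_next_bit(&mgr->active_map, ZSTD_BTRFS_MAX_LEVEL,
				  level - 1);
		if (i >= ZSTD_BTRFS_MAX_LEVEL)
			return NULL;	/* nothing idle at this level or up */

		ws = mgr->idle_ws[i].next;
		list_del(ws);
		if (list_empty(&mgr->idle_ws[i]))
			__clear_bit(i, &mgr->active_map);
		/*
		 * Only an exact-level hit would also refresh last_used and
		 * the LRU position; higher-level hits are left to age out.
		 */
		return ws;
	}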

By knowing which compression levels have available workspaces, we can
recycle rather than always create new workspaces as well as take
advantage of the preallocated max level for forward progress. If we hit
memory pressure, we sleep on the max level workspace. We continue to
rescan in case we can use a smaller workspace, but eventually should be
able to obtain the max level workspace or allocate one again should
memory pressure subside. The memory requirement for decompression is
the same as for level 1, so decompression can use any available
workspace.

Two mechanisms bound the memory footprint. First, the number of
workspaces is capped by the workqueue's concurrency limit, which is
currently 2 (a percpu limit). Second, a reclaim timer is used to free
inactive/improperly sized workspaces. The reclaim timer is set to 67s to
avoid colliding with transaction commit (every 30s) and attempts to
reclaim any unused workspace older than 45s.
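
A sketch of the reclaim pass, using the timings above, the illustrative
manager from the first sketch (as a file-scope wsm), and an assumed
per-workspace last_used timestamp plus lru_list/list linkage:

	#define ZSTD_BTRFS_RECLAIM_JIFFIES	(67 * HZ)
	#define ZSTD_BTRFS_MAX_IDLE_JIFFIES	(45 * HZ)

	/* Timer callback: free workspaces that have sat idle too long. */
	static void zstd_reclaim_timer_fn(struct timer_list *timer)
	{
		unsigned long threshold = jiffies - ZSTD_BTRFS_MAX_IDLE_JIFFIES;
		struct workspace *victim, *tmp;

		spin_lock_bh(&wsm.lock);
		list_for_each_entry_safe(victim, tmp, &wsm.lru_list, lru_list) {
			if (time_after(victim->last_used, threshold))
				continue;	/* used within the last 45s */
			list_del(&victim->lru_list);	/* off the LRU */
			list_del(&victim->list);	/* off its level queue */
			zstd_free_workspace(&victim->list);
			/* clearing the level's active_map bit is elided */
		}
		if (!list_empty(&wsm.lru_list))
			mod_timer(timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
		spin_unlock_bh(&wsm.lock);
	}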

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)
1       2.658        438.47                910.51
2       2.744        364.86                886.55
3       2.801        336.33                828.41
4       2.858        286.71                886.55
5       2.916        212.77                556.84
6       2.363        119.82                990.85
7       3.000        154.06                849.30
8       3.011        159.54                875.03
9       3.025        100.51                940.15
10      3.033        118.97                616.26
11      3.036         94.19                802.11
12      3.037         73.45                931.49
13      3.041         55.17                835.26
14      3.087         44.70                716.78
15      3.126         37.30                878.84

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

This patchset contains the following 11 patches:
  0001-btrfs-add-macros-for-compression-type-and-level.patch
  0002-btrfs-rename-workspaces_list-to-workspace_manager.patch
  0003-btrfs-manage-heuristic-workspace-as-index-0.patch
  0004-btrfs-unify-compression-ops-with-workspace_manager.patch
  0005-btrfs-add-helper-methods-for-workspace-manager-init-.patch
  0006-btrfs-add-compression-interface-in-get-put-_workspac.patch
  0007-btrfs-move-to-fn-pointers-for-get-put-workspaces.patch
  0008-btrfs-plumb-level-through-the-compression-interface.patch
  0009-btrfs-change-set_level-to-bound-the-level-passed-in.patch
  0010-btrfs-zstd-use-the-passed-through-level-instead-of-d.patch
  0011-btrfs-add-zstd-compression-level-support.patch

0001 adds macros for type_level conversion. 0002 renames workspaces_list
to workspace_manager.  0003 moves back to managing the heuristic
workspaces as the index 0 compression level. 0004-0007 unify operations
around the workspace_manager, with 0007 moving to compression types
owning their workspace_manager. 0008-0010 plumb level throughout the
workspace-getting interface and convert set_level() to be a bounding
function rather than one that sets the level on a workspace. 0011 adds
zstd compression level support.

This patchset is on top of kdave#master d73aba1115cf.

diffstats below:

Dennis Zhou (11):
  btrfs: add macros for compression type and level
  btrfs: rename workspaces_list to workspace_manager
  btrfs: manage heuristic workspace as index 0
  btrfs: unify compression ops with workspace_manager
  btrfs: add helper methods for workspace manager init and cleanup
  btrfs: add compression interface in (get/put)_workspace()
  btrfs: move to fn pointers for get/put workspaces
  btrfs: plumb level through the compression interface
  btrfs: change set_level() to bound the level passed in
  btrfs: zstd use the passed through level instead of default
  btrfs: add zstd compression level support

 fs/btrfs/compression.c  | 251 ++++++++++++++++++--------------------
 fs/btrfs/compression.h  |  39 +++++-
 fs/btrfs/ioctl.c        |   2 +-
 fs/btrfs/lzo.c          |  31 ++++-
 fs/btrfs/super.c        |  10 +-
 fs/btrfs/tree-checker.c |   4 +-
 fs/btrfs/zlib.c         |  45 +++++--
 fs/btrfs/zstd.c         | 261 ++++++++++++++++++++++++++++++++++++++--
 8 files changed, 485 insertions(+), 158 deletions(-)

Thanks,
Dennis


* [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:26   ` Nikolay Borisov
                     ` (3 more replies)
  2019-01-28 21:24 ` [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
                   ` (10 subsequent siblings)
  11 siblings, 4 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

It is very easy to miss places that rely on a certain bitshifting for
deciphering the type_level overloading. Make macros handle this instead.
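
For reference, type_level packs the compression type into the low
nibble and the level into the next one. A hypothetical usage (the
packing direction is shown only for illustration; this patch adds the
unpacking macros):

	unsigned int type_level = (level << 4) | type;

	type  = BTRFS_COMPRESS_TYPE(type_level);   /*  type_level & 0xF        */
	level = BTRFS_COMPRESS_LEVEL(type_level);  /* (type_level & 0xF0) >> 4 */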

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 2 +-
 fs/btrfs/compression.h | 3 +++
 fs/btrfs/zlib.c        | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 548057630b69..586f95ac0aea 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1036,9 +1036,9 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 			 unsigned long *total_in,
 			 unsigned long *total_out)
 {
+	int type = BTRFS_COMPRESS_TYPE(type_level);
 	struct list_head *workspace;
 	int ret;
-	int type = type_level & 0xF;
 
 	workspace = find_workspace(type);
 
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index ddda9b80bf20..69a9197dadc3 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -25,6 +25,9 @@
 
 #define	BTRFS_ZLIB_DEFAULT_LEVEL		3
 
+#define BTRFS_COMPRESS_TYPE(type_level)		(type_level & 0xF)
+#define BTRFS_COMPRESS_LEVEL(type_level)	((type_level & 0xF0) >> 4)
+
 struct compressed_bio {
 	/* number of bios pending for this compressed extent */
 	refcount_t pending_bios;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 970ff3e35bb3..1480b3eee306 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -393,7 +393,7 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 static void zlib_set_level(struct list_head *ws, unsigned int type)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	unsigned level = (type & 0xF0) >> 4;
+	unsigned int level = BTRFS_COMPRESS_LEVEL(type);
 
 	if (level > 9)
 		level = 9;
-- 
2.17.1



* [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:27   ` Nikolay Borisov
  2019-01-29 17:58   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

This is in preparation for zstd compression levels. As each level will
require differently sized workspaces, workspaces_list is no longer a
fitting name.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 46 +++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 586f95ac0aea..aced261984e2 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -769,7 +769,7 @@ static struct list_head *alloc_heuristic_ws(void)
 	return ERR_PTR(-ENOMEM);
 }
 
-struct workspaces_list {
+struct workspace_manager {
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
 	/* Number of free workspaces */
@@ -780,9 +780,9 @@ struct workspaces_list {
 	wait_queue_head_t ws_wait;
 };
 
-static struct workspaces_list btrfs_comp_ws[BTRFS_COMPRESS_TYPES];
+static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
 
-static struct workspaces_list btrfs_heuristic_ws;
+static struct workspace_manager btrfs_heuristic_ws;
 
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zlib_compress,
@@ -811,10 +811,10 @@ void __init btrfs_init_compress(void)
 	}
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		INIT_LIST_HEAD(&btrfs_comp_ws[i].idle_ws);
-		spin_lock_init(&btrfs_comp_ws[i].ws_lock);
-		atomic_set(&btrfs_comp_ws[i].total_ws, 0);
-		init_waitqueue_head(&btrfs_comp_ws[i].ws_wait);
+		INIT_LIST_HEAD(&wsm[i].idle_ws);
+		spin_lock_init(&wsm[i].ws_lock);
+		atomic_set(&wsm[i].total_ws, 0);
+		init_waitqueue_head(&wsm[i].ws_wait);
 
 		/*
 		 * Preallocate one workspace for each compression type so
@@ -824,9 +824,9 @@ void __init btrfs_init_compress(void)
 		if (IS_ERR(workspace)) {
 			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 		} else {
-			atomic_set(&btrfs_comp_ws[i].total_ws, 1);
-			btrfs_comp_ws[i].free_ws = 1;
-			list_add(workspace, &btrfs_comp_ws[i].idle_ws);
+			atomic_set(&wsm[i].total_ws, 1);
+			wsm[i].free_ws = 1;
+			list_add(workspace, &wsm[i].idle_ws);
 		}
 	}
 }
@@ -856,11 +856,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
 		free_ws	 = &btrfs_heuristic_ws.free_ws;
 	} else {
-		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
-		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
-		total_ws = &btrfs_comp_ws[idx].total_ws;
-		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
-		free_ws	 = &btrfs_comp_ws[idx].free_ws;
+		idle_ws	 = &wsm[idx].idle_ws;
+		ws_lock	 = &wsm[idx].ws_lock;
+		total_ws = &wsm[idx].total_ws;
+		ws_wait	 = &wsm[idx].ws_wait;
+		free_ws	 = &wsm[idx].free_ws;
 	}
 
 again:
@@ -952,11 +952,11 @@ static void __free_workspace(int type, struct list_head *workspace,
 		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
 		free_ws	 = &btrfs_heuristic_ws.free_ws;
 	} else {
-		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
-		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
-		total_ws = &btrfs_comp_ws[idx].total_ws;
-		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
-		free_ws	 = &btrfs_comp_ws[idx].free_ws;
+		idle_ws	 = &wsm[idx].idle_ws;
+		ws_lock	 = &wsm[idx].ws_lock;
+		total_ws = &wsm[idx].total_ws;
+		ws_wait	 = &wsm[idx].ws_wait;
+		free_ws	 = &wsm[idx].free_ws;
 	}
 
 	spin_lock(ws_lock);
@@ -998,11 +998,11 @@ static void free_workspaces(void)
 	}
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		while (!list_empty(&btrfs_comp_ws[i].idle_ws)) {
-			workspace = btrfs_comp_ws[i].idle_ws.next;
+		while (!list_empty(&wsm[i].idle_ws)) {
+			workspace = wsm[i].idle_ws.next;
 			list_del(workspace);
 			btrfs_compress_op[i]->free_workspace(workspace);
-			atomic_dec(&btrfs_comp_ws[i].total_ws);
+			atomic_dec(&wsm[i].total_ws);
 		}
 	}
 }
-- 
2.17.1



* [PATCH 03/11] btrfs: manage heuristic workspace as index 0
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
  2019-01-28 21:24 ` [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:53   ` Nikolay Borisov
                     ` (2 more replies)
  2019-01-28 21:24 ` [PATCH 04/11] btrfs: unify compression ops with workspace_manager Dennis Zhou
                   ` (8 subsequent siblings)
  11 siblings, 3 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

While the heuristic workspaces aren't really compression workspaces,
they are managed through the same interface. So rather than branching,
let's just handle them once again as the index 0 compression type.
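
For illustration, after this patch the btrfs_compress_op[] index lines
up directly with the btrfs_compression_type value, so the old type - 1
translation in the diff below disappears:

	btrfs_compress_op[0] -> &btrfs_heuristic_compress  /* BTRFS_COMPRESS_NONE = 0 */
	btrfs_compress_op[1] -> &btrfs_zlib_compress       /* BTRFS_COMPRESS_ZLIB = 1 */
	btrfs_compress_op[2] -> &btrfs_lzo_compress        /* BTRFS_COMPRESS_LZO  = 2 */
	btrfs_compress_op[3] -> &btrfs_zstd_compress       /* BTRFS_COMPRESS_ZSTD = 3 */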

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c  | 107 +++++++++++-----------------------------
 fs/btrfs/compression.h  |   3 +-
 fs/btrfs/ioctl.c        |   2 +-
 fs/btrfs/tree-checker.c |   4 +-
 4 files changed, 33 insertions(+), 83 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index aced261984e2..bda7e8d2cbc7 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -37,6 +37,8 @@ const char* btrfs_compress_type2str(enum btrfs_compression_type type)
 	case BTRFS_COMPRESS_ZSTD:
 	case BTRFS_COMPRESS_NONE:
 		return btrfs_compress_types[type];
+	default:
+		return NULL;
 	}
 
 	return NULL;
@@ -769,6 +771,11 @@ static struct list_head *alloc_heuristic_ws(void)
 	return ERR_PTR(-ENOMEM);
 }
 
+const struct btrfs_compress_op btrfs_heuristic_compress = {
+	.alloc_workspace = alloc_heuristic_ws,
+	.free_workspace = free_heuristic_ws,
+};
+
 struct workspace_manager {
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
@@ -782,9 +789,8 @@ struct workspace_manager {
 
 static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
 
-static struct workspace_manager btrfs_heuristic_ws;
-
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
+	&btrfs_heuristic_compress,
 	&btrfs_zlib_compress,
 	&btrfs_lzo_compress,
 	&btrfs_zstd_compress,
@@ -795,21 +801,6 @@ void __init btrfs_init_compress(void)
 	struct list_head *workspace;
 	int i;
 
-	INIT_LIST_HEAD(&btrfs_heuristic_ws.idle_ws);
-	spin_lock_init(&btrfs_heuristic_ws.ws_lock);
-	atomic_set(&btrfs_heuristic_ws.total_ws, 0);
-	init_waitqueue_head(&btrfs_heuristic_ws.ws_wait);
-
-	workspace = alloc_heuristic_ws();
-	if (IS_ERR(workspace)) {
-		pr_warn(
-	"BTRFS: cannot preallocate heuristic workspace, will try later\n");
-	} else {
-		atomic_set(&btrfs_heuristic_ws.total_ws, 1);
-		btrfs_heuristic_ws.free_ws = 1;
-		list_add(workspace, &btrfs_heuristic_ws.idle_ws);
-	}
-
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
 		INIT_LIST_HEAD(&wsm[i].idle_ws);
 		spin_lock_init(&wsm[i].ws_lock);
@@ -837,11 +828,10 @@ void __init btrfs_init_compress(void)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *__find_workspace(int type, bool heuristic)
+static struct list_head *find_workspace(int type)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
-	int idx = type - 1;
 	unsigned nofs_flag;
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -849,19 +839,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	if (heuristic) {
-		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
-		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
-		total_ws = &btrfs_heuristic_ws.total_ws;
-		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
-		free_ws	 = &btrfs_heuristic_ws.free_ws;
-	} else {
-		idle_ws	 = &wsm[idx].idle_ws;
-		ws_lock	 = &wsm[idx].ws_lock;
-		total_ws = &wsm[idx].total_ws;
-		ws_wait	 = &wsm[idx].ws_wait;
-		free_ws	 = &wsm[idx].free_ws;
-	}
+	idle_ws	 = &wsm[type].idle_ws;
+	ws_lock	 = &wsm[type].ws_lock;
+	total_ws = &wsm[type].total_ws;
+	ws_wait	 = &wsm[type].ws_wait;
+	free_ws	 = &wsm[type].free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -892,10 +874,7 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	if (heuristic)
-		workspace = alloc_heuristic_ws();
-	else
-		workspace = btrfs_compress_op[idx]->alloc_workspace();
+	workspace = btrfs_compress_op[type]->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -926,38 +905,23 @@ static struct list_head *__find_workspace(int type, bool heuristic)
 	return workspace;
 }
 
-static struct list_head *find_workspace(int type)
-{
-	return __find_workspace(type, false);
-}
-
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void __free_workspace(int type, struct list_head *workspace,
-			     bool heuristic)
+static void free_workspace(int type, struct list_head *workspace)
 {
-	int idx = type - 1;
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
 	atomic_t *total_ws;
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	if (heuristic) {
-		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
-		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
-		total_ws = &btrfs_heuristic_ws.total_ws;
-		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
-		free_ws	 = &btrfs_heuristic_ws.free_ws;
-	} else {
-		idle_ws	 = &wsm[idx].idle_ws;
-		ws_lock	 = &wsm[idx].ws_lock;
-		total_ws = &wsm[idx].total_ws;
-		ws_wait	 = &wsm[idx].ws_wait;
-		free_ws	 = &wsm[idx].free_ws;
-	}
+	idle_ws	 = &wsm[type].idle_ws;
+	ws_lock	 = &wsm[type].ws_lock;
+	total_ws = &wsm[type].total_ws;
+	ws_wait	 = &wsm[type].ws_wait;
+	free_ws	 = &wsm[type].free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
@@ -968,20 +932,12 @@ static void __free_workspace(int type, struct list_head *workspace,
 	}
 	spin_unlock(ws_lock);
 
-	if (heuristic)
-		free_heuristic_ws(workspace);
-	else
-		btrfs_compress_op[idx]->free_workspace(workspace);
+	btrfs_compress_op[type]->free_workspace(workspace);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
 }
 
-static void free_workspace(int type, struct list_head *ws)
-{
-	return __free_workspace(type, ws, false);
-}
-
 /*
  * cleanup function for module exit
  */
@@ -990,13 +946,6 @@ static void free_workspaces(void)
 	struct list_head *workspace;
 	int i;
 
-	while (!list_empty(&btrfs_heuristic_ws.idle_ws)) {
-		workspace = btrfs_heuristic_ws.idle_ws.next;
-		list_del(workspace);
-		free_heuristic_ws(workspace);
-		atomic_dec(&btrfs_heuristic_ws.total_ws);
-	}
-
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
 		while (!list_empty(&wsm[i].idle_ws)) {
 			workspace = wsm[i].idle_ws.next;
@@ -1042,8 +991,8 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 
 	workspace = find_workspace(type);
 
-	btrfs_compress_op[type - 1]->set_level(workspace, type_level);
-	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
+	btrfs_compress_op[type]->set_level(workspace, type_level);
+	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
 						      total_in, total_out);
@@ -1072,7 +1021,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int type = cb->compress_type;
 
 	workspace = find_workspace(type);
-	ret = btrfs_compress_op[type - 1]->decompress_bio(workspace, cb);
+	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
 	free_workspace(type, workspace);
 
 	return ret;
@@ -1091,7 +1040,7 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 
 	workspace = find_workspace(type);
 
-	ret = btrfs_compress_op[type-1]->decompress(workspace, data_in,
+	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
 
@@ -1512,7 +1461,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = __find_workspace(0, true);
+	struct list_head *ws_list = find_workspace(0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
@@ -1581,7 +1530,7 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	}
 
 out:
-	__free_workspace(0, ws_list, true);
+	free_workspace(0, ws_list);
 	return ret;
 }
 
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 69a9197dadc3..53a8b9e93217 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -97,7 +97,7 @@ enum btrfs_compression_type {
 	BTRFS_COMPRESS_ZLIB  = 1,
 	BTRFS_COMPRESS_LZO   = 2,
 	BTRFS_COMPRESS_ZSTD  = 3,
-	BTRFS_COMPRESS_TYPES = 3,
+	BTRFS_COMPRESS_TYPES = 4,
 };
 
 struct btrfs_compress_op {
@@ -125,6 +125,7 @@ struct btrfs_compress_op {
 	void (*set_level)(struct list_head *ws, unsigned int type);
 };
 
+extern const struct btrfs_compress_op btrfs_heuristic_compress;
 extern const struct btrfs_compress_op btrfs_zlib_compress;
 extern const struct btrfs_compress_op btrfs_lzo_compress;
 extern const struct btrfs_compress_op btrfs_zstd_compress;
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 9c8e1734429c..20081465a451 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -1410,7 +1410,7 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
 		return -EINVAL;
 
 	if (do_compress) {
-		if (range->compress_type > BTRFS_COMPRESS_TYPES)
+		if (range->compress_type >= BTRFS_COMPRESS_TYPES)
 			return -EINVAL;
 		if (range->compress_type)
 			compress_type = range->compress_type;
diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
index a62e1e837a89..c88e146d8e99 100644
--- a/fs/btrfs/tree-checker.c
+++ b/fs/btrfs/tree-checker.c
@@ -133,9 +133,9 @@ static int check_extent_data_item(struct btrfs_fs_info *fs_info,
 	 * Support for new compression/encryption must introduce incompat flag,
 	 * and must be caught in open_ctree().
 	 */
-	if (btrfs_file_extent_compression(leaf, fi) > BTRFS_COMPRESS_TYPES) {
+	if (btrfs_file_extent_compression(leaf, fi) >= BTRFS_COMPRESS_TYPES) {
 		file_extent_err(fs_info, leaf, slot,
-	"invalid compression for file extent, have %u expect range [0, %u]",
+	"invalid compression for file extent, have %u expect range [0, %u)",
 			btrfs_file_extent_compression(leaf, fi),
 			BTRFS_COMPRESS_TYPES);
 		return -EUCLEAN;
-- 
2.17.1



* [PATCH 04/11] btrfs: unify compression ops with workspace_manager
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (2 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:54   ` Nikolay Borisov
  2019-01-29 18:03   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
                   ` (7 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Make the workspace_manager own the interface operations rather than
managing index-paired arrays for the workspace_manager and compression
operations.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index bda7e8d2cbc7..b7e986e16640 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -777,6 +777,7 @@ const struct btrfs_compress_op btrfs_heuristic_compress = {
 };
 
 struct workspace_manager {
+	const struct btrfs_compress_op *ops;
 	struct list_head idle_ws;
 	spinlock_t ws_lock;
 	/* Number of free workspaces */
@@ -802,6 +803,8 @@ void __init btrfs_init_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
+		wsm[i].ops = btrfs_compress_op[i];
+
 		INIT_LIST_HEAD(&wsm[i].idle_ws);
 		spin_lock_init(&wsm[i].ws_lock);
 		atomic_set(&wsm[i].total_ws, 0);
@@ -811,7 +814,7 @@ void __init btrfs_init_compress(void)
 		 * Preallocate one workspace for each compression type so
 		 * we can guarantee forward progress in the worst case
 		 */
-		workspace = btrfs_compress_op[i]->alloc_workspace();
+		workspace = wsm[i].ops->alloc_workspace();
 		if (IS_ERR(workspace)) {
 			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 		} else {
@@ -874,7 +877,7 @@ static struct list_head *find_workspace(int type)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = btrfs_compress_op[type]->alloc_workspace();
+	workspace = wsm[type].ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -932,7 +935,7 @@ static void free_workspace(int type, struct list_head *workspace)
 	}
 	spin_unlock(ws_lock);
 
-	btrfs_compress_op[type]->free_workspace(workspace);
+	wsm[type].ops->free_workspace(workspace);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
@@ -950,7 +953,7 @@ static void free_workspaces(void)
 		while (!list_empty(&wsm[i].idle_ws)) {
 			workspace = wsm[i].idle_ws.next;
 			list_del(workspace);
-			btrfs_compress_op[i]->free_workspace(workspace);
+			wsm[i].ops->free_workspace(workspace);
 			atomic_dec(&wsm[i].total_ws);
 		}
 	}
-- 
2.17.1



* [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (3 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 04/11] btrfs: unify compression ops with workspace_manager Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:58   ` Nikolay Borisov
  2019-01-29 18:04   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Workspace manager init and cleanup code is open-coded inside a for loop
over the compression types. This forces each compression type to rely on
the same workspace manager implementation. This patch creates helper
methods that will serve as the generic implementation for btrfs
workspace management.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 81 ++++++++++++++++++++++--------------------
 1 file changed, 42 insertions(+), 39 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index b7e986e16640..63fa3eaeeacc 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -797,31 +797,41 @@ static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zstd_compress,
 };
 
-void __init btrfs_init_compress(void)
+static void btrfs_init_workspace_manager(int type)
 {
+	struct workspace_manager *wsman = &wsm[type];
 	struct list_head *workspace;
-	int i;
 
-	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		wsm[i].ops = btrfs_compress_op[i];
+	wsman->ops = btrfs_compress_op[type];
 
-		INIT_LIST_HEAD(&wsm[i].idle_ws);
-		spin_lock_init(&wsm[i].ws_lock);
-		atomic_set(&wsm[i].total_ws, 0);
-		init_waitqueue_head(&wsm[i].ws_wait);
+	INIT_LIST_HEAD(&wsman->idle_ws);
+	spin_lock_init(&wsman->ws_lock);
+	atomic_set(&wsman->total_ws, 0);
+	init_waitqueue_head(&wsman->ws_wait);
 
-		/*
-		 * Preallocate one workspace for each compression type so
-		 * we can guarantee forward progress in the worst case
-		 */
-		workspace = wsm[i].ops->alloc_workspace();
-		if (IS_ERR(workspace)) {
-			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
-		} else {
-			atomic_set(&wsm[i].total_ws, 1);
-			wsm[i].free_ws = 1;
-			list_add(workspace, &wsm[i].idle_ws);
-		}
+	/*
+	 * Preallocate one workspace for each compression type so
+	 * we can guarantee forward progress in the worst case
+	 */
+	workspace = wsman->ops->alloc_workspace();
+	if (IS_ERR(workspace)) {
+		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
+	} else {
+		atomic_set(&wsman->total_ws, 1);
+		wsman->free_ws = 1;
+		list_add(workspace, &wsman->idle_ws);
+	}
+}
+
+static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
+{
+	struct list_head *ws;
+
+	while (!list_empty(&wsman->idle_ws)) {
+		ws = wsman->idle_ws.next;
+		list_del(ws);
+		wsman->ops->free_workspace(ws);
+		atomic_dec(&wsman->total_ws);
 	}
 }
 
@@ -941,24 +951,6 @@ static void free_workspace(int type, struct list_head *workspace)
 	cond_wake_up(ws_wait);
 }
 
-/*
- * cleanup function for module exit
- */
-static void free_workspaces(void)
-{
-	struct list_head *workspace;
-	int i;
-
-	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
-		while (!list_empty(&wsm[i].idle_ws)) {
-			workspace = wsm[i].idle_ws.next;
-			list_del(workspace);
-			wsm[i].ops->free_workspace(workspace);
-			atomic_dec(&wsm[i].total_ws);
-		}
-	}
-}
-
 /*
  * Given an address space and start and length, compress the bytes into @pages
  * that are allocated on demand.
@@ -1051,9 +1043,20 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	return ret;
 }
 
+void __init btrfs_init_compress(void)
+{
+	int i;
+
+	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++)
+		btrfs_init_workspace_manager(i);
+}
+
 void __cold btrfs_exit_compress(void)
 {
-	free_workspaces();
+	int i;
+
+	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++)
+		btrfs_cleanup_workspace_manager(&wsm[i]);
 }
 
 /*
-- 
2.17.1



* [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace()
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (4 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29 18:06   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

There are two levels of workspace management. First, alloc()/free(),
which are responsible for actually creating and destroying workspaces.
Second, at a higher level, get()/put(), which is the compression code
asking for a workspace from a workspace_manager.

The compression code shouldn't really care how it gets a workspace, only
that it got one. This adds get_workspace() and put_workspace() to be the
higher level interface which is responsible for indexing into the
appropriate compression type. It also introduces btrfs_put_workspace()
and btrfs_get_workspace() to be the generic implementations of the
higher interface.
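
As a rough call flow of the two levels after this patch (names as in
the diff below):

	btrfs_compress_pages()
	    get_workspace(type)                  /* higher level interface */
	        btrfs_get_workspace(&wsm[type])  /* generic implementation */
	            reuse a workspace from idle_ws, or
	            wsm[type].ops->alloc_workspace()  /* lower level */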

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 57 +++++++++++++++++++++++++-----------------
 1 file changed, 34 insertions(+), 23 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 63fa3eaeeacc..2e748d8785f0 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -841,7 +841,7 @@ static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *find_workspace(int type)
+static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -852,11 +852,11 @@ static struct list_head *find_workspace(int type)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsm[type].idle_ws;
-	ws_lock	 = &wsm[type].ws_lock;
-	total_ws = &wsm[type].total_ws;
-	ws_wait	 = &wsm[type].ws_wait;
-	free_ws	 = &wsm[type].free_ws;
+	idle_ws	 = &wsman->idle_ws;
+	ws_lock	 = &wsman->ws_lock;
+	total_ws = &wsman->total_ws;
+	ws_wait	 = &wsman->ws_wait;
+	free_ws	 = &wsman->free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -887,7 +887,7 @@ static struct list_head *find_workspace(int type)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsm[type].ops->alloc_workspace();
+	workspace = wsman->ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -918,11 +918,17 @@ static struct list_head *find_workspace(int type)
 	return workspace;
 }
 
+static struct list_head *get_workspace(int type)
+{
+	return btrfs_get_workspace(&wsm[type]);
+}
+
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void free_workspace(int type, struct list_head *workspace)
+static void btrfs_put_workspace(struct workspace_manager *wsman,
+				struct list_head *ws)
 {
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -930,27 +936,32 @@ static void free_workspace(int type, struct list_head *workspace)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsm[type].idle_ws;
-	ws_lock	 = &wsm[type].ws_lock;
-	total_ws = &wsm[type].total_ws;
-	ws_wait	 = &wsm[type].ws_wait;
-	free_ws	 = &wsm[type].free_ws;
+	idle_ws	 = &wsman->idle_ws;
+	ws_lock	 = &wsman->ws_lock;
+	total_ws = &wsman->total_ws;
+	ws_wait	 = &wsman->ws_wait;
+	free_ws	 = &wsman->free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
-		list_add(workspace, idle_ws);
+		list_add(ws, idle_ws);
 		(*free_ws)++;
 		spin_unlock(ws_lock);
 		goto wake;
 	}
 	spin_unlock(ws_lock);
 
-	wsm[type].ops->free_workspace(workspace);
+	wsman->ops->free_workspace(ws);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
 }
 
+static void put_workspace(int type, struct list_head *ws)
+{
+	return btrfs_put_workspace(&wsm[type], ws);
+}
+
 /*
  * Given an address space and start and length, compress the bytes into @pages
  * that are allocated on demand.
@@ -984,14 +995,14 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 
 	btrfs_compress_op[type]->set_level(workspace, type_level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
 						      total_in, total_out);
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 	return ret;
 }
 
@@ -1015,9 +1026,9 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int ret;
 	int type = cb->compress_type;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 
 	return ret;
 }
@@ -1033,13 +1044,13 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = find_workspace(type);
+	workspace = get_workspace(type);
 
 	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
 
-	free_workspace(type, workspace);
+	put_workspace(type, workspace);
 	return ret;
 }
 
@@ -1467,7 +1478,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = find_workspace(0);
+	struct list_head *ws_list = get_workspace(0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
@@ -1536,7 +1547,7 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	}
 
 out:
-	free_workspace(0, ws_list);
+	put_workspace(0, ws_list);
 	return ret;
 }
 
-- 
2.17.1



* [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (5 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  8:22   ` Nikolay Borisov
  2019-01-29 18:17   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 08/11] btrfs: plumb level through the compression interface Dennis Zhou
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

The previous patch added generic helpers for get_workspace() and
put_workspace(). Now, we can migrate ownership of the workspace_manager
to the compression type code, as the compression code itself doesn't
care beyond being able to get a workspace. The init/cleanup and get/put
methods are abstracted so each compression algorithm can decide how it
wants to manage its workspaces.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 101 +++++++++++++++++++++++------------------
 fs/btrfs/compression.h |  26 +++++++++++
 fs/btrfs/lzo.c         |  26 +++++++++++
 fs/btrfs/zlib.c        |  26 +++++++++++
 fs/btrfs/zstd.c        |  26 +++++++++++
 5 files changed, 160 insertions(+), 45 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 2e748d8785f0..ab694760ffdb 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -732,6 +732,28 @@ struct heuristic_ws {
 	struct list_head list;
 };
 
+static struct workspace_manager heuristic_wsm;
+
+static void heuristic_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&heuristic_wsm, &btrfs_heuristic_compress);
+}
+
+static void heuristic_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&heuristic_wsm);
+}
+
+static struct list_head *heuristic_get_workspace(void)
+{
+	return btrfs_get_workspace(&heuristic_wsm);
+}
+
+static void heuristic_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&heuristic_wsm, ws);
+}
+
 static void free_heuristic_ws(struct list_head *ws)
 {
 	struct heuristic_ws *workspace;
@@ -772,24 +794,14 @@ static struct list_head *alloc_heuristic_ws(void)
 }
 
 const struct btrfs_compress_op btrfs_heuristic_compress = {
+	.init_workspace_manager = heuristic_init_workspace_manager,
+	.cleanup_workspace_manager = heuristic_cleanup_workspace_manager,
+	.get_workspace = heuristic_get_workspace,
+	.put_workspace = heuristic_put_workspace,
 	.alloc_workspace = alloc_heuristic_ws,
 	.free_workspace = free_heuristic_ws,
 };
 
-struct workspace_manager {
-	const struct btrfs_compress_op *ops;
-	struct list_head idle_ws;
-	spinlock_t ws_lock;
-	/* Number of free workspaces */
-	int free_ws;
-	/* Total number of allocated workspaces */
-	atomic_t total_ws;
-	/* Waiters for a free workspace */
-	wait_queue_head_t ws_wait;
-};
-
-static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
-
 static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_heuristic_compress,
 	&btrfs_zlib_compress,
@@ -797,33 +809,33 @@ static const struct btrfs_compress_op * const btrfs_compress_op[] = {
 	&btrfs_zstd_compress,
 };
 
-static void btrfs_init_workspace_manager(int type)
+void btrfs_init_workspace_manager(struct workspace_manager *wsm,
+				  const struct btrfs_compress_op *ops)
 {
-	struct workspace_manager *wsman = &wsm[type];
 	struct list_head *workspace;
 
-	wsman->ops = btrfs_compress_op[type];
+	wsm->ops = ops;
 
-	INIT_LIST_HEAD(&wsman->idle_ws);
-	spin_lock_init(&wsman->ws_lock);
-	atomic_set(&wsman->total_ws, 0);
-	init_waitqueue_head(&wsman->ws_wait);
+	INIT_LIST_HEAD(&wsm->idle_ws);
+	spin_lock_init(&wsm->ws_lock);
+	atomic_set(&wsm->total_ws, 0);
+	init_waitqueue_head(&wsm->ws_wait);
 
 	/*
 	 * Preallocate one workspace for each compression type so
 	 * we can guarantee forward progress in the worst case
 	 */
-	workspace = wsman->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace();
 	if (IS_ERR(workspace)) {
 		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 	} else {
-		atomic_set(&wsman->total_ws, 1);
-		wsman->free_ws = 1;
-		list_add(workspace, &wsman->idle_ws);
+		atomic_set(&wsm->total_ws, 1);
+		wsm->free_ws = 1;
+		list_add(workspace, &wsm->idle_ws);
 	}
 }
 
-static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
+void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
 {
 	struct list_head *ws;
 
@@ -841,7 +853,7 @@ static void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -852,11 +864,11 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsman->idle_ws;
-	ws_lock	 = &wsman->ws_lock;
-	total_ws = &wsman->total_ws;
-	ws_wait	 = &wsman->ws_wait;
-	free_ws	 = &wsman->free_ws;
+	idle_ws	 = &wsm->idle_ws;
+	ws_lock	 = &wsm->ws_lock;
+	total_ws = &wsm->total_ws;
+	ws_wait	 = &wsm->ws_wait;
+	free_ws	 = &wsm->free_ws;
 
 again:
 	spin_lock(ws_lock);
@@ -887,7 +899,7 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsman->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace();
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -920,15 +932,14 @@ static struct list_head *btrfs_get_workspace(struct workspace_manager *wsman)
 
 static struct list_head *get_workspace(int type)
 {
-	return btrfs_get_workspace(&wsm[type]);
+	return btrfs_compress_op[type]->get_workspace();
 }
 
 /*
  * put a workspace struct back on the list or free it if we have enough
  * idle ones sitting around
  */
-static void btrfs_put_workspace(struct workspace_manager *wsman,
-				struct list_head *ws)
+void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws)
 {
 	struct list_head *idle_ws;
 	spinlock_t *ws_lock;
@@ -936,11 +947,11 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 	wait_queue_head_t *ws_wait;
 	int *free_ws;
 
-	idle_ws	 = &wsman->idle_ws;
-	ws_lock	 = &wsman->ws_lock;
-	total_ws = &wsman->total_ws;
-	ws_wait	 = &wsman->ws_wait;
-	free_ws	 = &wsman->free_ws;
+	idle_ws	 = &wsm->idle_ws;
+	ws_lock	 = &wsm->ws_lock;
+	total_ws = &wsm->total_ws;
+	ws_wait	 = &wsm->ws_wait;
+	free_ws	 = &wsm->free_ws;
 
 	spin_lock(ws_lock);
 	if (*free_ws <= num_online_cpus()) {
@@ -951,7 +962,7 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 	}
 	spin_unlock(ws_lock);
 
-	wsman->ops->free_workspace(ws);
+	wsm->ops->free_workspace(ws);
 	atomic_dec(total_ws);
 wake:
 	cond_wake_up(ws_wait);
@@ -959,7 +970,7 @@ static void btrfs_put_workspace(struct workspace_manager *wsman,
 
 static void put_workspace(int type, struct list_head *ws)
 {
-	return btrfs_put_workspace(&wsm[type], ws);
+	return btrfs_compress_op[type]->put_workspace(ws);
 }
 
 /*
@@ -1059,7 +1070,7 @@ void __init btrfs_init_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++)
-		btrfs_init_workspace_manager(i);
+		btrfs_compress_op[i]->init_workspace_manager();
 }
 
 void __cold btrfs_exit_compress(void)
@@ -1067,7 +1078,7 @@ void __cold btrfs_exit_compress(void)
 	int i;
 
 	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++)
-		btrfs_cleanup_workspace_manager(&wsm[i]);
+		btrfs_compress_op[i]->cleanup_workspace_manager();
 }
 
 /*
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 53a8b9e93217..05342ad081d6 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -100,7 +100,33 @@ enum btrfs_compression_type {
 	BTRFS_COMPRESS_TYPES = 4,
 };
 
+struct workspace_manager {
+	const struct btrfs_compress_op *ops;
+	struct list_head idle_ws;
+	spinlock_t ws_lock;
+	/* Number of free workspaces */
+	int free_ws;
+	/* Total number of allocated workspaces */
+	atomic_t total_ws;
+	/* Waiters for a free workspace */
+	wait_queue_head_t ws_wait;
+};
+
+void btrfs_init_workspace_manager(struct workspace_manager *wsm,
+				  const struct btrfs_compress_op *ops);
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm);
+void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws);
+void btrfs_cleanup_workspace_manager(struct workspace_manager *wsm);
+
 struct btrfs_compress_op {
+	void (*init_workspace_manager)(void);
+
+	void (*cleanup_workspace_manager)(void);
+
+	struct list_head *(*get_workspace)(void);
+
+	void (*put_workspace)(struct list_head *ws);
+
 	struct list_head *(*alloc_workspace)(void);
 
 	void (*free_workspace)(struct list_head *workspace);
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 90639140439f..f0837b2c8e94 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -61,6 +61,28 @@ struct workspace {
 	struct list_head list;
 };
 
+static struct workspace_manager wsm;
+
+static void lzo_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_lzo_compress);
+}
+
+static void lzo_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *lzo_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void lzo_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void lzo_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -490,6 +512,10 @@ static void lzo_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_lzo_compress = {
+	.init_workspace_manager	= lzo_init_workspace_manager,
+	.cleanup_workspace_manager = lzo_cleanup_workspace_manager,
+	.get_workspace		= lzo_get_workspace,
+	.put_workspace		= lzo_put_workspace,
 	.alloc_workspace	= lzo_alloc_workspace,
 	.free_workspace		= lzo_free_workspace,
 	.compress_pages		= lzo_compress_pages,
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 1480b3eee306..04687bf692e3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -27,6 +27,28 @@ struct workspace {
 	int level;
 };
 
+static struct workspace_manager wsm;
+
+static void zlib_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_zlib_compress);
+}
+
+static void zlib_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *zlib_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void zlib_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void zlib_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -402,6 +424,10 @@ static void zlib_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_zlib_compress = {
+	.init_workspace_manager	= zlib_init_workspace_manager,
+	.cleanup_workspace_manager = zlib_cleanup_workspace_manager,
+	.get_workspace		= zlib_get_workspace,
+	.put_workspace		= zlib_put_workspace,
 	.alloc_workspace	= zlib_alloc_workspace,
 	.free_workspace		= zlib_free_workspace,
 	.compress_pages		= zlib_compress_pages,
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index af6ec59972f5..b06eaf171be7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -41,6 +41,28 @@ struct workspace {
 	ZSTD_outBuffer out_buf;
 };
 
+static struct workspace_manager wsm;
+
+static void zstd_init_workspace_manager(void)
+{
+	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+}
+
+static void zstd_cleanup_workspace_manager(void)
+{
+	btrfs_cleanup_workspace_manager(&wsm);
+}
+
+static struct list_head *zstd_get_workspace(void)
+{
+	return btrfs_get_workspace(&wsm);
+}
+
+static void zstd_put_workspace(struct list_head *ws)
+{
+	btrfs_put_workspace(&wsm, ws);
+}
+
 static void zstd_free_workspace(struct list_head *ws)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
@@ -424,6 +446,10 @@ static void zstd_set_level(struct list_head *ws, unsigned int type)
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
+	.init_workspace_manager = zstd_init_workspace_manager,
+	.cleanup_workspace_manager = zstd_cleanup_workspace_manager,
+	.get_workspace = zstd_get_workspace,
+	.put_workspace = zstd_put_workspace,
 	.alloc_workspace = zstd_alloc_workspace,
 	.free_workspace = zstd_free_workspace,
 	.compress_pages = zstd_compress_pages,
-- 
2.17.1



* [PATCH 08/11] btrfs: plumb level through the compression interface
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (6 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  8:08   ` Nikolay Borisov
  2019-01-29 18:17   ` Josef Bacik
  2019-01-28 21:24 ` [PATCH 09/11] btrfs: change set_level() to bound the level passed in Dennis Zhou
                   ` (3 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Zlib compression supports multiple levels, but doesn't require changes
in how a workspace itself is created and managed. Zstd introduces a
different memory requirement such that higher levels of compression
require more memory. This requires changes in how the alloc()/get()
methods work for zstd. This patch plumbs the compression level through
the interface as a parameter in preparation for zstd compression levels.
This gives the compression types the opportunity to create/manage
workspaces based on the compression level.

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 31 ++++++++++++++++---------------
 fs/btrfs/compression.h |  7 ++++---
 fs/btrfs/lzo.c         |  6 +++---
 fs/btrfs/zlib.c        |  7 ++++---
 fs/btrfs/zstd.c        |  6 +++---
 5 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index ab694760ffdb..e509071eaa69 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -744,9 +744,9 @@ static void heuristic_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&heuristic_wsm);
 }
 
-static struct list_head *heuristic_get_workspace(void)
+static struct list_head *heuristic_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&heuristic_wsm);
+	return btrfs_get_workspace(&heuristic_wsm, level);
 }
 
 static void heuristic_put_workspace(struct list_head *ws)
@@ -766,7 +766,7 @@ static void free_heuristic_ws(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *alloc_heuristic_ws(void)
+static struct list_head *alloc_heuristic_ws(unsigned int level)
 {
 	struct heuristic_ws *ws;
 
@@ -825,7 +825,7 @@ void btrfs_init_workspace_manager(struct workspace_manager *wsm,
 	 * Preallocate one workspace for each compression type so
 	 * we can guarantee forward progress in the worst case
 	 */
-	workspace = wsm->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace(0);
 	if (IS_ERR(workspace)) {
 		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
 	} else {
@@ -853,7 +853,8 @@ void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
  * Preallocation makes a forward progress guarantees and we do not return
  * errors.
  */
-struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
+				      unsigned int level)
 {
 	struct list_head *workspace;
 	int cpus = num_online_cpus();
@@ -899,7 +900,7 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 	 * context of btrfs_compress_bio/btrfs_compress_pages
 	 */
 	nofs_flag = memalloc_nofs_save();
-	workspace = wsm->ops->alloc_workspace();
+	workspace = wsm->ops->alloc_workspace(level);
 	memalloc_nofs_restore(nofs_flag);
 
 	if (IS_ERR(workspace)) {
@@ -930,9 +931,9 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
 	return workspace;
 }
 
-static struct list_head *get_workspace(int type)
+static struct list_head *get_workspace(int type, int level)
 {
-	return btrfs_compress_op[type]->get_workspace();
+	return btrfs_compress_op[type]->get_workspace(level);
 }
 
 /*
@@ -1003,12 +1004,13 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 			 unsigned long *total_out)
 {
 	int type = BTRFS_COMPRESS_TYPE(type_level);
+	int level = BTRFS_COMPRESS_LEVEL(type_level);
 	struct list_head *workspace;
 	int ret;
 
-	workspace = get_workspace(type);
+	workspace = get_workspace(type, level);
 
-	btrfs_compress_op[type]->set_level(workspace, type_level);
+	btrfs_compress_op[type]->set_level(workspace, level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
@@ -1037,7 +1039,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
 	int ret;
 	int type = cb->compress_type;
 
-	workspace = get_workspace(type);
+	workspace = get_workspace(type, 0);
 	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
 	put_workspace(type, workspace);
 
@@ -1055,13 +1057,12 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = get_workspace(type);
-
+	workspace = get_workspace(type, 0);
 	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
 						  dest_page, start_byte,
 						  srclen, destlen);
-
 	put_workspace(type, workspace);
+
 	return ret;
 }
 
@@ -1489,7 +1490,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
  */
 int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 {
-	struct list_head *ws_list = get_workspace(0);
+	struct list_head *ws_list = get_workspace(0, 0);
 	struct heuristic_ws *ws;
 	u32 i;
 	u8 byte;
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 05342ad081d6..e3627139bc5c 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -114,7 +114,8 @@ struct workspace_manager {
 
 void btrfs_init_workspace_manager(struct workspace_manager *wsm,
 				  const struct btrfs_compress_op *ops);
-struct list_head *btrfs_get_workspace(struct workspace_manager *wsm);
+struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
+				      unsigned int level);
 void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws);
 void btrfs_cleanup_workspace_manager(struct workspace_manager *wsm);
 
@@ -123,11 +124,11 @@ struct btrfs_compress_op {
 
 	void (*cleanup_workspace_manager)(void);
 
-	struct list_head *(*get_workspace)(void);
+	struct list_head *(*get_workspace)(unsigned int level);
 
 	void (*put_workspace)(struct list_head *ws);
 
-	struct list_head *(*alloc_workspace)(void);
+	struct list_head *(*alloc_workspace)(unsigned int level);
 
 	void (*free_workspace)(struct list_head *workspace);
 
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index f0837b2c8e94..f132af45a924 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -73,9 +73,9 @@ static void lzo_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *lzo_get_workspace(void)
+static struct list_head *lzo_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void lzo_put_workspace(struct list_head *ws)
@@ -93,7 +93,7 @@ static void lzo_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *lzo_alloc_workspace(void)
+static struct list_head *lzo_alloc_workspace(unsigned int level)
 {
 	struct workspace *workspace;
 
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 04687bf692e3..e2173d0c4fd3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -39,9 +39,9 @@ static void zlib_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *zlib_get_workspace(void)
+static struct list_head *zlib_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void zlib_put_workspace(struct list_head *ws)
@@ -58,7 +58,7 @@ static void zlib_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *zlib_alloc_workspace(void)
+static struct list_head *zlib_alloc_workspace(unsigned int level)
 {
 	struct workspace *workspace;
 	int workspacesize;
@@ -71,6 +71,7 @@ static struct list_head *zlib_alloc_workspace(void)
 			zlib_inflate_workspacesize());
 	workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
 	if (!workspace->strm.workspace || !workspace->buf)
 		goto fail;
 
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index b06eaf171be7..404101864220 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -53,9 +53,9 @@ static void zstd_cleanup_workspace_manager(void)
 	btrfs_cleanup_workspace_manager(&wsm);
 }
 
-static struct list_head *zstd_get_workspace(void)
+static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm);
+	return btrfs_get_workspace(&wsm, level);
 }
 
 static void zstd_put_workspace(struct list_head *ws)
@@ -72,7 +72,7 @@ static void zstd_free_workspace(struct list_head *ws)
 	kfree(workspace);
 }
 
-static struct list_head *zstd_alloc_workspace(void)
+static struct list_head *zstd_alloc_workspace(unsigned int level)
 {
 	ZSTD_parameters params =
 			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 09/11] btrfs: change set_level() to bound the level passed in
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (7 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 08/11] btrfs: plumb level through the compression interface Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  8:14   ` Nikolay Borisov
  2019-01-28 21:24 ` [PATCH 10/11] btrfs: zstd use the passed through level instead of default Dennis Zhou
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Currently, the only user of set_level() is zlib, which sets an internal
workspace parameter. As level is now plumbed into get_workspace(), this
can be handled there rather than separately.

This repurposes set_level() to bound the level passed in so it can be
used when setting the mount's compression level as well as verifying
the level before getting a workspace. The other benefit is this separates
the meaning of compress(0) and get_workspace(0). The former means we
want to use the default compression level of the compression type. The
latter means we can use any workspace available.
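
As an illustrative sketch (not part of this patch), the bounded
contract behaves as follows for zlib, given the clamping introduced
below:

	unsigned int level;

	level = btrfs_zlib_compress.set_level(0);	/* 0 maps to the default, 3 */
	level = btrfs_zlib_compress.set_level(12);	/* clamped to the max, 9 */
	level = btrfs_zlib_compress.set_level(5);	/* in range, returned as-is */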

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/compression.c | 23 +++++++++++++++--------
 fs/btrfs/compression.h |  4 ++--
 fs/btrfs/lzo.c         |  3 ++-
 fs/btrfs/super.c       |  4 +++-
 fs/btrfs/zlib.c        | 18 +++++++++++-------
 fs/btrfs/zstd.c        |  3 ++-
 6 files changed, 35 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index e509071eaa69..a552c6f61e6d 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1008,9 +1008,9 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
 	struct list_head *workspace;
 	int ret;
 
-	workspace = get_workspace(type, level);
+	level = btrfs_compress_op[type]->set_level(level);
 
-	btrfs_compress_op[type]->set_level(workspace, level);
+	workspace = get_workspace(type, level);
 	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
@@ -1563,14 +1563,21 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
 	return ret;
 }
 
-unsigned int btrfs_compress_str2level(const char *str)
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
 {
-	if (strncmp(str, "zlib", 4) != 0)
+	unsigned int level = 0;
+	int ret;
+
+	if (!type)
 		return 0;
 
-	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
-	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
-		return str[5] - '0';
+	if (str[0] == ':') {
+		ret = kstrtouint(str + 1, 10, &level);
+		if (ret)
+			level = 0;
+	}
+
+	level = btrfs_compress_op[type]->set_level(level);
 
-	return BTRFS_ZLIB_DEFAULT_LEVEL;
+	return level;
 }
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index e3627139bc5c..d607be40aa0e 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -90,7 +90,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
 blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
 				 int mirror_num, unsigned long bio_flags);
 
-unsigned btrfs_compress_str2level(const char *str);
+unsigned int btrfs_compress_str2level(unsigned int type, const char *str);
 
 enum btrfs_compression_type {
 	BTRFS_COMPRESS_NONE  = 0,
@@ -149,7 +149,7 @@ struct btrfs_compress_op {
 			  unsigned long start_byte,
 			  size_t srclen, size_t destlen);
 
-	void (*set_level)(struct list_head *ws, unsigned int type);
+	unsigned int (*set_level)(unsigned int level);
 };
 
 extern const struct btrfs_compress_op btrfs_heuristic_compress;
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index f132af45a924..579d53ae256f 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -507,8 +507,9 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void lzo_set_level(struct list_head *ws, unsigned int type)
+static unsigned int lzo_set_level(unsigned int level)
 {
+	return 0;
 }
 
 const struct btrfs_compress_op btrfs_lzo_compress = {
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index c5586ffd1426..b28dff207383 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -529,7 +529,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				if (token != Opt_compress &&
 				    token != Opt_compress_force)
 					info->compress_level =
-					  btrfs_compress_str2level(args[0].from);
+					  btrfs_compress_str2level(
+							BTRFS_COMPRESS_ZLIB,
+							args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index e2173d0c4fd3..388b1f000fca 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -41,7 +41,12 @@ static void zlib_cleanup_workspace_manager(void)
 
 static struct list_head *zlib_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm, level);
+	struct list_head *ws = btrfs_get_workspace(&wsm, level);
+	struct workspace *workspace = list_entry(ws, struct workspace, list);
+
+	workspace->level = level;
+
+	return ws;
 }
 
 static void zlib_put_workspace(struct list_head *ws)
@@ -413,15 +418,14 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zlib_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zlib_set_level(unsigned int level)
 {
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
-	unsigned int level = BTRFS_COMPRESS_LEVEL(type);
-
-	if (level > 9)
+	if (!level)
+		level = BTRFS_ZLIB_DEFAULT_LEVEL;
+	else if (level > 9)
 		level = 9;
 
-	workspace->level = level > 0 ? level : 3;
+	return level;
 }
 
 const struct btrfs_compress_op btrfs_zlib_compress = {
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 404101864220..43f3be755b8c 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -441,8 +441,9 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 	return ret;
 }
 
-static void zstd_set_level(struct list_head *ws, unsigned int type)
+static unsigned int zstd_set_level(unsigned int level)
 {
+	return ZSTD_BTRFS_DEFAULT_LEVEL;
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 10/11] btrfs: zstd use the passed through level instead of default
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (8 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 09/11] btrfs: change set_level() to bound the level passed in Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  8:15   ` Nikolay Borisov
  2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
  2019-01-29 17:18 ` [PATCH 00/11] " David Sterba
  11 siblings, 1 reply; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou

Zstd currently only supports the default level of compression. This
patch switches to using the level passed in for btrfs zstd
configuration.

Zstd workspaces now keep track of the requested level, as this can differ
from the level (and therefore the size) the workspace was allocated with.
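
For illustration only, using the zstd kernel API already referenced in
this file: the level now drives workspace sizing, which is why a
workspace must remember the level it was allocated with (the levels 1
and 15 below are just example inputs):

	ZSTD_parameters p_lo = ZSTD_getParams(1, ZSTD_BTRFS_MAX_INPUT, 0);
	ZSTD_parameters p_hi = ZSTD_getParams(15, ZSTD_BTRFS_MAX_INPUT, 0);
	/* the bound grows with the level's compression parameters */
	size_t bound_lo = ZSTD_CStreamWorkspaceBound(p_lo.cParams);
	size_t bound_hi = ZSTD_CStreamWorkspaceBound(p_hi.cParams);
	/* bound_hi >= bound_lo: a high level stream cannot run in a low
	 * level workspace, but the reverse reuse is fine */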

Signed-off-by: Dennis Zhou <dennis@kernel.org>
---
 fs/btrfs/zstd.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 43f3be755b8c..a951d4fe77f7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -21,10 +21,10 @@
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
 
-static ZSTD_parameters zstd_get_btrfs_parameters(size_t src_len)
+static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
+						 size_t src_len)
 {
-	ZSTD_parameters params = ZSTD_getParams(ZSTD_BTRFS_DEFAULT_LEVEL,
-						src_len, 0);
+	ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);
 
 	if (params.cParams.windowLog > ZSTD_BTRFS_MAX_WINDOWLOG)
 		params.cParams.windowLog = ZSTD_BTRFS_MAX_WINDOWLOG;
@@ -36,6 +36,7 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int req_level;
 	struct list_head list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
@@ -55,7 +56,12 @@ static void zstd_cleanup_workspace_manager(void)
 
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	return btrfs_get_workspace(&wsm, level);
+	struct list_head *ws = btrfs_get_workspace(&wsm, level);
+	struct workspace *workspace = list_entry(ws, struct workspace, list);
+
+	workspace->req_level = level;
+
+	return ws;
 }
 
 static void zstd_put_workspace(struct list_head *ws)
@@ -75,7 +81,7 @@ static void zstd_free_workspace(struct list_head *ws)
 static struct list_head *zstd_alloc_workspace(unsigned int level)
 {
 	ZSTD_parameters params =
-			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
+			zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT);
 	struct workspace *workspace;
 
 	workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
@@ -117,7 +123,8 @@ static int zstd_compress_pages(struct list_head *ws,
 	unsigned long len = *total_out;
 	const unsigned long nr_dest_pages = *out_pages;
 	unsigned long max_out = nr_dest_pages * PAGE_SIZE;
-	ZSTD_parameters params = zstd_get_btrfs_parameters(len);
+	ZSTD_parameters params = zstd_get_btrfs_parameters(workspace->req_level,
+							   len);
 
 	*out_pages = 0;
 	*total_out = 0;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 11/11] btrfs: add zstd compression level support
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (9 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 10/11] btrfs: zstd use the passed through level instead of default Dennis Zhou
@ 2019-01-28 21:24 ` Dennis Zhou
  2019-01-29  7:25   ` Nikolay Borisov
                     ` (2 more replies)
  2019-01-29 17:18 ` [PATCH 00/11] " David Sterba
  11 siblings, 3 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-28 21:24 UTC (permalink / raw)
  To: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Dennis Zhou, Omar Sandoval

Zstd compression requires different amounts of memory for each level of
compression. The prior patches implemented indirection to allow for each
compression type to manage their workspaces independently. This patch
uses this indirection to implement compression level support for zstd.

As mentioned above, a requirement that differentiates zstd from zlib is that
higher levels of compression require more memory. To manage this, each
compression level has its own queue of workspaces. A global LRU is used
to help with reclaim. To guarantee forward progress, a max level
workspace is preallocated and hidden from the LRU.

When getting a workspace, it uses a bitmap to identify the levels that
are populated and scans up. If it finds a workspace of a higher level than
requested, it uses it, but does not update the last_used time or the
corresponding place in the LRU. This provides a mechanism to decrease
memory utilization as we only keep around workspaces that are sized
appropriately for the in-use compression levels.
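
A condensed sketch of that scan (the complete version, with locking, is
zstd_find_workspace() in the diff below):

	/* walk the populated levels upward from the requested one */
	int i = level - 1;

	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
		if (!list_empty(&wsm.idle_ws[i])) {
			/* a higher level workspace keeps its LRU place */
			return wsm.idle_ws[i].next;
		}
	}
	return NULL;	/* nothing populated at or above the request */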

By knowing which compression levels have available workspaces, we can
recycle rather than always create new workspaces as well as take
advantage of the preallocated max level for forward progress. If we hit
memory pressure, we sleep on the max level workspace. We continue to
rescan in case we can use a smaller workspace, but eventually should be
able to obtain the max level workspace or allocate one again should
memory pressure subside. The memory requirement for decompression is the
same as for level 1, so decompression can use any available workspace.

The number of workspaces is bounded above by the workqueue's concurrency
limit, which is currently 2 (a per-cpu limit). In addition, a reclaim timer is
used to free inactive/improperly sized workspaces. The reclaim timer is
set to 67s to avoid colliding with transaction commit (every 30s) and
attempts to reclaim any unused workspace older than 45s.
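
With the super.c change below, the level can be selected at mount time,
e.g. (device and mount point hypothetical):

	mount -o compress=zstd:15 /dev/sdb /mnt/btrfs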

Repeating the experiment from v2 [1], the Silesia corpus was copied to a
btrfs filesystem 10 times and then read back after dropping the caches.
The btrfs filesystem was on an SSD.

Level   Ratio   Compression (MB/s)  Decompression (MB/s)
1       2.658        438.47                910.51
2       2.744        364.86                886.55
3       2.801        336.33                828.41
4       2.858        286.71                886.55
5       2.916        212.77                556.84
6       2.363        119.82                990.85
7       3.000        154.06                849.30
8       3.011        159.54                875.03
9       3.025        100.51                940.15
10      3.033        118.97                616.26
11      3.036         94.19                802.11
12      3.037         73.45                931.49
13      3.041         55.17                835.26
14      3.087         44.70                716.78
15      3.126         37.30                878.84

[1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/

Signed-off-by: Dennis Zhou <dennis@kernel.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Omar Sandoval <osandov@fb.com>
---
 fs/btrfs/super.c |   6 +-
 fs/btrfs/zstd.c  | 229 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 226 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index b28dff207383..0ecc513cb56c 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
 				btrfs_set_fs_incompat(info, COMPRESS_LZO);
 				no_compress = 0;
-			} else if (strcmp(args[0].from, "zstd") == 0) {
+			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
 				compress_type = "zstd";
 				info->compress_type = BTRFS_COMPRESS_ZSTD;
+				info->compress_level =
+					btrfs_compress_str2level(
+							 BTRFS_COMPRESS_ZSTD,
+							 args[0].from + 4);
 				btrfs_set_opt(info->mount_opt, COMPRESS);
 				btrfs_clear_opt(info->mount_opt, NODATACOW);
 				btrfs_clear_opt(info->mount_opt, NODATASUM);
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index a951d4fe77f7..ce9b466c197f 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -6,20 +6,27 @@
  */
 
 #include <linux/bio.h>
+#include <linux/bitmap.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/mm.h>
+#include <linux/sched/mm.h>
 #include <linux/pagemap.h>
 #include <linux/refcount.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/zstd.h>
 #include "compression.h"
+#include "ctree.h"
 
 #define ZSTD_BTRFS_MAX_WINDOWLOG 17
 #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
 #define ZSTD_BTRFS_DEFAULT_LEVEL 3
+#define ZSTD_BTRFS_MAX_LEVEL 15
+#define ZSTD_BTRFS_RECLAIM_NS (45 * NSEC_PER_SEC)
+/* 67s to avoid clashing with transaction commit (every 30s) */
+#define ZSTD_BTRFS_RECLAIM_JIFFIES (67 * HZ)
 
 static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
 						 size_t src_len)
@@ -36,37 +43,234 @@ struct workspace {
 	void *mem;
 	size_t size;
 	char *buf;
+	unsigned int level;
 	unsigned int req_level;
+	u64 last_used;
 	struct list_head list;
+	struct list_head lru_list;
 	ZSTD_inBuffer in_buf;
 	ZSTD_outBuffer out_buf;
 };
 
-static struct workspace_manager wsm;
+struct zstd_workspace_manager {
+	const struct btrfs_compress_op *ops;
+	spinlock_t lock;
+	struct list_head lru_list;
+	struct list_head idle_ws[ZSTD_BTRFS_MAX_LEVEL];
+	unsigned long active_map;
+	wait_queue_head_t wait;
+	struct timer_list timer;
+};
+
+static struct zstd_workspace_manager wsm;
+
+static inline struct workspace *list_to_workspace(struct list_head *list)
+{
+	return container_of(list, struct workspace, list);
+}
+
+/*
+ * zstd_reclaim_timer_fn - reclaim timer
+ * @t: timer
+ *
+ * This is scheduled every ZSTD_BTRFS_RECLAIM_JIFFIES and checks if a workspace
+ * has not been used by the corresponding level for ZSTD_BTRFS_RECLAIM_NS.
+ */
+static void zstd_reclaim_timer_fn(struct timer_list *t)
+{
+	u64 now = ktime_get_ns();
+	struct list_head *pos, *next;
+
+	spin_lock(&wsm.lock);
+
+	if (list_empty(&wsm.lru_list)) {
+		spin_unlock(&wsm.lock);
+		return;
+	}
+
+	list_for_each_prev_safe(pos, next, &wsm.lru_list) {
+		struct workspace *victim = container_of(pos, struct workspace,
+							lru_list);
+		unsigned int level;
+
+		if (now < victim->last_used + ZSTD_BTRFS_RECLAIM_NS)
+			break;
+
+		/* workspace is in use */
+		if (victim->req_level)
+			continue;
+
+		level = victim->level;
+		list_del(&victim->lru_list);
+		list_del(&victim->list);
+		wsm.ops->free_workspace(&victim->list);
+
+		if (list_empty(&wsm.idle_ws[level - 1]))
+			clear_bit(level - 1, &wsm.active_map);
+
+	}
+
+	if (!list_empty(&wsm.lru_list))
+		mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+
+	spin_unlock(&wsm.lock);
+}
 
 static void zstd_init_workspace_manager(void)
 {
-	btrfs_init_workspace_manager(&wsm, &btrfs_zstd_compress);
+	struct list_head *ws;
+	int i;
+
+	wsm.ops = &btrfs_zstd_compress;
+	spin_lock_init(&wsm.lock);
+	init_waitqueue_head(&wsm.wait);
+	timer_setup(&wsm.timer, zstd_reclaim_timer_fn, 0);
+
+	INIT_LIST_HEAD(&wsm.lru_list);
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++)
+		INIT_LIST_HEAD(&wsm.idle_ws[i]);
+
+	ws = wsm.ops->alloc_workspace(ZSTD_BTRFS_MAX_LEVEL);
+	if (IS_ERR(ws)) {
+		pr_warn("BTRFS: cannot preallocate zstd compression workspace\n");
+	} else {
+		set_bit(ZSTD_BTRFS_MAX_LEVEL - 1, &wsm.active_map);
+		list_add(ws, &wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1]);
+	}
 }
 
 static void zstd_cleanup_workspace_manager(void)
 {
-	btrfs_cleanup_workspace_manager(&wsm);
+	struct workspace *workspace;
+	int i;
+
+	for (i = 0; i < ZSTD_BTRFS_MAX_LEVEL; i++) {
+		while (!list_empty(&wsm.idle_ws[i])) {
+			workspace = container_of(wsm.idle_ws[i].next,
+						 struct workspace, list);
+			list_del(&workspace->list);
+			list_del(&workspace->lru_list);
+			wsm.ops->free_workspace(&workspace->list);
+		}
+	}
+}
+
+/*
+ * zstd_find_workspace - find workspace
+ * @level: compression level
+ *
+ * This iterates over the set bits in the active_map beginning at the requested
+ * compression level.  This lets us utilize already allocated workspaces before
+ * allocating a new one.  If the workspace is of a larger size, it is used, but
+ * the place in the lru_list and last_used times are not updated.  This is to
+ * offer the opportunity to reclaim the workspace in favor of allocating an
+ * appropriately sized one in the future.
+ */
+static struct list_head *zstd_find_workspace(unsigned int level)
+{
+	struct list_head *ws;
+	struct workspace *workspace;
+	int i = level - 1;
+
+	spin_lock(&wsm.lock);
+	for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
+		if (!list_empty(&wsm.idle_ws[i])) {
+			ws = wsm.idle_ws[i].next;
+			workspace = list_to_workspace(ws);
+			list_del_init(ws);
+			/* keep its place if it's a lower level using this */
+			workspace->req_level = level;
+			if (level == workspace->level)
+				list_del(&workspace->lru_list);
+			if (list_empty(&wsm.idle_ws[i]))
+				clear_bit(i, &wsm.active_map);
+			spin_unlock(&wsm.lock);
+			return ws;
+		}
+	}
+	spin_unlock(&wsm.lock);
+
+	return NULL;
 }
 
+/*
+ * zstd_get_workspace - zstd's get_workspace
+ * @level: compression level
+ *
+ * If @level is 0, then any compression level can be used.  Therefore, we begin
+ * scanning from 1.  We first scan through possible workspaces and only then
+ * attempt to allocate a new workspace.  If we fail to allocate one due to
+ * memory pressure, go to sleep waiting for the max level workspace to free up.
+ */
 static struct list_head *zstd_get_workspace(unsigned int level)
 {
-	struct list_head *ws = btrfs_get_workspace(&wsm, level);
-	struct workspace *workspace = list_entry(ws, struct workspace, list);
+	struct list_head *ws;
+	unsigned long nofs_flag;
 
-	workspace->req_level = level;
+	/* level == 0 means we can use any workspace */
+	if (!level)
+		level = 1;
+
+again:
+	ws = zstd_find_workspace(level);
+	if (ws)
+		return ws;
+
+	nofs_flag = memalloc_nofs_save();
+	ws = wsm.ops->alloc_workspace(level);
+	memalloc_nofs_restore(nofs_flag);
+
+	if (IS_ERR(ws)) {
+		DEFINE_WAIT(wait);
+
+		prepare_to_wait(&wsm.wait, &wait, TASK_UNINTERRUPTIBLE);
+		schedule();
+		finish_wait(&wsm.wait, &wait);
+
+		goto again;
+	}
 
 	return ws;
 }
 
+/*
+ * zstd_put_workspace - zstd put_workspace
+ * @ws: list_head for the workspace
+ *
+ * When putting back a workspace, we only need to update the LRU if we are of
+ * the requested compression level.  Here is where we continue to protect the
+ * max level workspace or update last_used accordingly.  If the reclaim timer
+ * isn't set, it is also set here.  Only the max level workspace tries to wake
+ * up waiting workspaces.
+ */
 static void zstd_put_workspace(struct list_head *ws)
 {
-	btrfs_put_workspace(&wsm, ws);
+	struct workspace *workspace = list_to_workspace(ws);
+
+	spin_lock(&wsm.lock);
+
+	/* a node is only taken off the lru if we are the corresponding level */
+	if (workspace->req_level == workspace->level) {
+		/* hide a max level workspace from reclaim */
+		if (list_empty(&wsm.idle_ws[ZSTD_BTRFS_MAX_LEVEL - 1])) {
+			INIT_LIST_HEAD(&workspace->lru_list);
+		} else {
+			workspace->last_used = ktime_get_ns();
+			list_add(&workspace->lru_list, &wsm.lru_list);
+			if (!timer_pending(&wsm.timer))
+				mod_timer(&wsm.timer,
+					  jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);
+		}
+	}
+
+	set_bit(workspace->level - 1, &wsm.active_map);
+	list_add(&workspace->list, &wsm.idle_ws[workspace->level - 1]);
+	workspace->req_level = 0;
+
+	spin_unlock(&wsm.lock);
+
+	if (workspace->level == ZSTD_BTRFS_MAX_LEVEL)
+		cond_wake_up(&wsm.wait);
 }
 
 static void zstd_free_workspace(struct list_head *ws)
@@ -93,10 +297,14 @@ static struct list_head *zstd_alloc_workspace(unsigned int level)
 			ZSTD_DStreamWorkspaceBound(ZSTD_BTRFS_MAX_INPUT));
 	workspace->mem = kvmalloc(workspace->size, GFP_KERNEL);
 	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+	workspace->level = level;
+	workspace->req_level = level;
+	workspace->last_used = ktime_get_ns();
 	if (!workspace->mem || !workspace->buf)
 		goto fail;
 
 	INIT_LIST_HEAD(&workspace->list);
+	INIT_LIST_HEAD(&workspace->lru_list);
 
 	return &workspace->list;
 fail:
@@ -450,7 +658,12 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
 
 static unsigned int zstd_set_level(unsigned int level)
 {
-	return ZSTD_BTRFS_DEFAULT_LEVEL;
+	if (!level)
+		level = ZSTD_BTRFS_DEFAULT_LEVEL;
+	else if (level > ZSTD_BTRFS_MAX_LEVEL)
+		level = ZSTD_BTRFS_MAX_LEVEL;
+
+	return level;
 }
 
 const struct btrfs_compress_op btrfs_zstd_compress = {
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [PATCH 11/11] btrfs: add zstd compression level support
  2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
@ 2019-01-29  7:25   ` Nikolay Borisov
  2019-01-29 22:50     ` Dennis Zhou
  2019-01-31 18:10   ` David Sterba
  2019-01-31 18:13   ` David Sterba
  2 siblings, 1 reply; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:25 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel, Omar Sandoval



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> Zstd compression requires different amounts of memory for each level of
> compression. The prior patches implemented indirection to allow for each
> compression type to manage their workspaces independently. This patch
> uses this indirection to implement compression level support for zstd.
> 
> As mentioned above, a requirement that differentiates zstd from zlib is that
> higher levels of compression require more memory. To manage this, each
> compression level has its own queue of workspaces. A global LRU is used
> to help with reclaim. To guarantee forward progress, a max level
> workspace is preallocated and hidden from the LRU.
> 
> When getting a workspace, it uses a bitmap to identify the levels that
> are populated and scans up. If it finds a workspace that is greater than
> it, it uses it, but does not update the last_used time and the
> corresponding place in the LRU. This provides a mechanism to decrease
> memory utilization as we only keep around workspaces that are sized
> appropriately for the in use compression levels.
> 
> By knowing which compression levels have available workspaces, we can
> recycle rather than always create new workspaces as well as take
> advantage of the preallocated max level for forward progress. If we hit
> memory pressure, we sleep on the max level workspace. We continue to
> rescan in case we can use a smaller workspace, but eventually should be
> able to obtain the max level workspace or allocate one again should
> memory pressure subside. The memory requirement for decompression is the
> same as for level 1, so decompression can use any available workspace.
> 
> The number of workspaces is bounded above by the workqueue's concurrency
> limit, which is currently 2 (a per-cpu limit). In addition, a reclaim timer is
> used to free inactive/improperly sized workspaces. The reclaim timer is
> set to 67s to avoid colliding with transaction commit (every 30s) and
> attempts to reclaim any unused workspace older than 45s.
> 
> Repeating the experiment from v2 [1], the Silesia corpus was copied to a
> btrfs filesystem 10 times and then read back after dropping the caches.
> The btrfs filesystem was on an SSD.
> 
> Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> 1       2.658        438.47                910.51
> 2       2.744        364.86                886.55
> 3       2.801        336.33                828.41
> 4       2.858        286.71                886.55
> 5       2.916        212.77                556.84
> 6       2.363        119.82                990.85
> 7       3.000        154.06                849.30
> 8       3.011        159.54                875.03
> 9       3.025        100.51                940.15
> 10      3.033        118.97                616.26
> 11      3.036         94.19                802.11
> 12      3.037         73.45                931.49
> 13      3.041         55.17                835.26
> 14      3.087         44.70                716.78
> 15      3.126         37.30                878.84
> 
> [1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>
> Cc: Nick Terrell <terrelln@fb.com>
> Cc: Omar Sandoval <osandov@fb.com>
> ---
>  fs/btrfs/super.c |   6 +-
>  fs/btrfs/zstd.c  | 229 +++++++++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 226 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
> index b28dff207383..0ecc513cb56c 100644
> --- a/fs/btrfs/super.c
> +++ b/fs/btrfs/super.c
> @@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
>  				btrfs_clear_opt(info->mount_opt, NODATASUM);
>  				btrfs_set_fs_incompat(info, COMPRESS_LZO);
>  				no_compress = 0;
> -			} else if (strcmp(args[0].from, "zstd") == 0) {
> +			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
>  				compress_type = "zstd";
>  				info->compress_type = BTRFS_COMPRESS_ZSTD;
> +				info->compress_level =
> +					btrfs_compress_str2level(
> +							 BTRFS_COMPRESS_ZSTD,
> +							 args[0].from + 4);
>  				btrfs_set_opt(info->mount_opt, COMPRESS);
>  				btrfs_clear_opt(info->mount_opt, NODATACOW);
>  				btrfs_clear_opt(info->mount_opt, NODATASUM);
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index a951d4fe77f7..ce9b466c197f 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -6,20 +6,27 @@
>   */
>  
>  #include <linux/bio.h>
> +#include <linux/bitmap.h>
>  #include <linux/err.h>
>  #include <linux/init.h>
>  #include <linux/kernel.h>
>  #include <linux/mm.h>
> +#include <linux/sched/mm.h>
>  #include <linux/pagemap.h>
>  #include <linux/refcount.h>
>  #include <linux/sched.h>
>  #include <linux/slab.h>
>  #include <linux/zstd.h>
>  #include "compression.h"
> +#include "ctree.h"
>  
>  #define ZSTD_BTRFS_MAX_WINDOWLOG 17
>  #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
>  #define ZSTD_BTRFS_DEFAULT_LEVEL 3
> +#define ZSTD_BTRFS_MAX_LEVEL 15
> +#define ZSTD_BTRFS_RECLAIM_NS (45 * NSEC_PER_SEC)
> +/* 67s to avoid clashing with transaction commit (every 30s) */
> +#define ZSTD_BTRFS_RECLAIM_JIFFIES (67 * HZ)

This is valid provided that the transaction commit time is not overridden
by Opt_commit_interval. If avoiding a clash with transaction commit is
that important, maybe this should be calculated at mount time?
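
For illustration, the mount-time computation might look roughly like the
sketch below; note this assumes access to fs_info's commit_interval when
the manager is set up, which the global zstd workspace manager does not
currently have (the helper is hypothetical):

	/* hypothetical: derive the period from the effective commit interval */
	static unsigned long zstd_reclaim_period(struct btrfs_fs_info *fs_info)
	{
		/* stay off the commit cadence: ~2x the interval plus a skew */
		return (2 * fs_info->commit_interval + 7) * HZ;
	}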

<snip>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
@ 2019-01-29  7:26   ` Nikolay Borisov
  2019-01-29 17:57   ` Josef Bacik
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:26 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> It is very easy to miss places that rely on a certain bitshifting for
> decyphering the type_level overloading. Make macros handle this instead.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager
  2019-01-28 21:24 ` [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
@ 2019-01-29  7:27   ` Nikolay Borisov
  2019-01-29 17:58   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:27 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> This is in preparation for zstd compression levels. As each level will
> require different sized workspaces, workspaces_list is no longer a
> really fitting name.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/compression.c | 46 +++++++++++++++++++++---------------------
>  1 file changed, 23 insertions(+), 23 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 586f95ac0aea..aced261984e2 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -769,7 +769,7 @@ static struct list_head *alloc_heuristic_ws(void)
>  	return ERR_PTR(-ENOMEM);
>  }
>  
> -struct workspaces_list {
> +struct workspace_manager {
>  	struct list_head idle_ws;
>  	spinlock_t ws_lock;
>  	/* Number of free workspaces */
> @@ -780,9 +780,9 @@ struct workspaces_list {
>  	wait_queue_head_t ws_wait;
>  };
>  
> -static struct workspaces_list btrfs_comp_ws[BTRFS_COMPRESS_TYPES];
> +static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
>  
> -static struct workspaces_list btrfs_heuristic_ws;
> +static struct workspace_manager btrfs_heuristic_ws;
>  
>  static const struct btrfs_compress_op * const btrfs_compress_op[] = {
>  	&btrfs_zlib_compress,
> @@ -811,10 +811,10 @@ void __init btrfs_init_compress(void)
>  	}
>  
>  	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
> -		INIT_LIST_HEAD(&btrfs_comp_ws[i].idle_ws);
> -		spin_lock_init(&btrfs_comp_ws[i].ws_lock);
> -		atomic_set(&btrfs_comp_ws[i].total_ws, 0);
> -		init_waitqueue_head(&btrfs_comp_ws[i].ws_wait);
> +		INIT_LIST_HEAD(&wsm[i].idle_ws);
> +		spin_lock_init(&wsm[i].ws_lock);
> +		atomic_set(&wsm[i].total_ws, 0);
> +		init_waitqueue_head(&wsm[i].ws_wait);
>  
>  		/*
>  		 * Preallocate one workspace for each compression type so
> @@ -824,9 +824,9 @@ void __init btrfs_init_compress(void)
>  		if (IS_ERR(workspace)) {
>  			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
>  		} else {
> -			atomic_set(&btrfs_comp_ws[i].total_ws, 1);
> -			btrfs_comp_ws[i].free_ws = 1;
> -			list_add(workspace, &btrfs_comp_ws[i].idle_ws);
> +			atomic_set(&wsm[i].total_ws, 1);
> +			wsm[i].free_ws = 1;
> +			list_add(workspace, &wsm[i].idle_ws);
>  		}
>  	}
>  }
> @@ -856,11 +856,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
>  		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
>  		free_ws	 = &btrfs_heuristic_ws.free_ws;
>  	} else {
> -		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
> -		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
> -		total_ws = &btrfs_comp_ws[idx].total_ws;
> -		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
> -		free_ws	 = &btrfs_comp_ws[idx].free_ws;
> +		idle_ws	 = &wsm[idx].idle_ws;
> +		ws_lock	 = &wsm[idx].ws_lock;
> +		total_ws = &wsm[idx].total_ws;
> +		ws_wait	 = &wsm[idx].ws_wait;
> +		free_ws	 = &wsm[idx].free_ws;
>  	}
>  
>  again:
> @@ -952,11 +952,11 @@ static void __free_workspace(int type, struct list_head *workspace,
>  		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
>  		free_ws	 = &btrfs_heuristic_ws.free_ws;
>  	} else {
> -		idle_ws	 = &btrfs_comp_ws[idx].idle_ws;
> -		ws_lock	 = &btrfs_comp_ws[idx].ws_lock;
> -		total_ws = &btrfs_comp_ws[idx].total_ws;
> -		ws_wait	 = &btrfs_comp_ws[idx].ws_wait;
> -		free_ws	 = &btrfs_comp_ws[idx].free_ws;
> +		idle_ws	 = &wsm[idx].idle_ws;
> +		ws_lock	 = &wsm[idx].ws_lock;
> +		total_ws = &wsm[idx].total_ws;
> +		ws_wait	 = &wsm[idx].ws_wait;
> +		free_ws	 = &wsm[idx].free_ws;
>  	}
>  
>  	spin_lock(ws_lock);
> @@ -998,11 +998,11 @@ static void free_workspaces(void)
>  	}
>  
>  	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
> -		while (!list_empty(&btrfs_comp_ws[i].idle_ws)) {
> -			workspace = btrfs_comp_ws[i].idle_ws.next;
> +		while (!list_empty(&wsm[i].idle_ws)) {
> +			workspace = wsm[i].idle_ws.next;
>  			list_del(workspace);
>  			btrfs_compress_op[i]->free_workspace(workspace);
> -			atomic_dec(&btrfs_comp_ws[i].total_ws);
> +			atomic_dec(&wsm[i].total_ws);
>  		}
>  	}
>  }
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 03/11] btrfs: manage heuristic workspace as index 0
  2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
@ 2019-01-29  7:53   ` Nikolay Borisov
  2019-01-29 22:43     ` Dennis Zhou
  2019-01-29 18:02   ` Josef Bacik
  2019-01-31 16:10   ` David Sterba
  2 siblings, 1 reply; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:53 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> While the heuristic workspaces aren't really compression workspaces,
> they use the same interface for managing them. So rather than branching,
> let's just handle them once again as the index 0 compression type.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com> albeit one minor nit
below.
> ---
>  fs/btrfs/compression.c  | 107 +++++++++++-----------------------------
>  fs/btrfs/compression.h  |   3 +-
>  fs/btrfs/ioctl.c        |   2 +-
>  fs/btrfs/tree-checker.c |   4 +-
>  4 files changed, 33 insertions(+), 83 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index aced261984e2..bda7e8d2cbc7 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -37,6 +37,8 @@ const char* btrfs_compress_type2str(enum btrfs_compression_type type)
>  	case BTRFS_COMPRESS_ZSTD:
>  	case BTRFS_COMPRESS_NONE:
>  		return btrfs_compress_types[type];
> +	default:
> +		return NULL;

nit: With this change...

>  	}
>  
>  	return NULL;

This becomes redundant. I doubt the compiler will issue a warning since
it should be clever enough to figure we will never exit the switch()
construct.
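
In other words, the whole function could collapse to something like this
(illustrative only):

	const char *btrfs_compress_type2str(enum btrfs_compression_type type)
	{
		switch (type) {
		case BTRFS_COMPRESS_ZLIB:
		case BTRFS_COMPRESS_LZO:
		case BTRFS_COMPRESS_ZSTD:
		case BTRFS_COMPRESS_NONE:
			return btrfs_compress_types[type];
		default:
			return NULL;
		}
	}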

> @@ -769,6 +771,11 @@ static struct list_head *alloc_heuristic_ws(void)
>  	return ERR_PTR(-ENOMEM);
>  }
>  
> +const struct btrfs_compress_op btrfs_heuristic_compress = {
> +	.alloc_workspace = alloc_heuristic_ws,
> +	.free_workspace = free_heuristic_ws,
> +};
> +
>  struct workspace_manager {
>  	struct list_head idle_ws;
>  	spinlock_t ws_lock;
> @@ -782,9 +789,8 @@ struct workspace_manager {
>  
>  static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];
>  
> -static struct workspace_manager btrfs_heuristic_ws;
> -
>  static const struct btrfs_compress_op * const btrfs_compress_op[] = {
> +	&btrfs_heuristic_compress,
>  	&btrfs_zlib_compress,
>  	&btrfs_lzo_compress,
>  	&btrfs_zstd_compress,
> @@ -795,21 +801,6 @@ void __init btrfs_init_compress(void)
>  	struct list_head *workspace;
>  	int i;
>  
> -	INIT_LIST_HEAD(&btrfs_heuristic_ws.idle_ws);
> -	spin_lock_init(&btrfs_heuristic_ws.ws_lock);
> -	atomic_set(&btrfs_heuristic_ws.total_ws, 0);
> -	init_waitqueue_head(&btrfs_heuristic_ws.ws_wait);
> -
> -	workspace = alloc_heuristic_ws();
> -	if (IS_ERR(workspace)) {
> -		pr_warn(
> -	"BTRFS: cannot preallocate heuristic workspace, will try later\n");
> -	} else {
> -		atomic_set(&btrfs_heuristic_ws.total_ws, 1);
> -		btrfs_heuristic_ws.free_ws = 1;
> -		list_add(workspace, &btrfs_heuristic_ws.idle_ws);
> -	}
> -
>  	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
>  		INIT_LIST_HEAD(&wsm[i].idle_ws);
>  		spin_lock_init(&wsm[i].ws_lock);
> @@ -837,11 +828,10 @@ void __init btrfs_init_compress(void)
>   * Preallocation makes a forward progress guarantees and we do not return
>   * errors.
>   */
> -static struct list_head *__find_workspace(int type, bool heuristic)
> +static struct list_head *find_workspace(int type)
>  {
>  	struct list_head *workspace;
>  	int cpus = num_online_cpus();
> -	int idx = type - 1;
>  	unsigned nofs_flag;
>  	struct list_head *idle_ws;
>  	spinlock_t *ws_lock;
> @@ -849,19 +839,11 @@ static struct list_head *__find_workspace(int type, bool heuristic)
>  	wait_queue_head_t *ws_wait;
>  	int *free_ws;
>  
> -	if (heuristic) {
> -		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
> -		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
> -		total_ws = &btrfs_heuristic_ws.total_ws;
> -		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
> -		free_ws	 = &btrfs_heuristic_ws.free_ws;
> -	} else {
> -		idle_ws	 = &wsm[idx].idle_ws;
> -		ws_lock	 = &wsm[idx].ws_lock;
> -		total_ws = &wsm[idx].total_ws;
> -		ws_wait	 = &wsm[idx].ws_wait;
> -		free_ws	 = &wsm[idx].free_ws;
> -	}
> +	idle_ws	 = &wsm[type].idle_ws;
> +	ws_lock	 = &wsm[type].ws_lock;
> +	total_ws = &wsm[type].total_ws;
> +	ws_wait	 = &wsm[type].ws_wait;
> +	free_ws	 = &wsm[type].free_ws;
>  
>  again:
>  	spin_lock(ws_lock);
> @@ -892,10 +874,7 @@ static struct list_head *__find_workspace(int type, bool heuristic)
>  	 * context of btrfs_compress_bio/btrfs_compress_pages
>  	 */
>  	nofs_flag = memalloc_nofs_save();
> -	if (heuristic)
> -		workspace = alloc_heuristic_ws();
> -	else
> -		workspace = btrfs_compress_op[idx]->alloc_workspace();
> +	workspace = btrfs_compress_op[type]->alloc_workspace();
>  	memalloc_nofs_restore(nofs_flag);
>  
>  	if (IS_ERR(workspace)) {
> @@ -926,38 +905,23 @@ static struct list_head *__find_workspace(int type, bool heuristic)
>  	return workspace;
>  }
>  
> -static struct list_head *find_workspace(int type)
> -{
> -	return __find_workspace(type, false);
> -}
> -
>  /*
>   * put a workspace struct back on the list or free it if we have enough
>   * idle ones sitting around
>   */
> -static void __free_workspace(int type, struct list_head *workspace,
> -			     bool heuristic)
> +static void free_workspace(int type, struct list_head *workspace)
>  {
> -	int idx = type - 1;
>  	struct list_head *idle_ws;
>  	spinlock_t *ws_lock;
>  	atomic_t *total_ws;
>  	wait_queue_head_t *ws_wait;
>  	int *free_ws;
>  
> -	if (heuristic) {
> -		idle_ws	 = &btrfs_heuristic_ws.idle_ws;
> -		ws_lock	 = &btrfs_heuristic_ws.ws_lock;
> -		total_ws = &btrfs_heuristic_ws.total_ws;
> -		ws_wait	 = &btrfs_heuristic_ws.ws_wait;
> -		free_ws	 = &btrfs_heuristic_ws.free_ws;
> -	} else {
> -		idle_ws	 = &wsm[idx].idle_ws;
> -		ws_lock	 = &wsm[idx].ws_lock;
> -		total_ws = &wsm[idx].total_ws;
> -		ws_wait	 = &wsm[idx].ws_wait;
> -		free_ws	 = &wsm[idx].free_ws;
> -	}
> +	idle_ws	 = &wsm[type].idle_ws;
> +	ws_lock	 = &wsm[type].ws_lock;
> +	total_ws = &wsm[type].total_ws;
> +	ws_wait	 = &wsm[type].ws_wait;
> +	free_ws	 = &wsm[type].free_ws;
>  
>  	spin_lock(ws_lock);
>  	if (*free_ws <= num_online_cpus()) {
> @@ -968,20 +932,12 @@ static void __free_workspace(int type, struct list_head *workspace,
>  	}
>  	spin_unlock(ws_lock);
>  
> -	if (heuristic)
> -		free_heuristic_ws(workspace);
> -	else
> -		btrfs_compress_op[idx]->free_workspace(workspace);
> +	btrfs_compress_op[type]->free_workspace(workspace);
>  	atomic_dec(total_ws);
>  wake:
>  	cond_wake_up(ws_wait);
>  }
>  
> -static void free_workspace(int type, struct list_head *ws)
> -{
> -	return __free_workspace(type, ws, false);
> -}
> -
>  /*
>   * cleanup function for module exit
>   */
> @@ -990,13 +946,6 @@ static void free_workspaces(void)
>  	struct list_head *workspace;
>  	int i;
>  
> -	while (!list_empty(&btrfs_heuristic_ws.idle_ws)) {
> -		workspace = btrfs_heuristic_ws.idle_ws.next;
> -		list_del(workspace);
> -		free_heuristic_ws(workspace);
> -		atomic_dec(&btrfs_heuristic_ws.total_ws);
> -	}
> -
>  	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
>  		while (!list_empty(&wsm[i].idle_ws)) {
>  			workspace = wsm[i].idle_ws.next;
> @@ -1042,8 +991,8 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
>  
>  	workspace = find_workspace(type);
>  
> -	btrfs_compress_op[type - 1]->set_level(workspace, type_level);
> -	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
> +	btrfs_compress_op[type]->set_level(workspace, type_level);
> +	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
>  						      start, pages,
>  						      out_pages,
>  						      total_in, total_out);
> @@ -1072,7 +1021,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
>  	int type = cb->compress_type;
>  
>  	workspace = find_workspace(type);
> -	ret = btrfs_compress_op[type - 1]->decompress_bio(workspace, cb);
> +	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
>  	free_workspace(type, workspace);
>  
>  	return ret;
> @@ -1091,7 +1040,7 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
>  
>  	workspace = find_workspace(type);
>  
> -	ret = btrfs_compress_op[type-1]->decompress(workspace, data_in,
> +	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
>  						  dest_page, start_byte,
>  						  srclen, destlen);
>  
> @@ -1512,7 +1461,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
>   */
>  int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
>  {
> -	struct list_head *ws_list = __find_workspace(0, true);
> +	struct list_head *ws_list = find_workspace(0);
>  	struct heuristic_ws *ws;
>  	u32 i;
>  	u8 byte;
> @@ -1581,7 +1530,7 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
>  	}
>  
>  out:
> -	__free_workspace(0, ws_list, true);
> +	free_workspace(0, ws_list);
>  	return ret;
>  }
>  
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index 69a9197dadc3..53a8b9e93217 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -97,7 +97,7 @@ enum btrfs_compression_type {
>  	BTRFS_COMPRESS_ZLIB  = 1,
>  	BTRFS_COMPRESS_LZO   = 2,
>  	BTRFS_COMPRESS_ZSTD  = 3,
> -	BTRFS_COMPRESS_TYPES = 3,
> +	BTRFS_COMPRESS_TYPES = 4,
>  };
>  
>  struct btrfs_compress_op {
> @@ -125,6 +125,7 @@ struct btrfs_compress_op {
>  	void (*set_level)(struct list_head *ws, unsigned int type);
>  };
>  
> +extern const struct btrfs_compress_op btrfs_heuristic_compress;
>  extern const struct btrfs_compress_op btrfs_zlib_compress;
>  extern const struct btrfs_compress_op btrfs_lzo_compress;
>  extern const struct btrfs_compress_op btrfs_zstd_compress;
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index 9c8e1734429c..20081465a451 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1410,7 +1410,7 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
>  		return -EINVAL;
>  
>  	if (do_compress) {
> -		if (range->compress_type > BTRFS_COMPRESS_TYPES)
> +		if (range->compress_type >= BTRFS_COMPRESS_TYPES)
>  			return -EINVAL;
>  		if (range->compress_type)
>  			compress_type = range->compress_type;
> diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
> index a62e1e837a89..c88e146d8e99 100644
> --- a/fs/btrfs/tree-checker.c
> +++ b/fs/btrfs/tree-checker.c
> @@ -133,9 +133,9 @@ static int check_extent_data_item(struct btrfs_fs_info *fs_info,
>  	 * Support for new compression/encryption must introduce incompat flag,
>  	 * and must be caught in open_ctree().
>  	 */
> -	if (btrfs_file_extent_compression(leaf, fi) > BTRFS_COMPRESS_TYPES) {
> +	if (btrfs_file_extent_compression(leaf, fi) >= BTRFS_COMPRESS_TYPES) {
>  		file_extent_err(fs_info, leaf, slot,
> -	"invalid compression for file extent, have %u expect range [0, %u]",
> +	"invalid compression for file extent, have %u expect range [0, %u)",
>  			btrfs_file_extent_compression(leaf, fi),
>  			BTRFS_COMPRESS_TYPES);
>  		return -EUCLEAN;
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/11] btrfs: unify compression ops with workspace_manager
  2019-01-28 21:24 ` [PATCH 04/11] btrfs: unify compression ops with workspace_manager Dennis Zhou
@ 2019-01-29  7:54   ` Nikolay Borisov
  2019-01-29 18:03   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:54 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> Make the workspace_manager own the interface operations rather than
> managing index-paired arrays for the workspace_manager and compression
> operations.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/compression.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index bda7e8d2cbc7..b7e986e16640 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -777,6 +777,7 @@ const struct btrfs_compress_op btrfs_heuristic_compress = {
>  };
>  
>  struct workspace_manager {
> +	const struct btrfs_compress_op *ops;
>  	struct list_head idle_ws;
>  	spinlock_t ws_lock;
>  	/* Number of free workspaces */
> @@ -802,6 +803,8 @@ void __init btrfs_init_compress(void)
>  	int i;
>  
>  	for (i = 0; i < BTRFS_COMPRESS_TYPES; i++) {
> +		wsm[i].ops = btrfs_compress_op[i];
> +
>  		INIT_LIST_HEAD(&wsm[i].idle_ws);
>  		spin_lock_init(&wsm[i].ws_lock);
>  		atomic_set(&wsm[i].total_ws, 0);
> @@ -811,7 +814,7 @@ void __init btrfs_init_compress(void)
>  		 * Preallocate one workspace for each compression type so
>  		 * we can guarantee forward progress in the worst case
>  		 */
> -		workspace = btrfs_compress_op[i]->alloc_workspace();
> +		workspace = wsm[i].ops->alloc_workspace();
>  		if (IS_ERR(workspace)) {
>  			pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
>  		} else {
> @@ -874,7 +877,7 @@ static struct list_head *find_workspace(int type)
>  	 * context of btrfs_compress_bio/btrfs_compress_pages
>  	 */
>  	nofs_flag = memalloc_nofs_save();
> -	workspace = btrfs_compress_op[type]->alloc_workspace();
> +	workspace = wsm[type].ops->alloc_workspace();
>  	memalloc_nofs_restore(nofs_flag);
>  
>  	if (IS_ERR(workspace)) {
> @@ -932,7 +935,7 @@ static void free_workspace(int type, struct list_head *workspace)
>  	}
>  	spin_unlock(ws_lock);
>  
> -	btrfs_compress_op[type]->free_workspace(workspace);
> +	wsm[type].ops->free_workspace(workspace);
>  	atomic_dec(total_ws);
>  wake:
>  	cond_wake_up(ws_wait);
> @@ -950,7 +953,7 @@ static void free_workspaces(void)
>  		while (!list_empty(&wsm[i].idle_ws)) {
>  			workspace = wsm[i].idle_ws.next;
>  			list_del(workspace);
> -			btrfs_compress_op[i]->free_workspace(workspace);
> +			wsm[i].ops->free_workspace(workspace);
>  			atomic_dec(&wsm[i].total_ws);
>  		}
>  	}
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup
  2019-01-28 21:24 ` [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
@ 2019-01-29  7:58   ` Nikolay Borisov
  2019-01-29 18:04   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  7:58 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> Workspace manager init and cleanup code is open coded inside a for loop
> over the compression types. This forces each compression type to rely on
> the same workspace manager implementation. This patch creates helper
> methods that will be the generic implementation for btrfs workspace
> management.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 08/11] btrfs: plumb level through the compression interface
  2019-01-28 21:24 ` [PATCH 08/11] btrfs: plumb level through the compression interface Dennis Zhou
@ 2019-01-29  8:08   ` Nikolay Borisov
  2019-01-29 18:17   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  8:08 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> Zlib compression supports multiple levels, but doesn't require changing
> in how a workspace itself is created and managed. Zstd introduces a
> different memory requirement such that higher levels of compression
> require more memory. This requires changes in how the alloc()/get()
> methods work for zstd. This patch plumbs compression level through the
> interface as a parameter in preparation for zstd compression levels.
> This gives the compression types opportunity to create/manage based on
> the compression level.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/compression.c | 31 ++++++++++++++++---------------
>  fs/btrfs/compression.h |  7 ++++---
>  fs/btrfs/lzo.c         |  6 +++---
>  fs/btrfs/zlib.c        |  7 ++++---
>  fs/btrfs/zstd.c        |  6 +++---
>  5 files changed, 30 insertions(+), 27 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index ab694760ffdb..e509071eaa69 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -744,9 +744,9 @@ static void heuristic_cleanup_workspace_manager(void)
>  	btrfs_cleanup_workspace_manager(&heuristic_wsm);
>  }
>  
> -static struct list_head *heuristic_get_workspace(void)
> +static struct list_head *heuristic_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&heuristic_wsm);
> +	return btrfs_get_workspace(&heuristic_wsm, level);
>  }
>  
>  static void heuristic_put_workspace(struct list_head *ws)
> @@ -766,7 +766,7 @@ static void free_heuristic_ws(struct list_head *ws)
>  	kfree(workspace);
>  }
>  
> -static struct list_head *alloc_heuristic_ws(void)
> +static struct list_head *alloc_heuristic_ws(unsigned int level)
>  {
>  	struct heuristic_ws *ws;
>  
> @@ -825,7 +825,7 @@ void btrfs_init_workspace_manager(struct workspace_manager *wsm,
>  	 * Preallocate one workspace for each compression type so
>  	 * we can guarantee forward progress in the worst case
>  	 */
> -	workspace = wsm->ops->alloc_workspace();
> +	workspace = wsm->ops->alloc_workspace(0);
>  	if (IS_ERR(workspace)) {
>  		pr_warn("BTRFS: cannot preallocate compression workspace, will try later\n");
>  	} else {
> @@ -853,7 +853,8 @@ void btrfs_cleanup_workspace_manager(struct workspace_manager *wsman)
>   * Preallocation makes a forward progress guarantees and we do not return
>   * errors.
>   */
> -struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
> +struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
> +				      unsigned int level)
>  {
>  	struct list_head *workspace;
>  	int cpus = num_online_cpus();
> @@ -899,7 +900,7 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
>  	 * context of btrfs_compress_bio/btrfs_compress_pages
>  	 */
>  	nofs_flag = memalloc_nofs_save();
> -	workspace = wsm->ops->alloc_workspace();
> +	workspace = wsm->ops->alloc_workspace(level);
>  	memalloc_nofs_restore(nofs_flag);
>  
>  	if (IS_ERR(workspace)) {
> @@ -930,9 +931,9 @@ struct list_head *btrfs_get_workspace(struct workspace_manager *wsm)
>  	return workspace;
>  }
>  
> -static struct list_head *get_workspace(int type)
> +static struct list_head *get_workspace(int type, int level)
>  {
> -	return btrfs_compress_op[type]->get_workspace();
> +	return btrfs_compress_op[type]->get_workspace(level);
>  }
>  
>  /*
> @@ -1003,12 +1004,13 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
>  			 unsigned long *total_out)
>  {
>  	int type = BTRFS_COMPRESS_TYPE(type_level);
> +	int level = BTRFS_COMPRESS_LEVEL(type_level);
>  	struct list_head *workspace;
>  	int ret;
>  
> -	workspace = get_workspace(type);
> +	workspace = get_workspace(type, level);
>  
> -	btrfs_compress_op[type]->set_level(workspace, type_level);
> +	btrfs_compress_op[type]->set_level(workspace, level);
>  	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
>  						      start, pages,
>  						      out_pages,
> @@ -1037,7 +1039,7 @@ static int btrfs_decompress_bio(struct compressed_bio *cb)
>  	int ret;
>  	int type = cb->compress_type;
>  
> -	workspace = get_workspace(type);
> +	workspace = get_workspace(type, 0);
>  	ret = btrfs_compress_op[type]->decompress_bio(workspace, cb);
>  	put_workspace(type, workspace);
>  
> @@ -1055,13 +1057,12 @@ int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
>  	struct list_head *workspace;
>  	int ret;
>  
> -	workspace = get_workspace(type);
> -
> +	workspace = get_workspace(type, 0);
>  	ret = btrfs_compress_op[type]->decompress(workspace, data_in,
>  						  dest_page, start_byte,
>  						  srclen, destlen);
> -
>  	put_workspace(type, workspace);
> +
>  	return ret;
>  }
>  
> @@ -1489,7 +1490,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
>   */
>  int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
>  {
> -	struct list_head *ws_list = get_workspace(0);
> +	struct list_head *ws_list = get_workspace(0, 0);
>  	struct heuristic_ws *ws;
>  	u32 i;
>  	u8 byte;
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index 05342ad081d6..e3627139bc5c 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -114,7 +114,8 @@ struct workspace_manager {
>  
>  void btrfs_init_workspace_manager(struct workspace_manager *wsm,
>  				  const struct btrfs_compress_op *ops);
> -struct list_head *btrfs_get_workspace(struct workspace_manager *wsm);
> +struct list_head *btrfs_get_workspace(struct workspace_manager *wsm,
> +				      unsigned int level);
>  void btrfs_put_workspace(struct workspace_manager *wsm, struct list_head *ws);
>  void btrfs_cleanup_workspace_manager(struct workspace_manager *wsm);
>  
> @@ -123,11 +124,11 @@ struct btrfs_compress_op {
>  
>  	void (*cleanup_workspace_manager)(void);
>  
> -	struct list_head *(*get_workspace)(void);
> +	struct list_head *(*get_workspace)(unsigned int level);
>  
>  	void (*put_workspace)(struct list_head *ws);
>  
> -	struct list_head *(*alloc_workspace)(void);
> +	struct list_head *(*alloc_workspace)(unsigned int level);
>  
>  	void (*free_workspace)(struct list_head *workspace);
>  
> diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
> index f0837b2c8e94..f132af45a924 100644
> --- a/fs/btrfs/lzo.c
> +++ b/fs/btrfs/lzo.c
> @@ -73,9 +73,9 @@ static void lzo_cleanup_workspace_manager(void)
>  	btrfs_cleanup_workspace_manager(&wsm);
>  }
>  
> -static struct list_head *lzo_get_workspace(void)
> +static struct list_head *lzo_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&wsm);
> +	return btrfs_get_workspace(&wsm, level);
>  }
>  
>  static void lzo_put_workspace(struct list_head *ws)
> @@ -93,7 +93,7 @@ static void lzo_free_workspace(struct list_head *ws)
>  	kfree(workspace);
>  }
>  
> -static struct list_head *lzo_alloc_workspace(void)
> +static struct list_head *lzo_alloc_workspace(unsigned int level)
>  {
>  	struct workspace *workspace;
>  
> diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
> index 04687bf692e3..e2173d0c4fd3 100644
> --- a/fs/btrfs/zlib.c
> +++ b/fs/btrfs/zlib.c
> @@ -39,9 +39,9 @@ static void zlib_cleanup_workspace_manager(void)
>  	btrfs_cleanup_workspace_manager(&wsm);
>  }
>  
> -static struct list_head *zlib_get_workspace(void)
> +static struct list_head *zlib_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&wsm);
> +	return btrfs_get_workspace(&wsm, level);
>  }
>  
>  static void zlib_put_workspace(struct list_head *ws)
> @@ -58,7 +58,7 @@ static void zlib_free_workspace(struct list_head *ws)
>  	kfree(workspace);
>  }
>  
> -static struct list_head *zlib_alloc_workspace(void)
> +static struct list_head *zlib_alloc_workspace(unsigned int level)
>  {
>  	struct workspace *workspace;
>  	int workspacesize;
> @@ -71,6 +71,7 @@ static struct list_head *zlib_alloc_workspace(void)
>  			zlib_inflate_workspacesize());
>  	workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL);
>  	workspace->buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
> +	workspace->level = level;
>  	if (!workspace->strm.workspace || !workspace->buf)
>  		goto fail;
>  
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index b06eaf171be7..404101864220 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -53,9 +53,9 @@ static void zstd_cleanup_workspace_manager(void)
>  	btrfs_cleanup_workspace_manager(&wsm);
>  }
>  
> -static struct list_head *zstd_get_workspace(void)
> +static struct list_head *zstd_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&wsm);
> +	return btrfs_get_workspace(&wsm, level);
>  }
>  
>  static void zstd_put_workspace(struct list_head *ws)
> @@ -72,7 +72,7 @@ static void zstd_free_workspace(struct list_head *ws)
>  	kfree(workspace);
>  }
>  
> -static struct list_head *zstd_alloc_workspace(void)
> +static struct list_head *zstd_alloc_workspace(unsigned int level)
>  {
>  	ZSTD_parameters params =
>  			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 09/11] btrfs: change set_level() to bound the level passed in
  2019-01-28 21:24 ` [PATCH 09/11] btrfs: change set_level() to bound the level passed in Dennis Zhou
@ 2019-01-29  8:14   ` Nikolay Borisov
  2019-01-30 22:06     ` Dennis Zhou
  0 siblings, 1 reply; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  8:14 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 at 23:24, Dennis Zhou wrote:
> Currently, the only user of set_level() is zlib which sets an internal
> workspace parameter. As level is now plumbed into get_workspace(), this
> can be handled there rather than separately.
> 
> This repurposes set_level() to bound the level passed in so it can be
> used when setting the mount's compression level as well as verifying
> the level before getting a workspace. The other benefit is this divides
> the meaning of compress(0) and get_workspace(0). The former means we
> want to use the default compression level of the compression type. The
> latter means we can use any workspace available.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>
> ---
>  fs/btrfs/compression.c | 23 +++++++++++++++--------
>  fs/btrfs/compression.h |  4 ++--
>  fs/btrfs/lzo.c         |  3 ++-
>  fs/btrfs/super.c       |  4 +++-
>  fs/btrfs/zlib.c        | 18 +++++++++++-------
>  fs/btrfs/zstd.c        |  3 ++-
>  6 files changed, 35 insertions(+), 20 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index e509071eaa69..a552c6f61e6d 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -1008,9 +1008,9 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
>  	struct list_head *workspace;
>  	int ret;
>  
> -	workspace = get_workspace(type, level);
> +	level = btrfs_compress_op[type]->set_level(level);
>  
> -	btrfs_compress_op[type]->set_level(workspace, level);
> +	workspace = get_workspace(type, level);
>  	ret = btrfs_compress_op[type]->compress_pages(workspace, mapping,
>  						      start, pages,
>  						      out_pages,
> @@ -1563,14 +1563,21 @@ int btrfs_compress_heuristic(struct inode *inode, u64 start, u64 end)
>  	return ret;
>  }
>  
> -unsigned int btrfs_compress_str2level(const char *str)
> +unsigned int btrfs_compress_str2level(unsigned int type, const char *str)
>  {
> -	if (strncmp(str, "zlib", 4) != 0)
> +	unsigned int level = 0;
> +	int ret;
> +
> +	if (!type)
>  		return 0;
>  
> -	/* Accepted form: zlib:1 up to zlib:9 and nothing left after the number */
> -	if (str[4] == ':' && '1' <= str[5] && str[5] <= '9' && str[6] == 0)
> -		return str[5] - '0';
> +	if (str[0] == ':') {
> +		ret = kstrtouint(str + 1, 10, &level);
> +		if (ret)
> +			level = 0;
> +	}
> +
> +	level = btrfs_compress_op[type]->set_level(level);
>  
> -	return BTRFS_ZLIB_DEFAULT_LEVEL;
> +	return level;
>  }
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index e3627139bc5c..d607be40aa0e 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -90,7 +90,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
>  blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
>  				 int mirror_num, unsigned long bio_flags);
>  
> -unsigned btrfs_compress_str2level(const char *str);
> +unsigned int btrfs_compress_str2level(unsigned int type, const char *str);
>  
>  enum btrfs_compression_type {
>  	BTRFS_COMPRESS_NONE  = 0,
> @@ -149,7 +149,7 @@ struct btrfs_compress_op {
>  			  unsigned long start_byte,
>  			  size_t srclen, size_t destlen);
>  
> -	void (*set_level)(struct list_head *ws, unsigned int type);
> +	unsigned int (*set_level)(unsigned int level);

It might be good to document the return value since this is an
interface. AFAICS implementations are required to return the actual
level set irrespective of what level was passed in, no?
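
Something like the below would do; this is only a sketch of the comment
being asked for (my wording, not from the series):

	/*
	 * @set_level: given a requested level, return the level that will
	 * actually be used: the input bounded to the valid range for this
	 * algorithm, with 0 mapping to the algorithm's default level.
	 */
	unsigned int (*set_level)(unsigned int level);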

>  };
>  
>  extern const struct btrfs_compress_op btrfs_heuristic_compress;
> diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
> index f132af45a924..579d53ae256f 100644
> --- a/fs/btrfs/lzo.c
> +++ b/fs/btrfs/lzo.c
> @@ -507,8 +507,9 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
>  	return ret;
>  }
>  
> -static void lzo_set_level(struct list_head *ws, unsigned int type)
> +static unsigned int lzo_set_level(unsigned int level)
>  {
> +	return 0;
>  }
>  
>  const struct btrfs_compress_op btrfs_lzo_compress = {
> diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
> index c5586ffd1426..b28dff207383 100644
> --- a/fs/btrfs/super.c
> +++ b/fs/btrfs/super.c
> @@ -529,7 +529,9 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
>  				if (token != Opt_compress &&
>  				    token != Opt_compress_force)
>  					info->compress_level =
> -					  btrfs_compress_str2level(args[0].from);
> +					  btrfs_compress_str2level(
> +							BTRFS_COMPRESS_ZLIB,
> +							args[0].from + 4);
>  				btrfs_set_opt(info->mount_opt, COMPRESS);
>  				btrfs_clear_opt(info->mount_opt, NODATACOW);
>  				btrfs_clear_opt(info->mount_opt, NODATASUM);
> diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
> index e2173d0c4fd3..388b1f000fca 100644
> --- a/fs/btrfs/zlib.c
> +++ b/fs/btrfs/zlib.c
> @@ -41,7 +41,12 @@ static void zlib_cleanup_workspace_manager(void)
>  
>  static struct list_head *zlib_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&wsm, level);
> +	struct list_head *ws = btrfs_get_workspace(&wsm, level);
> +	struct workspace *workspace = list_entry(ws, struct workspace, list);
> +
> +	workspace->level = level;
> +
> +	return ws;
>  }
>  
>  static void zlib_put_workspace(struct list_head *ws)
> @@ -413,15 +418,14 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
>  	return ret;
>  }
>  
> -static void zlib_set_level(struct list_head *ws, unsigned int type)
> +static unsigned int zlib_set_level(unsigned int level)
>  {
> -	struct workspace *workspace = list_entry(ws, struct workspace, list);
> -	unsigned int level = BTRFS_COMPRESS_LEVEL(type);
> -
> -	if (level > 9)
> +	if (!level)
> +		level = BTRFS_ZLIB_DEFAULT_LEVEL;
> +	else if (level > 9)
>  		level = 9;

nit:  This makes it a bit more obvious (IMO) that you are essentially
doing a min:

if (!level)
   level = BTRFS_ZLIB_DEFAULT_LEVEL;
level = min_t(unsigned int, level, 9);

>  
> -	workspace->level = level > 0 ? level : 3;
> +	return level;
>  }
>  
>  const struct btrfs_compress_op btrfs_zlib_compress = {
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index 404101864220..43f3be755b8c 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -441,8 +441,9 @@ static int zstd_decompress(struct list_head *ws, unsigned char *data_in,
>  	return ret;
>  }
>  
> -static void zstd_set_level(struct list_head *ws, unsigned int type)
> +static unsigned int zstd_set_level(unsigned int level)
>  {
> +	return ZSTD_BTRFS_DEFAULT_LEVEL;
>  }
>  
>  const struct btrfs_compress_op btrfs_zstd_compress = {
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 10/11] btrfs: zstd use the passed through level instead of default
  2019-01-28 21:24 ` [PATCH 10/11] btrfs: zstd use the passed through level instead of default Dennis Zhou
@ 2019-01-29  8:15   ` Nikolay Borisov
  0 siblings, 0 replies; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  8:15 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 at 23:24, Dennis Zhou wrote:
> Zstd currently only supports the default level of compression. This
> patch switches to using the level passed in for btrfs zstd
> configuration.
> 
> Zstd workspaces now keep track of the requested level as this can differ
> from the level the workspace was allocated for.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/zstd.c | 19 +++++++++++++------
>  1 file changed, 13 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> index 43f3be755b8c..a951d4fe77f7 100644
> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
> @@ -21,10 +21,10 @@
>  #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
>  #define ZSTD_BTRFS_DEFAULT_LEVEL 3
>  
> -static ZSTD_parameters zstd_get_btrfs_parameters(size_t src_len)
> +static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level,
> +						 size_t src_len)
>  {
> -	ZSTD_parameters params = ZSTD_getParams(ZSTD_BTRFS_DEFAULT_LEVEL,
> -						src_len, 0);
> +	ZSTD_parameters params = ZSTD_getParams(level, src_len, 0);
>  
>  	if (params.cParams.windowLog > ZSTD_BTRFS_MAX_WINDOWLOG)
>  		params.cParams.windowLog = ZSTD_BTRFS_MAX_WINDOWLOG;
> @@ -36,6 +36,7 @@ struct workspace {
>  	void *mem;
>  	size_t size;
>  	char *buf;
> +	unsigned int req_level;
>  	struct list_head list;
>  	ZSTD_inBuffer in_buf;
>  	ZSTD_outBuffer out_buf;
> @@ -55,7 +56,12 @@ static void zstd_cleanup_workspace_manager(void)
>  
>  static struct list_head *zstd_get_workspace(unsigned int level)
>  {
> -	return btrfs_get_workspace(&wsm, level);
> +	struct list_head *ws = btrfs_get_workspace(&wsm, level);
> +	struct workspace *workspace = list_entry(ws, struct workspace, list);
> +
> +	workspace->req_level = level;
> +
> +	return ws;
>  }
>  
>  static void zstd_put_workspace(struct list_head *ws)
> @@ -75,7 +81,7 @@ static void zstd_free_workspace(struct list_head *ws)
>  static struct list_head *zstd_alloc_workspace(unsigned int level)
>  {
>  	ZSTD_parameters params =
> -			zstd_get_btrfs_parameters(ZSTD_BTRFS_MAX_INPUT);
> +			zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT);
>  	struct workspace *workspace;
>  
>  	workspace = kzalloc(sizeof(*workspace), GFP_KERNEL);
> @@ -117,7 +123,8 @@ static int zstd_compress_pages(struct list_head *ws,
>  	unsigned long len = *total_out;
>  	const unsigned long nr_dest_pages = *out_pages;
>  	unsigned long max_out = nr_dest_pages * PAGE_SIZE;
> -	ZSTD_parameters params = zstd_get_btrfs_parameters(len);
> +	ZSTD_parameters params = zstd_get_btrfs_parameters(workspace->req_level,
> +							   len);
>  
>  	*out_pages = 0;
>  	*total_out = 0;
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces
  2019-01-28 21:24 ` [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
@ 2019-01-29  8:22   ` Nikolay Borisov
  2019-01-29 23:35     ` Dennis Zhou
  2019-01-29 18:17   ` Josef Bacik
  1 sibling, 1 reply; 46+ messages in thread
From: Nikolay Borisov @ 2019-01-29  8:22 UTC (permalink / raw)
  To: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell
  Cc: kernel-team, linux-btrfs, linux-kernel



On 28.01.19 at 23:24, Dennis Zhou wrote:
> The previous patch added generic helpers for get_workspace() and
> put_workspace(). Now, we can migrate ownership of the workspace_manager
> to be in the compression type code as the compression code itself
> doesn't care beyond being able to get a workspace. The init/cleanup
> and get/put methods are abstracted so each compression algorithm can
> decide how they want to manage their workspaces.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

TBH I can't really see the value in this patch. IMO it doesn't make the
code more readable, on the contrary, you create algorithm-specific
wrappers over the generic function, where the sole specialization is in
the arguments passed to the generic functions. You introduce 4 more
function pointers and this affects performance negatively (albeit I
can't say to what extent) due to Spectre mitigations (retpolines).

I also read the follow up patches with the hopes of seeing how the code
becomes cleaner to no avail. At this point I'm really not in favor of
this particular patch.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 00/11] btrfs: add zstd compression level support
  2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
                   ` (10 preceding siblings ...)
  2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
@ 2019-01-29 17:18 ` David Sterba
  2019-01-29 21:12   ` Nick Terrell
  2019-01-30 17:40   ` Dennis Zhou
  11 siblings, 2 replies; 46+ messages in thread
From: David Sterba @ 2019-01-29 17:18 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
> As mentioned above, a requirement that differs zstd from zlib is that
> higher levels of compression require more memory. To manage this, each
> compression level has its own queue of workspaces. A global LRU is used
> to help with reclaim. To guarantee forward progress, a max level
> workspace is preallocated and hidden from the LRU.

Here I'd like to bring up what was mentioned in the previous iteration,
the workspace sizes.

Level   Compression Memory
1       0.8 MB
2       1.0 MB
3       1.3 MB
4       0.9 MB
5       1.4 MB
6       1.5 MB
7       1.4 MB
8       1.8 MB
9       1.8 MB
10      1.8 MB
11      1.8 MB
12      1.8 MB
13      2.3 MB
14      2.6 MB
15      2.6 MB

and decompression needs the memory of level 1. The sizes can be grouped
together into, say, 3 sizes; I'm not sure that we'd really need 15
distinct workspaces. The reclaim mechanism helps, but I'd rather keep a
smaller number of workspaces that covers average use.

Default level is 3, that's 1.3 MiB, that also covers level 1, 2 and 4.
For 5 to 12 it's 1.8 and the rest is 2.6 MiB.
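
To illustrate the grouping (this helper is purely illustrative, nothing
like it exists in the series):

	/* map a level to the group representative whose workspace covers it */
	static unsigned int zstd_size_class(unsigned int level)
	{
		if (level <= 4)
			return 3;	/* 1.3 MiB covers levels 1-4 */
		if (level <= 12)
			return 12;	/* 1.8 MiB covers levels 5-12 */
		return 15;		/* 2.6 MiB covers levels 13-15 */
	}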

> btrfs filesystem 10 times and then read back after dropping the caches.
> The btrfs filesystem was on an SSD.
> 
> Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> 1       2.658        438.47                910.51
> 2       2.744        364.86                886.55
> 3       2.801        336.33                828.41
> 4       2.858        286.71                886.55
> 5       2.916        212.77                556.84
> 6       2.363        119.82                990.85
> 7       3.000        154.06                849.30
> 8       3.011        159.54                875.03
> 9       3.025        100.51                940.15
> 10      3.033        118.97                616.26
> 11      3.036         94.19                802.11
> 12      3.037         73.45                931.49
> 13      3.041         55.17                835.26
> 14      3.087         44.70                716.78
> 15      3.126         37.30                878.84

From my casual user's perspective, I'd use the level 1 for speed, 7 for
better ratio and 15 for the best compression. Anything else does not
look good, though the results would vary based on the data set. I
assume that the Silesia corpus serves as a good approximation of the
worst case average.

The levels 7-14 strike a particularly obvious pattern: same ratio but the
speed gets worse with each level. Taking the default level into account,
(my) recommended levels would be 1, 3, 7, 15.

I went through the patches, looks mostly ok, I don't like the
indirections but at the moment it's an implementation detail as I'd like
to agree on the overall approach first.

We might need a few revisions or cleanup rounds to converge to an
efficient solution, the advantage here is that it's all in-memory and
without compatibility concerns once the level support for zstd is in and
works.

For that reason, I'm not opposed to the current version of the patchset.
Given the time in development schedule, it's really close to code
freeze, but the functionality has a narrow scope so I'm tentatively
counting with it for 5.1.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
  2019-01-29  7:26   ` Nikolay Borisov
@ 2019-01-29 17:57   ` Josef Bacik
  2019-01-29 22:30   ` Omar Sandoval
  2019-01-31 16:00   ` David Sterba
  3 siblings, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 17:57 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:27PM -0500, Dennis Zhou wrote:
> It is very easy to miss places that rely on a certain bitshifting for
> deciphering the type_level overloading. Make macros handle this instead.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager
  2019-01-28 21:24 ` [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
  2019-01-29  7:27   ` Nikolay Borisov
@ 2019-01-29 17:58   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 17:58 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:28PM -0500, Dennis Zhou wrote:
> This is in preparation for zstd compression levels. As each level will
> require different sized workspaces, workspaces_list is no longer a
> really fitting name.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>
> ---

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 03/11] btrfs: manage heuristic workspace as index 0
  2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
  2019-01-29  7:53   ` Nikolay Borisov
@ 2019-01-29 18:02   ` Josef Bacik
  2019-01-31 16:10   ` David Sterba
  2 siblings, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:02 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:29PM -0500, Dennis Zhou wrote:
> While the heuristic workspaces aren't really compression workspaces,
> they use the same interface for managing them. So rather than branching,
> let's just handle them once again as the index 0 compression type.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 04/11] btrfs: unify compression ops with workspace_manager
  2019-01-28 21:24 ` [PATCH 04/11] btrfs: unify compression ops with workspace_manager Dennis Zhou
  2019-01-29  7:54   ` Nikolay Borisov
@ 2019-01-29 18:03   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:03 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:30PM -0500, Dennis Zhou wrote:
> Make the workspace_manager own the interface operations rather than
> managing index-paired arrays for the workspace_manager and compression
> operations.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup
  2019-01-28 21:24 ` [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
  2019-01-29  7:58   ` Nikolay Borisov
@ 2019-01-29 18:04   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:04 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:31PM -0500, Dennis Zhou wrote:
> Workspace manager init and cleanup code is open coded inside a for loop
> over the compression types. This forces each compression type to rely on
> the same workspace manager implementation. This patch creates helper
> methods that will be the generic implementation for btrfs workspace
> management.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace()
  2019-01-28 21:24 ` [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
@ 2019-01-29 18:06   ` Josef Bacik
  0 siblings, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:06 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:32PM -0500, Dennis Zhou wrote:
> There are two levels of workspace management. First, alloc()/free()
> which are responsible for actually creating and destroying workspaces.
> Second, at a higher level, get()/put() which is the compression code
> asking for a workspace from a workspace_manager.
> 
> The compression code shouldn't really care how it gets a workspace, but
> that it got a workspace. This adds get_workspace() and put_workspace()
> to be the higher level interface which is responsible for indexing into
> the appropriate compression type. It also introduces
> btrfs_put_workspace() and btrfs_get_workspace() to be the generic
> implementations of the higher interface.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces
  2019-01-28 21:24 ` [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
  2019-01-29  8:22   ` Nikolay Borisov
@ 2019-01-29 18:17   ` Josef Bacik
  2019-01-29 18:44     ` Josef Bacik
  1 sibling, 1 reply; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:17 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:33PM -0500, Dennis Zhou wrote:
> The previous patch added generic helpers for get_workspace() and
> put_workspace(). Now, we can migrate ownership of the workspace_manager
> to be in the compression type code as the compression code itself
> doesn't care beyond being able to get a workspace. The init/cleanup
> and get/put methods are abstracted so each compression algorithm can
> decide how they want to manage their workspaces.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

We're doing this to have special handling for extra workspaces to be freed at
some point in the future if they are unused.  This is fine by me, but why not
just add a shrinker and let it be handled by memory pressure?  Then we avoid all
this abstraction and allow for zstd to have its own shrinker for its extra
workspaces.  You can even use the list_lru stuff to make it super simple, then
you don't have to worry about all the infrastructure.  Thanks,

Josef
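
For reference, the shrinker/list_lru pattern being suggested looks
roughly like this sketch (all names, fields, and the dispose logic here
are illustrative, not code from the series):

	#include <linux/list_lru.h>
	#include <linux/shrinker.h>

	static struct list_lru zstd_ws_lru;	/* idle workspaces */

	static unsigned long zstd_ws_count(struct shrinker *s,
					   struct shrink_control *sc)
	{
		return list_lru_shrink_count(&zstd_ws_lru, sc);
	}

	static enum lru_status zstd_ws_isolate(struct list_head *item,
					       struct list_lru_one *lru,
					       spinlock_t *lock, void *arg)
	{
		/* detach the workspace onto the dispose list passed in arg */
		list_lru_isolate_move(lru, item, arg);
		return LRU_REMOVED;
	}

	static unsigned long zstd_ws_scan(struct shrinker *s,
					  struct shrink_control *sc)
	{
		LIST_HEAD(dispose);
		unsigned long freed;

		freed = list_lru_shrink_walk(&zstd_ws_lru, sc,
					     zstd_ws_isolate, &dispose);
		while (!list_empty(&dispose)) {
			struct list_head *ws = dispose.next;

			list_del(ws);
			zstd_free_workspace(ws);
		}
		return freed;
	}

	/* registered once with register_shrinker() at init time */
	static struct shrinker zstd_ws_shrinker = {
		.count_objects	= zstd_ws_count,
		.scan_objects	= zstd_ws_scan,
		.seeks		= DEFAULT_SEEKS,
	};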

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 08/11] btrfs: plumb level through the compression interface
  2019-01-28 21:24 ` [PATCH 08/11] btrfs: plumb level through the compression interface Dennis Zhou
  2019-01-29  8:08   ` Nikolay Borisov
@ 2019-01-29 18:17   ` Josef Bacik
  1 sibling, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:17 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:34PM -0500, Dennis Zhou wrote:
> Zlib compression supports multiple levels, but doesn't require changes
> in how a workspace itself is created and managed. Zstd introduces a
> different memory requirement such that higher levels of compression
> require more memory. This requires changes in how the alloc()/get()
> methods work for zstd. This patch plumbs the compression level through
> the interface as a parameter in preparation for zstd compression
> levels. This gives the compression types the opportunity to
> create/manage workspaces based on the compression level.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces
  2019-01-29 18:17   ` Josef Bacik
@ 2019-01-29 18:44     ` Josef Bacik
  0 siblings, 0 replies; 46+ messages in thread
From: Josef Bacik @ 2019-01-29 18:44 UTC (permalink / raw)
  To: Josef Bacik
  Cc: Dennis Zhou, David Sterba, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Tue, Jan 29, 2019 at 01:17:17PM -0500, Josef Bacik wrote:
> On Mon, Jan 28, 2019 at 04:24:33PM -0500, Dennis Zhou wrote:
> > The previous patch added generic helpers for get_workspace() and
> > put_workspace(). Now, we can migrate ownership of the workspace_manager
> > to be in the compression type code as the compression code itself
> > doesn't care beyond being able to get a workspace. The init/cleanup
> > and get/put methods are abstracted so each compression algorithm can
> > decide how they want to manage their workspaces.
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>
> 
> We're doing this to have special handling for extra workspaces to be freed at
> some point in the future if they are unused.  This is fine by me, but why not
> just add a shrinker and let it be handled by memory pressure?  Then we avoid all
> this abstraction and allow for zstd to have its own shrinker for its extra
> workspaces.  You can even use the list_lru stuff to make it super simple, then
> you don't have to worry about all the infrastructure.  Thanks,
> 

Never mind, I missed that you also changed the get side to look up the
workspace for the compression level instead of cycling through the
idle_ws list.  In that
case this is fine by me.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 00/11] btrfs: add zstd compression level support
  2019-01-29 17:18 ` [PATCH 00/11] " David Sterba
@ 2019-01-29 21:12   ` Nick Terrell
  2019-01-30 17:40   ` Dennis Zhou
  1 sibling, 0 replies; 46+ messages in thread
From: Nick Terrell @ 2019-01-29 21:12 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Kernel Team, linux-btrfs, linux-kernel



> On Jan 29, 2019, at 9:18 AM, David Sterba <dsterba@suse.cz> wrote:
> 
> On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
>> As mentioned above, a requirement that differs zstd from zlib is that
>> higher levels of compression require more memory. To manage this, each
>> compression level has its own queue of workspaces. A global LRU is used
>> to help with reclaim. To guarantee forward progress, a max level
>> workspace is preallocated and hidden from the LRU.
> 
> Here I'd like to bring up what was mentioned in the previous iteration,
> the workspace sizes.
> 
> Level   Compression Memory
> 1       0.8 MB
> 2       1.0 MB
> 3       1.3 MB
> 4       0.9 MB
> 5       1.4 MB
> 6       1.5 MB
> 7       1.4 MB
> 8       1.8 MB
> 9       1.8 MB
> 10      1.8 MB
> 11      1.8 MB
> 12      1.8 MB
> 13      2.3 MB
> 14      2.6 MB
> 15      2.6 MB
> 
> and decompression needs the memory of level 1. The sizes can be grouped
> together into, say, 3 sizes; I'm not sure that we'd really need 15
> distinct workspaces. The reclaim mechanism helps, but I'd rather keep a
> smaller number of workspaces that covers average use.
> 
> Default level is 3, that's 1.3 MiB, that also covers level 1, 2 and 4.
> For 5 to 12 it's 1.8 and the rest is 2.6 MiB.
> 
>> btrfs filesystem 10 times and then read back after dropping the caches.
>> The btrfs filesystem was on an SSD.
>> 
>> Level   Ratio   Compression (MB/s)  Decompression (MB/s)
>> 1       2.658        438.47                910.51
>> 2       2.744        364.86                886.55
>> 3       2.801        336.33                828.41
>> 4       2.858        286.71                886.55
>> 5       2.916        212.77                556.84
>> 6       2.363        119.82                990.85
>> 7       3.000        154.06                849.30
>> 8       3.011        159.54                875.03
>> 9       3.025        100.51                940.15
>> 10      3.033        118.97                616.26
>> 11      3.036         94.19                802.11
>> 12      3.037         73.45                931.49
>> 13      3.041         55.17                835.26
>> 14      3.087         44.70                716.78
>> 15      3.126         37.30                878.84
> 
> From my casual user's perspective, I'd use the level 1 for speed, 7 for
> better ratio and 15 for the best compression. Anything else does not
> look good, though the results would vary based on the data set. I
> assume that the Silesia corpus serves as a good approximation of the
> worst case average.
>
> The levels 7-14 strike a particularly obvious pattern: same ratio but the
> speed gets worse with each level. Taking the default level into account,
> (my) recommended levels would be 1, 3, 7, 15.

Silesia is used because it is a standard corpus, and I'd call it about average,
but there is a lot of variance and extreme edge case data. The intermediate
strategies will change in effectiveness on different types of data. For example,
the lower levels are generally more effective on text, and you want slightly
higher levels for non-text data, because they can find shorter matches.

Upstream zstd also shifts its levels around, and the memory usage of
each level, from time to time, and I am going to update zstd in the
kernel this next year, since we are slowing down development. The
shifts will be small though.

It could make sense to map the levels into size classes, since that could
reduce memory spikes, at the cost of higher steady-state memory usage.
I'm not familiar with the machinery used in these patches, so I can't actually
say much. I would probably use levels 1, 3, 7 (after it is made monotonic),
12, and 15. You might skip 7, but leave 12.

> I went through the patches, looks mostly ok, I don't like the
> indirections but at the moment it's an implementation detail as I'd like
> to agree on the overall approach first.
> 
> We might need a few revisions or cleanup rounds to converge to an
> efficient solution, the advantage here is that it's all in-memory and
> without compatibility concerns once the level support for zstd is in and
> works.
> 
> For that reason, I'm not opposed to the current version of the patchset.
> Given the time in development schedule, it's really close to code
> freeze, but the functionality has a narrow scope so I'm tentatively
> counting with it for 5.1.


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
  2019-01-29  7:26   ` Nikolay Borisov
  2019-01-29 17:57   ` Josef Bacik
@ 2019-01-29 22:30   ` Omar Sandoval
  2019-01-31 16:00   ` David Sterba
  3 siblings, 0 replies; 46+ messages in thread
From: Omar Sandoval @ 2019-01-29 22:30 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Nick Terrell,
	kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:27PM -0500, Dennis Zhou wrote:
> It is very easy to miss places that rely on a certain bitshifting for
> deciphering the type_level overloading. Make macros handle this instead.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>
> ---
>  fs/btrfs/compression.c | 2 +-
>  fs/btrfs/compression.h | 3 +++
>  fs/btrfs/zlib.c        | 2 +-
>  3 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 548057630b69..586f95ac0aea 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -1036,9 +1036,9 @@ int btrfs_compress_pages(unsigned int type_level, struct address_space *mapping,
>  			 unsigned long *total_in,
>  			 unsigned long *total_out)
>  {
> +	int type = BTRFS_COMPRESS_TYPE(type_level);
>  	struct list_head *workspace;
>  	int ret;
> -	int type = type_level & 0xF;
>  
>  	workspace = find_workspace(type);
>  
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index ddda9b80bf20..69a9197dadc3 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -25,6 +25,9 @@
>  
>  #define	BTRFS_ZLIB_DEFAULT_LEVEL		3
>  
> +#define BTRFS_COMPRESS_TYPE(type_level)		(type_level & 0xF)
> +#define BTRFS_COMPRESS_LEVEL(type_level)	((type_level & 0xF0) >> 4)

Nit: these can be inline functions rather than macros.

>  struct compressed_bio {
>  	/* number of bios pending for this compressed extent */
>  	refcount_t pending_bios;
> diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
> index 970ff3e35bb3..1480b3eee306 100644
> --- a/fs/btrfs/zlib.c
> +++ b/fs/btrfs/zlib.c
> @@ -393,7 +393,7 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
>  static void zlib_set_level(struct list_head *ws, unsigned int type)
>  {
>  	struct workspace *workspace = list_entry(ws, struct workspace, list);
> -	unsigned level = (type & 0xF0) >> 4;
> +	unsigned int level = BTRFS_COMPRESS_LEVEL(type);
>  
>  	if (level > 9)
>  		level = 9;
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 03/11] btrfs: manage heuristic workspace as index 0
  2019-01-29  7:53   ` Nikolay Borisov
@ 2019-01-29 22:43     ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-29 22:43 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Jan 29, 2019 at 09:53:33AM +0200, Nikolay Borisov wrote:
> 
> 
> On 28.01.19 at 23:24, Dennis Zhou wrote:
> > While the heuristic workspaces aren't really compression workspaces,
> > they use the same interface for managing them. So rather than branching,
> > let's just handle them once again as the index 0 compression type.
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>
> 
> Reviewed-by: Nikolay Borisov <nborisov@suse.com> albeit one minor nit
> below.
> > ---
> >  fs/btrfs/compression.c  | 107 +++++++++++-----------------------------
> >  fs/btrfs/compression.h  |   3 +-
> >  fs/btrfs/ioctl.c        |   2 +-
> >  fs/btrfs/tree-checker.c |   4 +-
> >  4 files changed, 33 insertions(+), 83 deletions(-)
> > 
> > diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> > index aced261984e2..bda7e8d2cbc7 100644
> > --- a/fs/btrfs/compression.c
> > +++ b/fs/btrfs/compression.c
> > @@ -37,6 +37,8 @@ const char* btrfs_compress_type2str(enum btrfs_compression_type type)
> >  	case BTRFS_COMPRESS_ZSTD:
> >  	case BTRFS_COMPRESS_NONE:
> >  		return btrfs_compress_types[type];
> > +	default:
> > +		return NULL;
> 
> nit: With this change...
> 
> >  	}
> >  
> >  	return NULL;
> 
> This becomes redundant. I doubt the compiler will issue a warning since
> it should be clever enough to figure out that we never fall out of the
> switch() construct.
> 

Ah yes. I've removed it for v2.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 11/11] btrfs: add zstd compression level support
  2019-01-29  7:25   ` Nikolay Borisov
@ 2019-01-29 22:50     ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-29 22:50 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel,
	Omar Sandoval

On Tue, Jan 29, 2019 at 09:25:54AM +0200, Nikolay Borisov wrote:
> 
> 
> On 28.01.19 at 23:24, Dennis Zhou wrote:
> > Zstd compression requires different amounts of memory for each level of
> > compression. The prior patches implemented indirection to allow for each
> > compression type to manage their workspaces independently. This patch
> > uses this indirection to implement compression level support for zstd.
> > 
> > As mentioned above, a requirement that differs zstd from zlib is that
> > higher levels of compression require more memory. To manage this, each
> > compression level has its own queue of workspaces. A global LRU is used
> > to help with reclaim. To guarantee forward progress, a max level
> > workspace is preallocated and hidden from the LRU.
> > 
> > When getting a workspace, it uses a bitmap to identify the levels that
> > are populated and scans up. If it finds a workspace that is greater than
> > it, it uses it, but does not update the last_used time and the
> > corresponding place in the LRU. This provides a mechanism to decrease
> > memory utilization as we only keep around workspaces that are sized
> > appropriately for the in use compression levels.
> > 
> > By knowing which compression levels have available workspaces, we can
> > recycle rather than always create new workspaces as well as take
> > advantage of the preallocated max level for forward progress. If we hit
> > memory pressure, we sleep on the max level workspace. We continue to
> > rescan in case we can use a smaller workspace, but eventually should be
> > able to obtain the max level workspace or allocate one again should
> > memory pressure subside. The memory requirement for decompression is the
> > same as level 1, and therefore can use any of available workspace.
> > 
> > The number of workspaces is bound by an upper limit of the workqueue's
> > limit which currently is 2 (percpu limit). Second, a reclaim timer is
> > used to free inactive/improperly sized workspaces. The reclaim timer is
> > set to 67s to avoid colliding with transaction commit (every 30s) and
> > attempts to reclaim any unused workspace older than 45s.
> > 
> > Repeating the experiment from v2 [1], the Silesia corpus was copied to a
> > btrfs filesystem 10 times and then read back after dropping the caches.
> > The btrfs filesystem was on an SSD.
> > 
> > Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> > 1       2.658        438.47                910.51
> > 2       2.744        364.86                886.55
> > 3       2.801        336.33                828.41
> > 4       2.858        286.71                886.55
> > 5       2.916        212.77                556.84
> > 6       2.363        119.82                990.85
> > 7       3.000        154.06                849.30
> > 8       3.011        159.54                875.03
> > 9       3.025        100.51                940.15
> > 10      3.033        118.97                616.26
> > 11      3.036         94.19                802.11
> > 12      3.037         73.45                931.49
> > 13      3.041         55.17                835.26
> > 14      3.087         44.70                716.78
> > 15      3.126         37.30                878.84
> > 
> > [1] https://lore.kernel.org/linux-btrfs/20181031181108.289340-1-terrelln@fb.com/
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>
> > Cc: Nick Terrell <terrelln@fb.com>
> > Cc: Omar Sandoval <osandov@fb.com>
> > ---
> >  fs/btrfs/super.c |   6 +-
> >  fs/btrfs/zstd.c  | 229 +++++++++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 226 insertions(+), 9 deletions(-)
> > 
> > diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
> > index b28dff207383..0ecc513cb56c 100644
> > --- a/fs/btrfs/super.c
> > +++ b/fs/btrfs/super.c
> > @@ -544,9 +544,13 @@ int btrfs_parse_options(struct btrfs_fs_info *info, char *options,
> >  				btrfs_clear_opt(info->mount_opt, NODATASUM);
> >  				btrfs_set_fs_incompat(info, COMPRESS_LZO);
> >  				no_compress = 0;
> > -			} else if (strcmp(args[0].from, "zstd") == 0) {
> > +			} else if (strncmp(args[0].from, "zstd", 4) == 0) {
> >  				compress_type = "zstd";
> >  				info->compress_type = BTRFS_COMPRESS_ZSTD;
> > +				info->compress_level =
> > +					btrfs_compress_str2level(
> > +							 BTRFS_COMPRESS_ZSTD,
> > +							 args[0].from + 4);
> >  				btrfs_set_opt(info->mount_opt, COMPRESS);
> >  				btrfs_clear_opt(info->mount_opt, NODATACOW);
> >  				btrfs_clear_opt(info->mount_opt, NODATASUM);
> > diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
> > index a951d4fe77f7..ce9b466c197f 100644
> > --- a/fs/btrfs/zstd.c
> > +++ b/fs/btrfs/zstd.c
> > @@ -6,20 +6,27 @@
> >   */
> >  
> >  #include <linux/bio.h>
> > +#include <linux/bitmap.h>
> >  #include <linux/err.h>
> >  #include <linux/init.h>
> >  #include <linux/kernel.h>
> >  #include <linux/mm.h>
> > +#include <linux/sched/mm.h>
> >  #include <linux/pagemap.h>
> >  #include <linux/refcount.h>
> >  #include <linux/sched.h>
> >  #include <linux/slab.h>
> >  #include <linux/zstd.h>
> >  #include "compression.h"
> > +#include "ctree.h"
> >  
> >  #define ZSTD_BTRFS_MAX_WINDOWLOG 17
> >  #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
> >  #define ZSTD_BTRFS_DEFAULT_LEVEL 3
> > +#define ZSTD_BTRFS_MAX_LEVEL 15
> > +#define ZSTD_BTRFS_RECLAIM_NS (45 * NSEC_PER_SEC)
> > +/* 67s to avoid clashing with transaction commit (every 30s) */
> > +#define ZSTD_BTRFS_RECLAIM_JIFFIES (67 * HZ)
> 
> This is valid provided that transaction commit time is not overridden by
> Opt_commit_interval. If it is such a problem to not clash with trans
> commit maybe this should be calculated upon mount?
> 

Because the workspace managers are initialized once and shared,
different mounts can have different Opt_commit_interval settings. I
don't think it's particularly problematic to clash with trans commit;
it's just nice to have the offset so that in the majority of cases we
don't run into this.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces
  2019-01-29  8:22   ` Nikolay Borisov
@ 2019-01-29 23:35     ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-29 23:35 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

Hi Nikolay,

On Tue, Jan 29, 2019 at 10:22:34AM +0200, Nikolay Borisov wrote:
> 
> 
> On 28.01.19 at 23:24, Dennis Zhou wrote:
> > The previous patch added generic helpers for get_workspace() and
> > put_workspace(). Now, we can migrate ownership of the workspace_manager
> > to be in the compression type code as the compression code itself
> > doesn't care beyond being able to get a workspace. The init/cleanup
> > and get/put methods are abstracted so each compression algorithm can
> > decide how they want to manage their workspaces.
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>

Thanks for reviewing my patches so promptly!

> 
> TBH I can't really see the value in this patch. IMO it doesn't make the
> code more readable, on the contrary, you create algorithm-specific
> wrappers over the generic function, where the sole specialization is in
> the arguments passed to the generic functions. You introduce 4 more
> function pointers and this affects performance negatively (albeit can't
> say to what extent) due to spectre mitigations (retpolines).
> 

I'll try and address the need for function pointers below, but of the 4,
only 2 are called often, as 2 are for init and cleanup.

To measure the cost of function pointers, I tested with retpolines on,
calling zlib both via function pointers and hardcoded, using the
Silesia corpus test from the cover letter. It didn't show up in perf
and the runtimes were
very close to each other with the function pointers actually performing
marginally better. I imagine the actual compression is heavy enough that
it greatly outweighs the cost of a function pointer.

> I also read the follow up patches with the hopes of seeing how the code
> becomes cleaner to no avail. At this point I'm really not in favor of
> this particular patch.

So I guess to recap the whole idea: the original implementation forced
everyone to use workspaces_list, a generic implementation that only
allows workspaces to be managed via an LRU list.

Zstd compression levels introduce the requirement of being able to
distinguish workspaces. This is not possible with the original
workspaces_list implementation.  If I modify it, it becomes rather
cumbersome to add variables to the generic workspaces_list that will
only be used by a particular compression type. It also fractures the
code, having find_workspace() and free_workspace() deal with diverging
code paths based on the compression type.

I think a big strength of this series is that we no longer rely on
paired arrays and passing the compression type around. Once we are
working within a compression type, it is not possible to accidentally
cross boundaries due to an off-by-one error. Each compression type is
siloed, with workspace management left up to the compression type code
rather than the core compression code.

This series migrates to a two-level API, with get_workspace() and
put_workspace() being the interface through which the core compression
code interacts with each compression type. The core compression code
does not and should not care beyond the fact that it is getting a
workspace. So, get_workspace() and put_workspace() deal with managing
the workspaces themselves. Underneath these are alloc_workspace() and
free_workspace(), which deal only with creating and deleting a
workspace.  I think the division of labor here is neat and allows for
code sharing.

Prior to this, no one really needed to do anything smarter with
workspace management and therefore everyone could share a single
implementation. Now, zstd has to do something a little smarter based on
the level requested and has the ability to use higher-level workspaces
should they be available. The division of the API lets zstd do just
that. It hides all of the manager complexity away from the core
compression code.
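
Concretely, the split looks roughly like this (a sketch based on the
hunks in this series; the exact member list may differ, and the
annotations are mine):

	const struct btrfs_compress_op btrfs_zstd_compress = {
		/* policy: how workspaces are handed out and reclaimed */
		.init_workspace_manager		= zstd_init_workspace_manager,
		.cleanup_workspace_manager	= zstd_cleanup_workspace_manager,
		.get_workspace			= zstd_get_workspace,
		.put_workspace			= zstd_put_workspace,
		/* mechanism: creating and destroying a single workspace */
		.alloc_workspace		= zstd_alloc_workspace,
		.free_workspace			= zstd_free_workspace,
		/* compress_pages, decompress_bio, etc. omitted here */
	};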

I apologize if I'm only restating what I've already said. If the
reasoning is still unclear, I can try and be more specific to those
unaddressed areas.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 00/11] btrfs: add zstd compression level support
  2019-01-29 17:18 ` [PATCH 00/11] " David Sterba
  2019-01-29 21:12   ` Nick Terrell
@ 2019-01-30 17:40   ` Dennis Zhou
  2019-01-31 14:04     ` David Sterba
  1 sibling, 1 reply; 46+ messages in thread
From: Dennis Zhou @ 2019-01-30 17:40 UTC (permalink / raw)
  To: dsterba, Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, kernel-team, linux-btrfs,
	linux-kernel

Hi David,

On Tue, Jan 29, 2019 at 06:18:30PM +0100, David Sterba wrote:
> On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
> > As mentioned above, a requirement that differs zstd from zlib is that
> > higher levels of compression require more memory. To manage this, each
> > compression level has its own queue of workspaces. A global LRU is used
> > to help with reclaim. To guarantee forward progress, a max level
> > workspace is preallocated and hidden from the LRU.
> 
> Here I'd like to bring up what was mentioned in the previous iteration,
> the workspace sizes.
> 
> Level   Compression Memory
> 1       0.8 MB
> 2       1.0 MB
> 3       1.3 MB
> 4       0.9 MB
> 5       1.4 MB
> 6       1.5 MB
> 7       1.4 MB
> 8       1.8 MB
> 9       1.8 MB
> 10      1.8 MB
> 11      1.8 MB
> 12      1.8 MB
> 13      2.3 MB
> 14      2.6 MB
> 15      2.6 MB
> 
> and decompression needs the memory of level 1. The sizes can be grouped
> together into, say, 3 sizes; I'm not sure that we'd really need 15
> distinct workspaces. The reclaim mechanism helps, but I'd rather keep a
> smaller number of workspaces that covers average use.
> 
> Default level is 3, that's 1.3 MiB, that also covers level 1, 2 and 4.
> For 5 to 12 it's 1.8 and the rest is 2.6 MiB.
> 

I realize the current implementation doesn't have a monotonic memory
requirement guarantee. I've added that, and below are the updated
memory requirements per level. I've updated the commit to include this
too.

Level     Memory (KB)
1            780 
2           1004
3           1260
4           1260
5           1388
6           1516
7           1516
8           1772
9           1772
10          1772
11          1772
12          1772
13          2284
14          2547
15          2547
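
A sketch of how the monotonic bound can be computed using the in-kernel
zstd API (the helper and array names here are illustrative, not lifted
from the patch):

	static size_t zstd_ws_mem_size[ZSTD_BTRFS_MAX_LEVEL + 1];

	static void zstd_calc_ws_mem_sizes(void)
	{
		size_t max_size = 0;
		unsigned int level;

		for (level = 1; level <= ZSTD_BTRFS_MAX_LEVEL; level++) {
			ZSTD_parameters params =
				zstd_get_btrfs_parameters(level,
							  ZSTD_BTRFS_MAX_INPUT);
			size_t size = ZSTD_CCtxWorkspaceBound(params.cParams);

			/* never let a higher level require less memory */
			max_size = max_t(size_t, size, max_size);
			zstd_ws_mem_size[level] = max_size;
		}
	}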

> > btrfs filesystem 10 times and then read back after dropping the caches.
> > The btrfs filesystem was on an SSD.
> > 
> > Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> > 1       2.658        438.47                910.51
> > 2       2.744        364.86                886.55
> > 3       2.801        336.33                828.41
> > 4       2.858        286.71                886.55
> > 5       2.916        212.77                556.84
> > 6       2.363        119.82                990.85
> > 7       3.000        154.06                849.30
> > 8       3.011        159.54                875.03
> > 9       3.025        100.51                940.15
> > 10      3.033        118.97                616.26
> > 11      3.036         94.19                802.11
> > 12      3.037         73.45                931.49
> > 13      3.041         55.17                835.26
> > 14      3.087         44.70                716.78
> > 15      3.126         37.30                878.84
> 
> From my casual user's perspective, I'd use the level 1 for speed, 7 for
> better ratio and 15 for the best compression. Anything else does not
> look good, though the results would vary based on the data set. I
> assume that the Silesia corpus serves as a good approximation of the
> worst case average.
> 
> The levels 7-14 strike a particularly obvious pattern: same ratio but the
> speed gets worse with each level. Taking the default level into account,
> (my) recommended levels would be 1, 3, 7, 15.
> 

I do see why we want to limit the number of levels as the memory
requirements do kind of bucket themselves. But this means that when zstd
gets updated, we'd have to reevaluate the compression levels btrfs
supports. I'm not sure it's a great idea to have that dependency.
I imagine we could offer some level of guidance, but it really would be
up to the user to figure out what works best for them.

The reclaim mechanism only keeps workspaces around if they are being
used by the appropriate level. So, the memory overhead is actively used
memory and if not, it is reclaimed after at most ~2 minutes later. I
also scan up before allocating a workspace, so that should help limit
the number of workspaces in circulation.
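
A rough sketch of the scan-up (field and helper names follow my patch
but are simplified here, so treat it as illustrative): active_map has
one bit per level, set while that level's idle list is non-empty. A
request for @level can be served by any populated level >= @level
because the memory requirements are now monotonic.

	static struct list_head *zstd_find_workspace(unsigned int level)
	{
		struct list_head *ws;
		struct workspace *workspace;
		unsigned int i = level - 1;

		spin_lock(&wsm.lock);
		for_each_set_bit_from(i, &wsm.active_map, ZSTD_BTRFS_MAX_LEVEL) {
			if (!list_empty(&wsm.idle_ws[i])) {
				ws = wsm.idle_ws[i].next;
				workspace = list_entry(ws, struct workspace, list);
				list_del_init(ws);
				/*
				 * Only an exact-level hit leaves the lru; a
				 * workspace borrowed from a higher level keeps
				 * its old position and last_used time.
				 */
				if (workspace->level == level)
					list_del(&workspace->lru_list);
				if (list_empty(&wsm.idle_ws[i]))
					clear_bit(i, &wsm.active_map);
				spin_unlock(&wsm.lock);
				return ws;
			}
		}
		spin_unlock(&wsm.lock);
		return NULL;
	}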

> I went through the patches, looks mostly ok, I don't like the
> indirections but at the moment it's an implementation detail as I'd like
> to agree on the overall approach first.
> 
> We might need a few revisions or cleanup rounds to converge to an
> efficient solution, the advantage here is that it's all in-memory and
> without compatibility concerns once the level support for zstd is in and
> works.
> 
> For that reason, I'm not opposed to the current version of the patchset.
> Given the time in development schedule, it's really close to code
> freeze, but the functionality has a narrow scope so I'm tentatively
> counting with it for 5.1.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 09/11] btrfs: change set_level() to bound the level passed in
  2019-01-29  8:14   ` Nikolay Borisov
@ 2019-01-30 22:06     ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-30 22:06 UTC (permalink / raw)
  To: Nikolay Borisov
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, kernel-team, linux-btrfs,
	linux-kernel

On Tue, Jan 29, 2019 at 10:14:18AM +0200, Nikolay Borisov wrote:
> 
> 
> On 28.01.19 г. 23:24 ч., Dennis Zhou wrote:
> > Currently, the only user of set_level() is zlib which sets an internal
> > workspace parameter. As level is now plumbed into get_workspace(), this
> > can be handled there rather than separately.
> > 
> > This repurposes set_level() to bound the level passed in so it can be
> > used when setting the mounts compression level and as well as verifying
> > the level before getting a workspace. The other benefit is this divides
> > the meaning of compress(0) and get_workspace(0). The former means we
> > want to use the default compression level of the compression type. The
> > latter means we can use any workspace available.
> > 
> > Signed-off-by: Dennis Zhou <dennis@kernel.org>
> > ---
> >  }
> > diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> > index e3627139bc5c..d607be40aa0e 100644
> > --- a/fs/btrfs/compression.h
> > +++ b/fs/btrfs/compression.h
> > @@ -90,7 +90,7 @@ blk_status_t btrfs_submit_compressed_write(struct inode *inode, u64 start,
> >  blk_status_t btrfs_submit_compressed_read(struct inode *inode, struct bio *bio,
> >  				 int mirror_num, unsigned long bio_flags);
> >  
> > -unsigned btrfs_compress_str2level(const char *str);
> > +unsigned int btrfs_compress_str2level(unsigned int type, const char *str);
> >  
> >  enum btrfs_compression_type {
> >  	BTRFS_COMPRESS_NONE  = 0,
> > @@ -149,7 +149,7 @@ struct btrfs_compress_op {
> >  			  unsigned long start_byte,
> >  			  size_t srclen, size_t destlen);
> >  
> > -	void (*set_level)(struct list_head *ws, unsigned int type);
> > +	unsigned int (*set_level)(unsigned int level);
> 
> It might be good to document the return value since this is an
> interface. AFAICS implementations are required to return the actual
> level set irrespective of what level was passed in, no?
> 
> >  };

Yeah that makes sense. I've added a comment about it in
btrfs_compress_op.

> >  
> >  static void zlib_put_workspace(struct list_head *ws)
> > @@ -413,15 +418,14 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
> >  	return ret;
> >  }
> >  
> > -static void zlib_set_level(struct list_head *ws, unsigned int type)
> > +static unsigned int zlib_set_level(unsigned int level)
> >  {
> > -	struct workspace *workspace = list_entry(ws, struct workspace, list);
> > -	unsigned int level = BTRFS_COMPRESS_LEVEL(type);
> > -
> > -	if (level > 9)
> > +	if (!level)
> > +		level = BTRFS_ZLIB_DEFAULT_LEVEL;
> > +	else if (level > 9)
> >  		level = 9;
> 
> nit:  This makes it a bit more obvious (IMO) that you are essentially
> doing max:
> 
> if (!level)
>    level = BTRFS_ZLIB_DEFAULT_LEVEL;
> level = max(level, 9);
> 

I agree. I've updated it in both places. It should be min instead of max
though.
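
Concretely, the zlib bound then ends up roughly as:

	static unsigned int zlib_set_level(unsigned int level)
	{
		if (!level)
			return BTRFS_ZLIB_DEFAULT_LEVEL;

		return min_t(unsigned int, level, 9);
	}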

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 00/11] btrfs: add zstd compression level support
  2019-01-30 17:40   ` Dennis Zhou
@ 2019-01-31 14:04     ` David Sterba
  2019-01-31 15:56       ` Dennis Zhou
  0 siblings, 1 reply; 46+ messages in thread
From: David Sterba @ 2019-01-31 14:04 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: dsterba, David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Wed, Jan 30, 2019 at 12:40:59PM -0500, Dennis Zhou wrote:
> Hi David,
> 
> On Tue, Jan 29, 2019 at 06:18:30PM +0100, David Sterba wrote:
> > On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
> > > As mentioned above, a requirement that differs zstd from zlib is that
> > > higher levels of compression require more memory. To manage this, each
> > > compression level has its own queue of workspaces. A global LRU is used
> > > to help with reclaim. To guarantee forward progress, a max level
> > > workspace is preallocated and hidden from the LRU.
> > 
> > Here I'd like to bring up what was mentioned in previous iteration, the
> > workspace sizes.
> > 
> > Level   Compression Memory
> > 1       0.8 MB
> > 2       1.0 MB
> > 3       1.3 MB
> > 4       0.9 MB
> > 5       1.4 MB
> > 6       1.5 MB
> > 7       1.4 MB
> > 8       1.8 MB
> > 9       1.8 MB
> > 10      1.8 MB
> > 11      1.8 MB
> > 12      1.8 MB
> > 13      2.3 MB
> > 14      2.6 MB
> > 15      2.6 MB
> > 
> > and decompression needs memory of level 1. The sizes can be grouped
> > together to say 3 sizes, I'm not sure that we'd really need 15 distinct
> > workspaces. The reclaim mechanism helps, but I'd rather keep a smaller
> > number of workspaces that covers average use.
> > 
> > Default level is 3, that's 1.3 MiB, that also covers level 1, 2 and 4.
> > For 5 to 12 it's 1.8 and the rest is 2.6 MiB.
> > 
> 
> I realize the current implementation doesn't have a monotonic memory
> requirement guarantee. I've added that, and below are the updated memory
> requirements per level. I've updated the commit to include this too.
> 
> Level     Memory (KB)
> 1            780 
> 2           1004
> 3           1260
> 4           1260
> 5           1388
> 6           1516
> 7           1516
> 8           1772
> 9           1772
> 10          1772
> 11          1772
> 12          1772
> 13          2284
> 14          2547
> 15          2547
> 
> > > btrfs filesystem 10 times and then read back after dropping the caches.
> > > The btrfs filesystem was on an SSD.
> > > 
> > > Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> > > 1       2.658        438.47                910.51
> > > 2       2.744        364.86                886.55
> > > 3       2.801        336.33                828.41
> > > 4       2.858        286.71                886.55
> > > 5       2.916        212.77                556.84
> > > 6       2.363        119.82                990.85
> > > 7       3.000        154.06                849.30
> > > 8       3.011        159.54                875.03
> > > 9       3.025        100.51                940.15
> > > 10      3.033        118.97                616.26
> > > 11      3.036         94.19                802.11
> > > 12      3.037         73.45                931.49
> > > 13      3.041         55.17                835.26
> > > 14      3.087         44.70                716.78
> > > 15      3.126         37.30                878.84
> > 
> > From my casual user's perspective, I'd use the level 1 for speed, 7 for
> > better ratio and 15 for the best compression. Anything else does not
> > look good, though the results would vary based on the data set. I
> > assume that the silesia corpus serves as a good approximation of the
> > worst case average.
> > 
> > The levels 7-14 strike particularly obvious pattern: same ratio but the
> > speed gets worse with each level. Taking the default level into account,
> > (my) recommended levels would be 1, 3, 7, 15.
> > 
> 
> I do see why we want to limit the number of levels, as the memory
> requirements do kind of bucket themselves. But this means that when zstd
> gets updated, we'd have to reevaluate the compression levels btrfs
> supports. I'm not sure it's a great idea to have that dependency.
> I imagine we could offer some level of guidance, but it really would be
> up to the user to figure out what works best for them.

If it was not clear, I did not mean to have only 4 levels; keep all 15,
the same as there are 9 for zlib. The guidelines would be desirable, and
I don't want to make the decision for the user about which level to
pick. So we don't disagree.

> The reclaim mechanism only keeps workspaces around if they are being
> used by the appropriate level. So the memory overhead is actively used
> memory, and anything unused is reclaimed after at most ~2 minutes. I
> also scan up before allocating a workspace, which should help limit
> the number of workspaces in circulation.

We'd need to observe that in practice before doing refinements; simpler
logic is better for the start.

There's some penalty caused by the allocation if there are no workspaces
at all, as the amount of memory is quite large for the kernel.
This could also stress the memory subsystem because the memory has to be
either contiguous or vmalloced. As the memory is released soon after,
all the work might need to be done again and again. So more than one
preallocated workspace could be good, but the number of levels does not
make it easy to choose which one.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 00/11] btrfs: add zstd compression level support
  2019-01-31 14:04     ` David Sterba
@ 2019-01-31 15:56       ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-31 15:56 UTC (permalink / raw)
  To: dsterba, Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, kernel-team, linux-btrfs,
	linux-kernel

On Thu, Jan 31, 2019 at 03:04:36PM +0100, David Sterba wrote:
> On Wed, Jan 30, 2019 at 12:40:59PM -0500, Dennis Zhou wrote:
> > Hi David,
> > 
> > On Tue, Jan 29, 2019 at 06:18:30PM +0100, David Sterba wrote:
> > > On Mon, Jan 28, 2019 at 04:24:26PM -0500, Dennis Zhou wrote:
> > > > As mentioned above, a requirement that differs zstd from zlib is that
> > > > higher levels of compression require more memory. To manage this, each
> > > > compression level has its own queue of workspaces. A global LRU is used
> > > > to help with reclaim. To guarantee forward progress, a max level
> > > > workspace is preallocated and hidden from the LRU.
> > > 
> > > Here I'd like to bring up what was mentioned in previous iteration, the
> > > workspace sizes.
> > > 
> > > Level   Compression Memory
> > > 1       0.8 MB
> > > 2       1.0 MB
> > > 3       1.3 MB
> > > 4       0.9 MB
> > > 5       1.4 MB
> > > 6       1.5 MB
> > > 7       1.4 MB
> > > 8       1.8 MB
> > > 9       1.8 MB
> > > 10      1.8 MB
> > > 11      1.8 MB
> > > 12      1.8 MB
> > > 13      2.3 MB
> > > 14      2.6 MB
> > > 15      2.6 MB
> > > 
> > > and decompression needs memory of level 1. The sizes can be grouped
> > > together to say 3 sizes, I'm not sure that we'd really need 15 distinct
> > > workspaces. The reclaim mechanism helps, but I'd rather keep a smaller
> > > number of workspaces that covers average use.
> > > 
> > > Default level is 3, that's 1.3 MiB, that also covers level 1, 2 and 4.
> > > For 5 to 12 it's 1.8 and the rest is 2.6 MiB.
> > > 
> > 
> > I realize the current implementation doesn't have a monotonic memory
> > requirement guarantee. I've added that, and below are the updated memory
> > requirements per level. I've updated the commit to include this too.
> > 
> > Level     Memory (KB)
> > 1            780 
> > 2           1004
> > 3           1260
> > 4           1260
> > 5           1388
> > 6           1516
> > 7           1516
> > 8           1772
> > 9           1772
> > 10          1772
> > 11          1772
> > 12          1772
> > 13          2284
> > 14          2547
> > 15          2547
> > 
> > > > btrfs filesystem 10 times and then read back after dropping the caches.
> > > > The btrfs filesystem was on an SSD.
> > > > 
> > > > Level   Ratio   Compression (MB/s)  Decompression (MB/s)
> > > > 1       2.658        438.47                910.51
> > > > 2       2.744        364.86                886.55
> > > > 3       2.801        336.33                828.41
> > > > 4       2.858        286.71                886.55
> > > > 5       2.916        212.77                556.84
> > > > 6       2.363        119.82                990.85
> > > > 7       3.000        154.06                849.30
> > > > 8       3.011        159.54                875.03
> > > > 9       3.025        100.51                940.15
> > > > 10      3.033        118.97                616.26
> > > > 11      3.036         94.19                802.11
> > > > 12      3.037         73.45                931.49
> > > > 13      3.041         55.17                835.26
> > > > 14      3.087         44.70                716.78
> > > > 15      3.126         37.30                878.84
> > > 
> > > From my casual user's perspective, I'd use the level 1 for speed, 7 for
> > > better ratio and 15 for the best compression. Anything else does not
> > > look good, though the results would vary based on the data set. I
> > > assume that the silesia corpus serves as a good approximation of the
> > > worst case average.
> > > 
> > > The levels 7-14 strike particularly obvious pattern: same ratio but the
> > > speed gets worse with each level. Taking the default level into account,
> > > (my) recommended levels would be 1, 3, 7, 15.
> > > 
> > 
> > I do see why we want to limit the number of levels, as the memory
> > requirements do kind of bucket themselves. But this means that when zstd
> > gets updated, we'd have to reevaluate the compression levels btrfs
> > supports. I'm not sure it's a great idea to have that dependency.
> > I imagine we could offer some level of guidance, but it really would be
> > up to the user to figure out what works best for them.
> 
> If it was not clear, I did not mean to have only 4 levels; keep all 15,
> the same as there are 9 for zlib. The guidelines would be desirable, and
> I don't want to make the decision for the user about which level to
> pick. So we don't disagree.
> 

I see, that was my misunderstanding.

> > The reclaim mechanism only keeps workspaces around if they are being
> > used by the appropriate level. So the memory overhead is actively used
> > memory, and anything unused is reclaimed after at most ~2 minutes. I
> > also scan up before allocating a workspace, which should help limit
> > the number of workspaces in circulation.
> 
> We'd need to observe that in practice before doing refinements; simpler
> logic is better for the start.
> 
> There's some penalty caused by the allocation if there are no workspaces
> at all, as the amount of memory is quite large for the kernel.
> This could also stress the memory subsystem because the memory has to be
> either contiguous or vmalloced. As the memory is released soon after,
> all the work might need to be done again and again. So more than one
> preallocated workspace could be good, but the number of levels does not
> make it easy to choose which one.

That makes sense. I don't have an answer for how to balance the number
of workspaces, but am happy to iterate on this as we get more data.

If no one has any other comments on the series after another day or so I
can send v2 with the handful of things people have mentioned and the
monotonic memory requirement patch.

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
                     ` (2 preceding siblings ...)
  2019-01-29 22:30   ` Omar Sandoval
@ 2019-01-31 16:00   ` David Sterba
  2019-01-31 16:17     ` Dennis Zhou
  3 siblings, 1 reply; 46+ messages in thread
From: David Sterba @ 2019-01-31 16:00 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:27PM -0500, Dennis Zhou wrote:
> It is very easy to miss places that rely on a certain bitshifting for
> deciphering the type_level overloading. Make macros handle this instead.

This encoding was a simple way to introduce the combined type and level
for zlib and it could certainly be improved. Either we'll use helpers
like you suggest or add a new structure that contains type and level as
members. That way we'd see where the level still matters and where just
the type is ok.

I haven't checked how intrusive this would be, so that's for later
consideration. Some indirection can be removed for the .set_level
callbacks, as it does not necessarily need to be a function, so I'm
expecting that the code around that would be touched anyway.
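
Roughly something like this (untested sketch, the name is made up):

	struct btrfs_compression_ctx {
		unsigned int type;	/* BTRFS_COMPRESS_* */
		unsigned int level;	/* 0 means the type's default */
	};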

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 03/11] btrfs: manage heuristic workspace as index 0
  2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
  2019-01-29  7:53   ` Nikolay Borisov
  2019-01-29 18:02   ` Josef Bacik
@ 2019-01-31 16:10   ` David Sterba
  2 siblings, 0 replies; 46+ messages in thread
From: David Sterba @ 2019-01-31 16:10 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel

On Mon, Jan 28, 2019 at 04:24:29PM -0500, Dennis Zhou wrote:
> While the heuristic workspaces aren't really compression workspaces,
> they use the same interface for managing them. So rather than branching,
> let's just handle them once again as the index 0 compression type.
> 
> Signed-off-by: Dennis Zhou <dennis@kernel.org>
> +const struct btrfs_compress_op btrfs_heuristic_compress = {
> +	.alloc_workspace = alloc_heuristic_ws,
> +	.free_workspace = free_heuristic_ws,
> +};
>  struct workspace_manager {
>  	struct list_head idle_ws;
>  	spinlock_t ws_lock;
> @@ -782,9 +789,8 @@ struct workspace_manager {
>  
>  static struct workspace_manager wsm[BTRFS_COMPRESS_TYPES];

This deserves a comment that the 0th workspace is for the heuristics.

> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -97,7 +97,7 @@ enum btrfs_compression_type {
>  	BTRFS_COMPRESS_ZLIB  = 1,
>  	BTRFS_COMPRESS_LZO   = 2,
>  	BTRFS_COMPRESS_ZSTD  = 3,
> -	BTRFS_COMPRESS_TYPES = 3,
> +	BTRFS_COMPRESS_TYPES = 4,

And here too, as there are only 3 compressors but 4 values in the enum.
Or rename BTRFS_COMPRESS_TYPES if you find a better name.

>  };
>  
>  struct btrfs_compress_op {
> @@ -125,6 +125,7 @@ struct btrfs_compress_op {
>  	void (*set_level)(struct list_head *ws, unsigned int type);
>  };
>  
> +extern const struct btrfs_compress_op btrfs_heuristic_compress;
>  extern const struct btrfs_compress_op btrfs_zlib_compress;
>  extern const struct btrfs_compress_op btrfs_lzo_compress;
>  extern const struct btrfs_compress_op btrfs_zstd_compress;
> diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
> index 9c8e1734429c..20081465a451 100644
> --- a/fs/btrfs/ioctl.c
> +++ b/fs/btrfs/ioctl.c
> @@ -1410,7 +1410,7 @@ int btrfs_defrag_file(struct inode *inode, struct file *file,
>  		return -EINVAL;
>  
>  	if (do_compress) {
> -		if (range->compress_type > BTRFS_COMPRESS_TYPES)
> +		if (range->compress_type >= BTRFS_COMPRESS_TYPES)
>  			return -EINVAL;
>  		if (range->compress_type)
>  			compress_type = range->compress_type;
> diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
> index a62e1e837a89..c88e146d8e99 100644
> --- a/fs/btrfs/tree-checker.c
> +++ b/fs/btrfs/tree-checker.c
> @@ -133,9 +133,9 @@ static int check_extent_data_item(struct btrfs_fs_info *fs_info,
>  	 * Support for new compression/encryption must introduce incompat flag,
>  	 * and must be caught in open_ctree().
>  	 */
> -	if (btrfs_file_extent_compression(leaf, fi) > BTRFS_COMPRESS_TYPES) {
> +	if (btrfs_file_extent_compression(leaf, fi) >= BTRFS_COMPRESS_TYPES) {
>  		file_extent_err(fs_info, leaf, slot,
> -	"invalid compression for file extent, have %u expect range [0, %u]",
> +	"invalid compression for file extent, have %u expect range [0, %u)",
>  			btrfs_file_extent_compression(leaf, fi),
>  			BTRFS_COMPRESS_TYPES);

This might become a bit confusing: the message is updated to say [0, 4),
but I'm not sure it is commonly understood that 4 does not belong
there. Either do -1 or define a new enum that contains the maximum
number so BTRFS_COMPRESS_TYPES is not overloaded.
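
Something like this, perhaps (sketch, the new name is just a suggestion):

	enum btrfs_compression_type {
		BTRFS_COMPRESS_NONE  = 0,
		BTRFS_COMPRESS_ZLIB  = 1,
		BTRFS_COMPRESS_LZO   = 2,
		BTRFS_COMPRESS_ZSTD  = 3,
		BTRFS_NR_COMPRESS_TYPES = 4,
	};

so the bound checks read "< BTRFS_NR_COMPRESS_TYPES" and the error
message can print the inclusive range [0, BTRFS_NR_COMPRESS_TYPES - 1].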

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 01/11] btrfs: add macros for compression type and level
  2019-01-31 16:00   ` David Sterba
@ 2019-01-31 16:17     ` Dennis Zhou
  0 siblings, 0 replies; 46+ messages in thread
From: Dennis Zhou @ 2019-01-31 16:17 UTC (permalink / raw)
  To: David Sterba
  Cc: Dennis Zhou, David Sterba, Josef Bacik, Chris Mason,
	Omar Sandoval, Nick Terrell, kernel-team, linux-btrfs,
	linux-kernel

On Thu, Jan 31, 2019 at 05:00:10PM +0100, David Sterba wrote:
> On Mon, Jan 28, 2019 at 04:24:27PM -0500, Dennis Zhou wrote:
> > It is very easy to miss places that rely on a certain bitshifting for
> > deciphering the type_level overloading. Make macros handle this instead.
> 
> This encoding was a simple way to introduce the combined type and level
> for zlib and it could certainly be improved. Either we'll use helpers
> like you suggest or add a new structure that contains type and level as
> members. That way we'd see where the level still matters and where just
> the type is ok.
> 
> I haven't checked how intrusive this would be, so that's for later
> consideration. Some indirection can be removed for the .set_level
> callbacks, as it does not necessarily need to be a function, so I'm
> expecting that the code around that would be touched anyway.

The only user of type_level is btrfs_compress_pages(). I do make an
extra call to .set_level() there just to be safe, but that can be taken
out as the level should already be set correctly at mount time.

I envisioned .set_level() being called when setting the compression
level of the mount. It could be used in the same place if we were to
add per-file compression levels too. As I mention later, an important
part of .set_level() is to repurpose the meaning of 0 from "use the
default level" to "use any available workspace".
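
In other words, a sketch of the intended call sites (exact signatures
may differ from the series):

	/* mount time or per-file: 0 picks the type's default level */
	level = ops->set_level(0);

	/* workspace lookup: 0 means any idle workspace will do */
	ws = get_workspace(type, 0);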

Thanks,
Dennis

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 11/11] btrfs: add zstd compression level support
  2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
  2019-01-29  7:25   ` Nikolay Borisov
@ 2019-01-31 18:10   ` David Sterba
  2019-01-31 18:13   ` David Sterba
  2 siblings, 0 replies; 46+ messages in thread
From: David Sterba @ 2019-01-31 18:10 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel,
	Omar Sandoval

On Mon, Jan 28, 2019 at 04:24:37PM -0500, Dennis Zhou wrote:
>  static struct list_head *zstd_get_workspace(unsigned int level)
>  {
> -	struct list_head *ws = btrfs_get_workspace(&wsm, level);
> -	struct workspace *workspace = list_entry(ws, struct workspace, list);
> +	struct list_head *ws;
> +	unsigned long nofs_flag;

nofs_flag should be 'unsigned'

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 11/11] btrfs: add zstd compression level support
  2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
  2019-01-29  7:25   ` Nikolay Borisov
  2019-01-31 18:10   ` David Sterba
@ 2019-01-31 18:13   ` David Sterba
  2 siblings, 0 replies; 46+ messages in thread
From: David Sterba @ 2019-01-31 18:13 UTC (permalink / raw)
  To: Dennis Zhou
  Cc: David Sterba, Josef Bacik, Chris Mason, Omar Sandoval,
	Nick Terrell, kernel-team, linux-btrfs, linux-kernel,
	Omar Sandoval

On Mon, Jan 28, 2019 at 04:24:37PM -0500, Dennis Zhou wrote:
> As mentioned above, a requirement that differs zstd from zlib is that
> higher levels of compression require more memory. To manage this, each
> compression level has its own queue of workspaces. A global LRU is used
> to help with reclaim. To guarantee forward progress, a max level
> workspace is preallocated and hidden from the LRU.
> 
> When getting a workspace, it uses a bitmap to identify the levels that
> are populated and scans up. If it finds a workspace that is greater than
> it, it uses it, but does not update the last_used time and the
> corresponding place in the LRU. This provides a mechanism to decrease
> memory utilization as we only keep around workspaces that are sized
> appropriately for the in use compression levels.
> 
> By knowing which compression levels have available workspaces, we can
> recycle rather than always create new workspaces as well as take
> advantage of the preallocated max level for forward progress. If we hit
> memory pressure, we sleep on the max level workspace. We continue to
> rescan in case we can use a smaller workspace, but eventually should be
> able to obtain the max level workspace or allocate one again should
> memory pressure subside. The memory requirement for decompression is the
> same as level 1, and therefore can use any of available workspace.
> 
> The number of workspaces is bound by an upper limit of the workqueue's
> limit which currently is 2 (percpu limit). Second, a reclaim timer is
> used to free inactive/improperly sized workspaces. The reclaim timer is
> set to 67s to avoid colliding with transaction commit (every 30s) and
> attempts to reclaim any unused workspace older than 45s.

> --- a/fs/btrfs/zstd.c
> +++ b/fs/btrfs/zstd.c
>  #define ZSTD_BTRFS_MAX_WINDOWLOG 17
>  #define ZSTD_BTRFS_MAX_INPUT (1 << ZSTD_BTRFS_MAX_WINDOWLOG)
>  #define ZSTD_BTRFS_DEFAULT_LEVEL 3
> +#define ZSTD_BTRFS_MAX_LEVEL 15
> +#define ZSTD_BTRFS_RECLAIM_NS (45 * NSEC_PER_SEC)
> +/* 67s to avoid clashing with transaction commit (every 30s) */
> +#define ZSTD_BTRFS_RECLAIM_JIFFIES (67 * HZ)

I think you can copy the relevant part of the changelog here to explain
the logic of the levels and the reclaim, as it's scattered over several
functions. The description would give a nice overview in one place.
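
For readers of zstd.c, the reclaim path is then roughly this (my sketch
from reading the patch, assuming last_used is kept in jiffies and that
the preallocated max-level workspace never sits on the lru):

	static void zstd_reclaim_timer_fn(struct timer_list *timer)
	{
		/* per the changelog: reclaim anything unused for over 45s */
		unsigned long threshold = jiffies - 45 * HZ;
		struct list_head *pos, *next;

		spin_lock(&wsm.lock);

		/* walk the lru oldest-first, stop at the first recent entry */
		list_for_each_prev_safe(pos, next, &wsm.lru_list) {
			struct workspace *victim = container_of(pos,
					struct workspace, lru_list);
			unsigned int level = victim->level;

			if (time_after(victim->last_used, threshold))
				break;

			list_del(&victim->lru_list);
			list_del(&victim->list);
			zstd_free_workspace(&victim->list);

			if (list_empty(&wsm.idle_ws[level - 1]))
				clear_bit(level - 1, &wsm.active_map);
		}

		/* re-arm only while there is something left to age out */
		if (!list_empty(&wsm.lru_list))
			mod_timer(&wsm.timer, jiffies + ZSTD_BTRFS_RECLAIM_JIFFIES);

		spin_unlock(&wsm.lock);
	}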

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2019-01-31 18:14 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-28 21:24 [PATCH 00/11] btrfs: add zstd compression level support Dennis Zhou
2019-01-28 21:24 ` [PATCH 01/11] btrfs: add macros for compression type and level Dennis Zhou
2019-01-29  7:26   ` Nikolay Borisov
2019-01-29 17:57   ` Josef Bacik
2019-01-29 22:30   ` Omar Sandoval
2019-01-31 16:00   ` David Sterba
2019-01-31 16:17     ` Dennis Zhou
2019-01-28 21:24 ` [PATCH 02/11] btrfs: rename workspaces_list to workspace_manager Dennis Zhou
2019-01-29  7:27   ` Nikolay Borisov
2019-01-29 17:58   ` Josef Bacik
2019-01-28 21:24 ` [PATCH 03/11] btrfs: manage heuristic workspace as index 0 Dennis Zhou
2019-01-29  7:53   ` Nikolay Borisov
2019-01-29 22:43     ` Dennis Zhou
2019-01-29 18:02   ` Josef Bacik
2019-01-31 16:10   ` David Sterba
2019-01-28 21:24 ` [PATCH 04/11] btrfs: unify compression ops with workspace_manager Dennis Zhou
2019-01-29  7:54   ` Nikolay Borisov
2019-01-29 18:03   ` Josef Bacik
2019-01-28 21:24 ` [PATCH 05/11] btrfs: add helper methods for workspace manager init and cleanup Dennis Zhou
2019-01-29  7:58   ` Nikolay Borisov
2019-01-29 18:04   ` Josef Bacik
2019-01-28 21:24 ` [PATCH 06/11] btrfs: add compression interface in (get/put)_workspace() Dennis Zhou
2019-01-29 18:06   ` Josef Bacik
2019-01-28 21:24 ` [PATCH 07/11] btrfs: move to fn pointers for get/put workspaces Dennis Zhou
2019-01-29  8:22   ` Nikolay Borisov
2019-01-29 23:35     ` Dennis Zhou
2019-01-29 18:17   ` Josef Bacik
2019-01-29 18:44     ` Josef Bacik
2019-01-28 21:24 ` [PATCH 08/11] btrfs: plumb level through the compression interface Dennis Zhou
2019-01-29  8:08   ` Nikolay Borisov
2019-01-29 18:17   ` Josef Bacik
2019-01-28 21:24 ` [PATCH 09/11] btrfs: change set_level() to bound the level passed in Dennis Zhou
2019-01-29  8:14   ` Nikolay Borisov
2019-01-30 22:06     ` Dennis Zhou
2019-01-28 21:24 ` [PATCH 10/11] btrfs: zstd use the passed through level instead of default Dennis Zhou
2019-01-29  8:15   ` Nikolay Borisov
2019-01-28 21:24 ` [PATCH 11/11] btrfs: add zstd compression level support Dennis Zhou
2019-01-29  7:25   ` Nikolay Borisov
2019-01-29 22:50     ` Dennis Zhou
2019-01-31 18:10   ` David Sterba
2019-01-31 18:13   ` David Sterba
2019-01-29 17:18 ` [PATCH 00/11] " David Sterba
2019-01-29 21:12   ` Nick Terrell
2019-01-30 17:40   ` Dennis Zhou
2019-01-31 14:04     ` David Sterba
2019-01-31 15:56       ` Dennis Zhou

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).