* [PATCH 0/5] Parameter reduction in compression wrappers
@ 2017-02-17 18:05 David Sterba
  2017-02-17 18:05 ` [PATCH 1/5] btrfs: merge length input and output parameter in compress_pages David Sterba
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:05 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

I've noticed that some of the parameters passed to compress_pages are redundant:
we can either reuse a parameter for both the input and the output value (the
number of pages), or infer a value from the existing parameters (the maximum
output limit).

There's no functional change; stack consumption is slightly lower.
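
For an overview, this is roughly how the prototype of the top-level wrapper
changes over the series (taken from the compression.h hunks below): len and
max_out are folded into *total_out, and nr_dest_pages into *out_pages.

  /* before the series */
  int btrfs_compress_pages(int type, struct address_space *mapping,
                           u64 start, unsigned long len,
                           struct page **pages,
                           unsigned long nr_dest_pages,
                           unsigned long *out_pages,
                           unsigned long *total_in,
                           unsigned long *total_out,
                           unsigned long max_out);

  /* after patch 5 */
  int btrfs_compress_pages(int type, struct address_space *mapping,
                           u64 start, struct page **pages,
                           unsigned long *out_pages,
                           unsigned long *total_in,
                           unsigned long *total_out);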

David Sterba (5):
      btrfs: merge length input and output parameter in compress_pages
      btrfs: merge nr_pages input and output parameter in compress_pages
      btrfs: export compression buffer limits in a header
      btrfs: use predefined limits for calculating maximum number of pages for compression
      btrfs: derive maximum output size in the compression implementation

 fs/btrfs/compression.c | 33 ++++++++++++++-------------------
 fs/btrfs/compression.h | 28 +++++++++++++++++++---------
 fs/btrfs/inode.c       | 37 +++++++++++++------------------------
 fs/btrfs/lzo.c         | 10 +++++-----
 fs/btrfs/zlib.c        |  9 +++++----
 5 files changed, 56 insertions(+), 61 deletions(-)


* [PATCH 1/5] btrfs: merge length input and output parameter in compress_pages
  2017-02-17 18:05 [PATCH 0/5] Parameter reduction in compression wrappers David Sterba
@ 2017-02-17 18:05 ` David Sterba
  2017-02-17 18:05 ` [PATCH 2/5] btrfs: merge nr_pages " David Sterba
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:05 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The length parameter is effectively duplicated as an input and an output value
in the top-level caller of the compress_pages chain. We can simply use one
variable for both and reduce stack consumption. The compression implementations
sink the parameter into a local variable, so everything works as before.
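
As a rough standalone illustration of the pattern (toy code with made-up
names, not the btrfs API): the caller seeds the output variable with the
input length and the implementation sinks it into a local, just like the
lzo/zlib hunks below do with *total_out.

  #include <stdio.h>

  /* Toy compressor: @total_out carries the input length on entry and the
   * "compressed" length on return. */
  static void toy_compress(const unsigned char *src, unsigned long *total_out)
  {
          unsigned long len = *total_out; /* sink the input length locally */

          (void)src;                      /* a real compressor would read src */
          *total_out = len / 2;           /* pretend we halved the data */
  }

  int main(void)
  {
          unsigned char buf[4096] = { 0 };
          unsigned long total_out = sizeof(buf); /* input length on entry */

          toy_compress(buf, &total_out);
          printf("compressed to %lu bytes\n", total_out);
          return 0;
  }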

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.c | 24 +++++++++++-------------
 fs/btrfs/compression.h |  5 ++---
 fs/btrfs/inode.c       |  2 +-
 fs/btrfs/lzo.c         |  4 ++--
 fs/btrfs/zlib.c        |  3 ++-
 5 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 903c32c9eb22..eca4704fba9d 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -911,27 +911,25 @@ static void free_workspaces(void)
 }
 
 /*
- * given an address space and start/len, compress the bytes.
+ * Given an address space and start and length, compress the bytes into @pages
+ * that are allocated on demand.
  *
- * pages are allocated to hold the compressed result and stored
- * in 'pages'
+ * @out_pages is used to return the number of pages allocated.  There
+ * may be pages allocated even if we return an error.
  *
- * out_pages is used to return the number of pages allocated.  There
- * may be pages allocated even if we return an error
- *
- * total_in is used to return the number of bytes actually read.  It
- * may be smaller then len if we had to exit early because we
+ * @total_in is used to return the number of bytes actually read.  It
+ * may be smaller than the input length if we had to exit early because we
  * ran out of room in the pages array or because we cross the
  * max_out threshold.
  *
- * total_out is used to return the total number of compressed bytes
+ * @total_out is an in/out parameter, must be set to the input length and will
+ * be also used to return the total number of compressed bytes
  *
- * max_out tells us the max number of bytes that we're allowed to
+ * @max_out tells us the max number of bytes that we're allowed to
  * stuff into pages
  */
 int btrfs_compress_pages(int type, struct address_space *mapping,
-			 u64 start, unsigned long len,
-			 struct page **pages,
+			 u64 start, struct page **pages,
 			 unsigned long nr_dest_pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
@@ -944,7 +942,7 @@ int btrfs_compress_pages(int type, struct address_space *mapping,
 	workspace = find_workspace(type);
 
 	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
-						      start, len, pages,
+						      start, pages,
 						      nr_dest_pages, out_pages,
 						      total_in, total_out,
 						      max_out);
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 09879579fbc8..2d53879e8519 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -23,8 +23,7 @@ void btrfs_init_compress(void);
 void btrfs_exit_compress(void);
 
 int btrfs_compress_pages(int type, struct address_space *mapping,
-			 u64 start, unsigned long len,
-			 struct page **pages,
+			 u64 start, struct page **pages,
 			 unsigned long nr_dest_pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
@@ -59,7 +58,7 @@ struct btrfs_compress_op {
 
 	int (*compress_pages)(struct list_head *workspace,
 			      struct address_space *mapping,
-			      u64 start, unsigned long len,
+			      u64 start,
 			      struct page **pages,
 			      unsigned long nr_dest_pages,
 			      unsigned long *out_pages,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index dae2734a725b..7b9cdc0a1b5c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -516,7 +516,7 @@ static noinline void compress_file_range(struct inode *inode,
 		redirty = 1;
 		ret = btrfs_compress_pages(compress_type,
 					   inode->i_mapping, start,
-					   total_compressed, pages,
+					   pages,
 					   nr_pages, &nr_pages_ret,
 					   &total_in,
 					   &total_compressed,
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 45d26980caf9..dfa690d4ce86 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -86,7 +86,7 @@ static inline size_t read_compress_length(char *buf)
 
 static int lzo_compress_pages(struct list_head *ws,
 			      struct address_space *mapping,
-			      u64 start, unsigned long len,
+			      u64 start,
 			      struct page **pages,
 			      unsigned long nr_dest_pages,
 			      unsigned long *out_pages,
@@ -102,7 +102,7 @@ static int lzo_compress_pages(struct list_head *ws,
 	struct page *in_page = NULL;
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
-
+	unsigned long len = *total_out;
 	size_t in_len;
 	size_t out_len;
 	char *buf;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index da497f184ff4..42d76b7824c3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -73,7 +73,7 @@ static struct list_head *zlib_alloc_workspace(void)
 
 static int zlib_compress_pages(struct list_head *ws,
 			       struct address_space *mapping,
-			       u64 start, unsigned long len,
+			       u64 start,
 			       struct page **pages,
 			       unsigned long nr_dest_pages,
 			       unsigned long *out_pages,
@@ -89,6 +89,7 @@ static int zlib_compress_pages(struct list_head *ws,
 	struct page *in_page = NULL;
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
+	unsigned long len = *total_out;
 
 	*out_pages = 0;
 	*total_out = 0;
-- 
2.10.1



* [PATCH 2/5] btrfs: merge nr_pages input and output parameter in compress_pages
  2017-02-17 18:05 [PATCH 0/5] Parameter reduction in compression wrappers David Sterba
  2017-02-17 18:05 ` [PATCH 1/5] btrfs: merge length input and output parameter in compress_pages David Sterba
@ 2017-02-17 18:05 ` David Sterba
  2017-02-17 18:05 ` [PATCH 3/5] btrfs: export compression buffer limits in a header David Sterba
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:05 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The parameter specifying the maximum number of pages that can be allocated can
be merged with the output page counter, saving some stack space.  The
compression implementations sink the parameter into a local variable, so
everything works as before.

The nr_pages and nr_pages_ret variables in compress_file_range can likewise be
merged into one.
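
A minimal standalone sketch of the same idea for the page count (toy code,
not btrfs): the variable holds the allocation limit on entry and the number
of pages actually used on return, as the lzo/zlib hunks below do with
*out_pages.

  #include <stdio.h>

  static void toy_fill_pages(unsigned long *nr_pages)
  {
          unsigned long max_pages = *nr_pages; /* sink the limit locally */
          unsigned long used = 3;              /* pretend we needed 3 pages */

          if (used > max_pages)
                  used = max_pages;
          *nr_pages = used;                    /* report pages actually used */
  }

  int main(void)
  {
          unsigned long nr_pages = 32;         /* maximum on entry */

          toy_fill_pages(&nr_pages);
          printf("used %lu of the allowed pages\n", nr_pages);
          return 0;
  }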

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.c |  7 +++----
 fs/btrfs/compression.h |  2 --
 fs/btrfs/inode.c       | 13 ++++++-------
 fs/btrfs/lzo.c         |  2 +-
 fs/btrfs/zlib.c        |  2 +-
 5 files changed, 11 insertions(+), 15 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index eca4704fba9d..3a05c7576a7f 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -914,8 +914,8 @@ static void free_workspaces(void)
  * Given an address space and start and length, compress the bytes into @pages
  * that are allocated on demand.
  *
- * @out_pages is used to return the number of pages allocated.  There
- * may be pages allocated even if we return an error.
+ * @out_pages is an in/out parameter, holds maximum number of pages to allocate
+ * and returns number of actually allocated pages
  *
  * @total_in is used to return the number of bytes actually read.  It
  * may be smaller than the input length if we had to exit early because we
@@ -930,7 +930,6 @@ static void free_workspaces(void)
  */
 int btrfs_compress_pages(int type, struct address_space *mapping,
 			 u64 start, struct page **pages,
-			 unsigned long nr_dest_pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
 			 unsigned long *total_out,
@@ -943,7 +942,7 @@ int btrfs_compress_pages(int type, struct address_space *mapping,
 
 	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
 						      start, pages,
-						      nr_dest_pages, out_pages,
+						      out_pages,
 						      total_in, total_out,
 						      max_out);
 	free_workspace(type, workspace);
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index 2d53879e8519..e453f42b3bbf 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -24,7 +24,6 @@ void btrfs_exit_compress(void);
 
 int btrfs_compress_pages(int type, struct address_space *mapping,
 			 u64 start, struct page **pages,
-			 unsigned long nr_dest_pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
 			 unsigned long *total_out,
@@ -60,7 +59,6 @@ struct btrfs_compress_op {
 			      struct address_space *mapping,
 			      u64 start,
 			      struct page **pages,
-			      unsigned long nr_dest_pages,
 			      unsigned long *out_pages,
 			      unsigned long *total_in,
 			      unsigned long *total_out,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 7b9cdc0a1b5c..801d1b3fd9d7 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -430,7 +430,6 @@ static noinline void compress_file_range(struct inode *inode,
 	int ret = 0;
 	struct page **pages = NULL;
 	unsigned long nr_pages;
-	unsigned long nr_pages_ret = 0;
 	unsigned long total_compressed = 0;
 	unsigned long total_in = 0;
 	unsigned long max_compressed = SZ_128K;
@@ -517,7 +516,7 @@ static noinline void compress_file_range(struct inode *inode,
 		ret = btrfs_compress_pages(compress_type,
 					   inode->i_mapping, start,
 					   pages,
-					   nr_pages, &nr_pages_ret,
+					   &nr_pages,
 					   &total_in,
 					   &total_compressed,
 					   max_compressed);
@@ -525,7 +524,7 @@ static noinline void compress_file_range(struct inode *inode,
 		if (!ret) {
 			unsigned long offset = total_compressed &
 				(PAGE_SIZE - 1);
-			struct page *page = pages[nr_pages_ret - 1];
+			struct page *page = pages[nr_pages - 1];
 			char *kaddr;
 
 			/* zero the tail end of the last page, we might be
@@ -606,7 +605,7 @@ static noinline void compress_file_range(struct inode *inode,
 			 * will submit them to the elevator.
 			 */
 			add_async_extent(async_cow, start, num_bytes,
-					total_compressed, pages, nr_pages_ret,
+					total_compressed, pages, nr_pages,
 					compress_type);
 
 			if (start + num_bytes < end) {
@@ -623,14 +622,14 @@ static noinline void compress_file_range(struct inode *inode,
 		 * the compression code ran but failed to make things smaller,
 		 * free any pages it allocated and our page pointer array
 		 */
-		for (i = 0; i < nr_pages_ret; i++) {
+		for (i = 0; i < nr_pages; i++) {
 			WARN_ON(pages[i]->mapping);
 			put_page(pages[i]);
 		}
 		kfree(pages);
 		pages = NULL;
 		total_compressed = 0;
-		nr_pages_ret = 0;
+		nr_pages = 0;
 
 		/* flag the file so we don't compress in the future */
 		if (!btrfs_test_opt(fs_info, FORCE_COMPRESS) &&
@@ -659,7 +658,7 @@ static noinline void compress_file_range(struct inode *inode,
 	return;
 
 free_pages_out:
-	for (i = 0; i < nr_pages_ret; i++) {
+	for (i = 0; i < nr_pages; i++) {
 		WARN_ON(pages[i]->mapping);
 		put_page(pages[i]);
 	}
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index dfa690d4ce86..72b07f0bb80a 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -88,7 +88,6 @@ static int lzo_compress_pages(struct list_head *ws,
 			      struct address_space *mapping,
 			      u64 start,
 			      struct page **pages,
-			      unsigned long nr_dest_pages,
 			      unsigned long *out_pages,
 			      unsigned long *total_in,
 			      unsigned long *total_out,
@@ -103,6 +102,7 @@ static int lzo_compress_pages(struct list_head *ws,
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
 	unsigned long len = *total_out;
+	unsigned long nr_dest_pages = *out_pages;
 	size_t in_len;
 	size_t out_len;
 	char *buf;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 42d76b7824c3..e7f2020f8ee7 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -75,7 +75,6 @@ static int zlib_compress_pages(struct list_head *ws,
 			       struct address_space *mapping,
 			       u64 start,
 			       struct page **pages,
-			       unsigned long nr_dest_pages,
 			       unsigned long *out_pages,
 			       unsigned long *total_in,
 			       unsigned long *total_out,
@@ -90,6 +89,7 @@ static int zlib_compress_pages(struct list_head *ws,
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
 	unsigned long len = *total_out;
+	unsigned long nr_dest_pages = *out_pages;
 
 	*out_pages = 0;
 	*total_out = 0;
-- 
2.10.1



* [PATCH 3/5] btrfs: export compression buffer limits in a header
  2017-02-17 18:05 [PATCH 0/5] Parameter reduction in compression wrappers David Sterba
  2017-02-17 18:05 ` [PATCH 1/5] btrfs: merge length input and output parameter in compress_pages David Sterba
  2017-02-17 18:05 ` [PATCH 2/5] btrfs: merge nr_pages " David Sterba
@ 2017-02-17 18:05 ` David Sterba
  2017-02-20  8:07   ` Qu Wenruo
  2017-02-17 18:05 ` [PATCH 4/5] btrfs: use predefined limits for calculating maximum number of pages for compression David Sterba
  2017-02-17 18:06 ` [PATCH 5/5] btrfs: derive maximum output size in the compression implementation David Sterba
  4 siblings, 1 reply; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:05 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Move the buffer limit definitions out of compress_file_range.
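
The following patch then uses the exported limits instead of the open-coded
SZ_128K values in compress_file_range, roughly:

	total_compressed = min_t(unsigned long, total_compressed,
			BTRFS_MAX_UNCOMPRESSED);
	nr_pages = min_t(unsigned long, nr_pages,
			BTRFS_MAX_COMPRESSED / PAGE_SIZE);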

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.h | 15 +++++++++++++++
 fs/btrfs/inode.c       | 10 ----------
 2 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index e453f42b3bbf..c9d7e552cfa8 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -19,6 +19,21 @@
 #ifndef __BTRFS_COMPRESSION_
 #define __BTRFS_COMPRESSION_
 
+/*
+ * We want to make sure that amount of RAM required to uncompress an extent is
+ * reasonable, so we limit the total size in ram of a compressed extent to
+ * 128k.  This is a crucial number because it also controls how easily we can
+ * spread reads across cpus for decompression.
+ *
+ * We also want to make sure the amount of IO required to do a random read is
+ * reasonably small, so we limit the size of a compressed extent to 128k.
+ */
+
+/* Maximum length of compressed data stored on disk */
+#define BTRFS_MAX_COMPRESSED		(SZ_128K)
+/* Maximum size of data before compression */
+#define BTRFS_MAX_UNCOMPRESSED		(SZ_128K)
+
 void btrfs_init_compress(void);
 void btrfs_exit_compress(void);
 
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 801d1b3fd9d7..bc547608ff1a 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -470,16 +470,6 @@ static noinline void compress_file_range(struct inode *inode,
 	   (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
 		goto cleanup_and_bail_uncompressed;
 
-	/* we want to make sure that amount of ram required to uncompress
-	 * an extent is reasonable, so we limit the total size in ram
-	 * of a compressed extent to 128k.  This is a crucial number
-	 * because it also controls how easily we can spread reads across
-	 * cpus for decompression.
-	 *
-	 * We also want to make sure the amount of IO required to do
-	 * a random read is reasonably small, so we limit the size of
-	 * a compressed extent to 128k.
-	 */
 	total_compressed = min(total_compressed, max_uncompressed);
 	num_bytes = ALIGN(end - start + 1, blocksize);
 	num_bytes = max(blocksize,  num_bytes);
-- 
2.10.1



* [PATCH 4/5] btrfs: use predefined limits for calculating maximum number of pages for compression
  2017-02-17 18:05 [PATCH 0/5] Parameter reduction in compression wrappers David Sterba
                   ` (2 preceding siblings ...)
  2017-02-17 18:05 ` [PATCH 3/5] btrfs: export compression buffer limits in a header David Sterba
@ 2017-02-17 18:05 ` David Sterba
  2017-02-17 18:06 ` [PATCH 5/5] btrfs: derive maximum output size in the compression implementation David Sterba
  4 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:05 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/inode.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index bc547608ff1a..9533a516bace 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -432,8 +432,6 @@ static noinline void compress_file_range(struct inode *inode,
 	unsigned long nr_pages;
 	unsigned long total_compressed = 0;
 	unsigned long total_in = 0;
-	unsigned long max_compressed = SZ_128K;
-	unsigned long max_uncompressed = SZ_128K;
 	int i;
 	int will_compress;
 	int compress_type = fs_info->compress_type;
@@ -445,7 +443,9 @@ static noinline void compress_file_range(struct inode *inode,
 again:
 	will_compress = 0;
 	nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
-	nr_pages = min_t(unsigned long, nr_pages, SZ_128K / PAGE_SIZE);
+	BUILD_BUG_ON((BTRFS_MAX_COMPRESSED % PAGE_SIZE) != 0);
+	nr_pages = min_t(unsigned long, nr_pages,
+			BTRFS_MAX_COMPRESSED / PAGE_SIZE);
 
 	/*
 	 * we don't want to send crud past the end of i_size through
@@ -470,7 +470,8 @@ static noinline void compress_file_range(struct inode *inode,
 	   (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
 		goto cleanup_and_bail_uncompressed;
 
-	total_compressed = min(total_compressed, max_uncompressed);
+	total_compressed = min_t(unsigned long, total_compressed,
+			BTRFS_MAX_UNCOMPRESSED);
 	num_bytes = ALIGN(end - start + 1, blocksize);
 	num_bytes = max(blocksize,  num_bytes);
 	total_in = 0;
@@ -509,7 +510,7 @@ static noinline void compress_file_range(struct inode *inode,
 					   &nr_pages,
 					   &total_in,
 					   &total_compressed,
-					   max_compressed);
+					   BTRFS_MAX_COMPRESSED);
 
 		if (!ret) {
 			unsigned long offset = total_compressed &
-- 
2.10.1



* [PATCH 5/5] btrfs: derive maximum output size in the compression implementation
  2017-02-17 18:05 [PATCH 0/5] Parameter reduction in compression wrappers David Sterba
                   ` (3 preceding siblings ...)
  2017-02-17 18:05 ` [PATCH 4/5] btrfs: use predefined limits for calculating maximum number of pages for compression David Sterba
@ 2017-02-17 18:06 ` David Sterba
  4 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2017-02-17 18:06 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The value of max_out can be calculated from the parameters already passed to
the compressors, namely the number of pages and the page size, so we don't
have to pass it around needlessly.
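
Concretely, each implementation can derive it locally (as the lzo/zlib hunks
below do):

	const unsigned long max_out = nr_dest_pages * PAGE_SIZE;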

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/compression.c | 6 ++----
 fs/btrfs/compression.h | 6 ++----
 fs/btrfs/inode.c       | 3 +--
 fs/btrfs/lzo.c         | 4 ++--
 fs/btrfs/zlib.c        | 4 ++--
 5 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 3a05c7576a7f..75e4e61d338b 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -932,8 +932,7 @@ int btrfs_compress_pages(int type, struct address_space *mapping,
 			 u64 start, struct page **pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
-			 unsigned long *total_out,
-			 unsigned long max_out)
+			 unsigned long *total_out)
 {
 	struct list_head *workspace;
 	int ret;
@@ -943,8 +942,7 @@ int btrfs_compress_pages(int type, struct address_space *mapping,
 	ret = btrfs_compress_op[type-1]->compress_pages(workspace, mapping,
 						      start, pages,
 						      out_pages,
-						      total_in, total_out,
-						      max_out);
+						      total_in, total_out);
 	free_workspace(type, workspace);
 	return ret;
 }
diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
index c9d7e552cfa8..0f53410f755c 100644
--- a/fs/btrfs/compression.h
+++ b/fs/btrfs/compression.h
@@ -41,8 +41,7 @@ int btrfs_compress_pages(int type, struct address_space *mapping,
 			 u64 start, struct page **pages,
 			 unsigned long *out_pages,
 			 unsigned long *total_in,
-			 unsigned long *total_out,
-			 unsigned long max_out);
+			 unsigned long *total_out);
 int btrfs_decompress(int type, unsigned char *data_in, struct page *dest_page,
 		     unsigned long start_byte, size_t srclen, size_t destlen);
 int btrfs_decompress_buf2page(char *buf, unsigned long buf_start,
@@ -76,8 +75,7 @@ struct btrfs_compress_op {
 			      struct page **pages,
 			      unsigned long *out_pages,
 			      unsigned long *total_in,
-			      unsigned long *total_out,
-			      unsigned long max_out);
+			      unsigned long *total_out);
 
 	int (*decompress_bio)(struct list_head *workspace,
 				 struct page **pages_in,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9533a516bace..e9d589944d23 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -509,8 +509,7 @@ static noinline void compress_file_range(struct inode *inode,
 					   pages,
 					   &nr_pages,
 					   &total_in,
-					   &total_compressed,
-					   BTRFS_MAX_COMPRESSED);
+					   &total_compressed);
 
 		if (!ret) {
 			unsigned long offset = total_compressed &
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 72b07f0bb80a..23402dde8b0a 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -90,8 +90,7 @@ static int lzo_compress_pages(struct list_head *ws,
 			      struct page **pages,
 			      unsigned long *out_pages,
 			      unsigned long *total_in,
-			      unsigned long *total_out,
-			      unsigned long max_out)
+			      unsigned long *total_out)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
 	int ret = 0;
@@ -103,6 +102,7 @@ static int lzo_compress_pages(struct list_head *ws,
 	unsigned long bytes_left;
 	unsigned long len = *total_out;
 	unsigned long nr_dest_pages = *out_pages;
+	const unsigned long max_out = nr_dest_pages * PAGE_SIZE;
 	size_t in_len;
 	size_t out_len;
 	char *buf;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index e7f2020f8ee7..135b10823c6d 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -77,8 +77,7 @@ static int zlib_compress_pages(struct list_head *ws,
 			       struct page **pages,
 			       unsigned long *out_pages,
 			       unsigned long *total_in,
-			       unsigned long *total_out,
-			       unsigned long max_out)
+			       unsigned long *total_out)
 {
 	struct workspace *workspace = list_entry(ws, struct workspace, list);
 	int ret;
@@ -90,6 +89,7 @@ static int zlib_compress_pages(struct list_head *ws,
 	unsigned long bytes_left;
 	unsigned long len = *total_out;
 	unsigned long nr_dest_pages = *out_pages;
+	const unsigned long max_out = nr_dest_pages * PAGE_SIZE;
 
 	*out_pages = 0;
 	*total_out = 0;
-- 
2.10.1



* Re: [PATCH 3/5] btrfs: export compression buffer limits in a header
  2017-02-17 18:05 ` [PATCH 3/5] btrfs: export compression buffer limits in a header David Sterba
@ 2017-02-20  8:07   ` Qu Wenruo
  0 siblings, 0 replies; 7+ messages in thread
From: Qu Wenruo @ 2017-02-20  8:07 UTC (permalink / raw)
  To: David Sterba, linux-btrfs



At 02/18/2017 02:05 AM, David Sterba wrote:
> Move the buffer limit definitions out of compress_file_range.
>

Nice one.
No longer need to dig through the code to find the 128K limit now.

Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>

Thanks,
Qu
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/compression.h | 15 +++++++++++++++
>  fs/btrfs/inode.c       | 10 ----------
>  2 files changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/fs/btrfs/compression.h b/fs/btrfs/compression.h
> index e453f42b3bbf..c9d7e552cfa8 100644
> --- a/fs/btrfs/compression.h
> +++ b/fs/btrfs/compression.h
> @@ -19,6 +19,21 @@
>  #ifndef __BTRFS_COMPRESSION_
>  #define __BTRFS_COMPRESSION_
>
> +/*
> + * We want to make sure that amount of RAM required to uncompress an extent is
> + * reasonable, so we limit the total size in ram of a compressed extent to
> + * 128k.  This is a crucial number because it also controls how easily we can
> + * spread reads across cpus for decompression.
> + *
> + * We also want to make sure the amount of IO required to do a random read is
> + * reasonably small, so we limit the size of a compressed extent to 128k.
> + */
> +
> +/* Maximum length of compressed data stored on disk */
> +#define BTRFS_MAX_COMPRESSED		(SZ_128K)
> +/* Maximum size of data before compression */
> +#define BTRFS_MAX_UNCOMPRESSED		(SZ_128K)
> +
>  void btrfs_init_compress(void);
>  void btrfs_exit_compress(void);
>
> diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
> index 801d1b3fd9d7..bc547608ff1a 100644
> --- a/fs/btrfs/inode.c
> +++ b/fs/btrfs/inode.c
> @@ -470,16 +470,6 @@ static noinline void compress_file_range(struct inode *inode,
>  	   (start > 0 || end + 1 < BTRFS_I(inode)->disk_i_size))
>  		goto cleanup_and_bail_uncompressed;
>
> -	/* we want to make sure that amount of ram required to uncompress
> -	 * an extent is reasonable, so we limit the total size in ram
> -	 * of a compressed extent to 128k.  This is a crucial number
> -	 * because it also controls how easily we can spread reads across
> -	 * cpus for decompression.
> -	 *
> -	 * We also want to make sure the amount of IO required to do
> -	 * a random read is reasonably small, so we limit the size of
> -	 * a compressed extent to 128k.
> -	 */
>  	total_compressed = min(total_compressed, max_uncompressed);
>  	num_bytes = ALIGN(end - start + 1, blocksize);
>  	num_bytes = max(blocksize,  num_bytes);
>



