linux-f2fs-devel.lists.sourceforge.net archive mirror
* [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
@ 2020-05-04 14:30 Jaegeuk Kim
  2020-05-05  1:52 ` Chao Yu
  0 siblings, 1 reply; 8+ messages in thread
From: Jaegeuk Kim @ 2020-05-04 14:30 UTC (permalink / raw)
  To: linux-kernel, linux-f2fs-devel, kernel-team

From: Daeho Jeong <daehojeong@google.com>

Currently, the zstd compression buffer size is set to the cluster size
minus one page and the header size. With this setting, zstd compression
always reports success even when the compressed data fails to fit into
the buffer, and reading the cluster back later returns an I/O error due
to the corrupted compressed data.

Signed-off-by: Daeho Jeong <daehojeong@google.com>
---
 fs/f2fs/compress.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 4c7eaeee52336..a9fa8049b295f 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
 	cc->private = workspace;
 	cc->private2 = stream;
 
-	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
 	return 0;
 }
 
@@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
 	ZSTD_inBuffer inbuf;
 	ZSTD_outBuffer outbuf;
 	int src_size = cc->rlen;
-	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
+	int dst_size = cc->clen;
 	int ret;
 
 	inbuf.pos = 0;
-- 
2.26.2.526.g744177e7f7-goog
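
For reference, below is a minimal userspace sketch of what this sizing
change means (not the kernel code: the page size, the COMPRESS_HEADER_SIZE
stand-in, and the use of libzstd's ZSTD_compressBound() are illustrative
assumptions):

/* Build with: cc bound_demo.c -lzstd */
#include <stdio.h>
#include <zstd.h>

int main(void)
{
	const size_t page_size = 4096;
	const size_t header_size = 24;	/* illustrative stand-in for COMPRESS_HEADER_SIZE */
	int log_cluster_size;

	for (log_cluster_size = 2; log_cluster_size <= 4; log_cluster_size++) {
		size_t rlen = page_size << log_cluster_size;
		/* old bound: strictly smaller than the source cluster */
		size_t clen_old = rlen - page_size - header_size;
		/* new bound: worst-case zstd output, larger than the source */
		size_t clen_new = ZSTD_compressBound(rlen);

		printf("cluster %6zu: old clen %6zu, new clen %6zu\n",
		       rlen, clen_old, clen_new);
	}
	return 0;
}

The point is that the old bound is strictly below the cluster size, while
ZSTD_compressBound() is strictly above it, so the compressor can always
emit its worst-case output and f2fs can decide afterwards whether
compression actually saved space.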




* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-04 14:30 [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size Jaegeuk Kim
@ 2020-05-05  1:52 ` Chao Yu
  2020-05-05  2:08   ` Chao Yu
  2020-05-05 23:05   ` Jaegeuk Kim
  0 siblings, 2 replies; 8+ messages in thread
From: Chao Yu @ 2020-05-05  1:52 UTC (permalink / raw)
  To: Jaegeuk Kim, linux-kernel, linux-f2fs-devel, kernel-team

On 2020-5-4 22:30, Jaegeuk Kim wrote:
> From: Daeho Jeong <daehojeong@google.com>
>
> Currently, the zstd compression buffer size is set to the cluster size
> minus one page and the header size. With this setting, zstd compression
> always reports success even when the compressed data fails to fit into
> the buffer, and reading the cluster back later returns an I/O error due
> to the corrupted compressed data.

What's the root cause of this issue? I didn't get it.

>
> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> ---
>  fs/f2fs/compress.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index 4c7eaeee52336..a9fa8049b295f 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>  	cc->private = workspace;
>  	cc->private2 = stream;
>
> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);

On my machine, the value is 66572, which is much larger than the size of
the dst buffer, so where do we tell the zstd compressor the real size of
the dst buffer? Otherwise, if the compressed data size is larger than the
dst buffer size, we may suffer an overflow when flushing compressed data
into the dst buffer.

>  	return 0;
>  }
>
> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>  	ZSTD_inBuffer inbuf;
>  	ZSTD_outBuffer outbuf;
>  	int src_size = cc->rlen;
> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> +	int dst_size = cc->clen;
>  	int ret;
>
>  	inbuf.pos = 0;
>



* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-05  1:52 ` Chao Yu
@ 2020-05-05  2:08   ` Chao Yu
  2020-05-05 23:05   ` Jaegeuk Kim
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu @ 2020-05-05  2:08 UTC (permalink / raw)
  To: Jaegeuk Kim, linux-kernel, linux-f2fs-devel, kernel-team

On 2020-5-5 9:52, Chao Yu wrote:
> On 2020-5-4 22:30, Jaegeuk Kim wrote:
>> From: Daeho Jeong <daehojeong@google.com>
>>
>> Currently, the zstd compression buffer size is set to the cluster size
>> minus one page and the header size. With this setting, zstd compression
>> always reports success even when the compressed data fails to fit into
>> the buffer, and reading the cluster back later returns an I/O error due
>> to the corrupted compressed data.
>
> What's the root cause of this issue? I didn't get it.
>
>>
>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
>> ---
>>  fs/f2fs/compress.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>> index 4c7eaeee52336..a9fa8049b295f 100644
>> --- a/fs/f2fs/compress.c
>> +++ b/fs/f2fs/compress.c
>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>>      cc->private = workspace;
>>      cc->private2 = stream;
>>
>> -    cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>> +    cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);

Has the max size of the intermediate buffer been set as below?

- zstd_init_compress_ctx
  - params = ZSTD_getParams(F2FS_ZSTD_DEFAULT_CLEVEL, cc->rlen, 0)
  - ZSTD_CStreamWorkspaceBound(params.cParams)
   - outBuffSize = ZSTD_compressBound(blockSize) + 1;
   - workspace_size = ... + outBuffSize + ...
  - workspace = f2fs_kvmalloc(workspace_size)
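
In rough C, that init path looks like the sketch below (based on the
pre-5.8 in-kernel zstd API; names and error handling are approximations,
not the verbatim f2fs code):

/* Rough sketch of zstd_init_compress_ctx(), not the literal code. */
static int zstd_init_compress_ctx(struct compress_ctx *cc)
{
	ZSTD_parameters params;
	ZSTD_CStream *stream;
	void *workspace;
	unsigned int workspace_size;

	params = ZSTD_getParams(F2FS_ZSTD_DEFAULT_CLEVEL, cc->rlen, 0);
	/* Per the trace above, this bound already accounts for an internal
	 * output buffer of ZSTD_compressBound(blockSize) + 1 bytes. */
	workspace_size = ZSTD_CStreamWorkspaceBound(params.cParams);

	workspace = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
					workspace_size, GFP_NOFS);
	if (!workspace)
		return -ENOMEM;

	stream = ZSTD_initCStream(params, 0, workspace, workspace_size);
	if (!stream) {
		kvfree(workspace);
		return -EIO;
	}

	cc->private = workspace;
	cc->private2 = stream;
	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
	return 0;
}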

>
> On my machine, the value is 66572, which is much larger than the size of
> the dst buffer, so where do we tell the zstd compressor the real size of
> the dst buffer? Otherwise, if the compressed data size is larger than the
> dst buffer size, we may suffer an overflow when flushing compressed data
> into the dst buffer.
>
>>      return 0;
>>  }
>>
>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>>      ZSTD_inBuffer inbuf;
>>      ZSTD_outBuffer outbuf;
>>      int src_size = cc->rlen;
>> -    int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>> +    int dst_size = cc->clen;
>>      int ret;
>>
>>      inbuf.pos = 0;
>>

* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-05  1:52 ` Chao Yu
  2020-05-05  2:08   ` Chao Yu
@ 2020-05-05 23:05   ` Jaegeuk Kim
  2020-05-06  7:45     ` Chao Yu
  1 sibling, 1 reply; 8+ messages in thread
From: Jaegeuk Kim @ 2020-05-05 23:05 UTC (permalink / raw)
  To: Chao Yu; +Cc: kernel-team, linux-kernel, linux-f2fs-devel

On 05/05, Chao Yu wrote:
> On 2020-5-4 22:30, Jaegeuk Kim wrote:
> > From: Daeho Jeong <daehojeong@google.com>
> > 
> > Currently, the zstd compression buffer size is set to the cluster size
> > minus one page and the header size. With this setting, zstd compression
> > always reports success even when the compressed data fails to fit into
> > the buffer, and reading the cluster back later returns an I/O error due
> > to the corrupted compressed data.
> 
> What's the root cause of this issue? I didn't get it.
> 
> > 
> > Signed-off-by: Daeho Jeong <daehojeong@google.com>
> > ---
> >  fs/f2fs/compress.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> > index 4c7eaeee52336..a9fa8049b295f 100644
> > --- a/fs/f2fs/compress.c
> > +++ b/fs/f2fs/compress.c
> > @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
> >  	cc->private = workspace;
> >  	cc->private2 = stream;
> > 
> > -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> > +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
> 
> On my machine, the value is 66572, which is much larger than the size of
> the dst buffer, so where do we tell the zstd compressor the real size of
> the dst buffer? Otherwise, if the compressed data size is larger than the
> dst buffer size, we may suffer an overflow when flushing compressed data
> into the dst buffer.

Could you give compress_log_size=2 a try and check whether decompression works?

> 
> >  	return 0;
> >  }
> > 
> > @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
> >  	ZSTD_inBuffer inbuf;
> >  	ZSTD_outBuffer outbuf;
> >  	int src_size = cc->rlen;
> > -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> > +	int dst_size = cc->clen;
> >  	int ret;
> > 
> >  	inbuf.pos = 0;
> > 



* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-05 23:05   ` Jaegeuk Kim
@ 2020-05-06  7:45     ` Chao Yu
  2020-05-06 14:56       ` Jaegeuk Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Chao Yu @ 2020-05-06  7:45 UTC (permalink / raw)
  To: Jaegeuk Kim, Chao Yu; +Cc: kernel-team, linux-kernel, linux-f2fs-devel

On 2020/5/6 7:05, Jaegeuk Kim wrote:
> On 05/05, Chao Yu wrote:
>> On 2020-5-4 22:30, Jaegeuk Kim wrote:
>>> From: Daeho Jeong <daehojeong@google.com>
>>>
>>> Currently, the zstd compression buffer size is set to the cluster size
>>> minus one page and the header size. With this setting, zstd compression
>>> always reports success even when the compressed data fails to fit into
>>> the buffer, and reading the cluster back later returns an I/O error due
>>> to the corrupted compressed data.
>>
>> What's the root cause of this issue? I didn't get it.
>>
>>>
>>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
>>> ---
>>>  fs/f2fs/compress.c | 4 ++--
>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>>> index 4c7eaeee52336..a9fa8049b295f 100644
>>> --- a/fs/f2fs/compress.c
>>> +++ b/fs/f2fs/compress.c
>>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>>>  	cc->private = workspace;
>>>  	cc->private2 = stream;
>>>
>>> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
>>
>> On my machine, the value is 66572, which is much larger than the size of
>> the dst buffer, so where do we tell the zstd compressor the real size of
>> the dst buffer? Otherwise, if the compressed data size is larger than the
>> dst buffer size, we may suffer an overflow when flushing compressed data
>> into the dst buffer.
> 
> Could you give compress_log_size=2 a try and check whether decompression works?

I tried some samples before submitting the patch. Did you encounter
application data corruption when using the zstd algorithm? If so, let me
check this issue.

Thanks,

> 
>>
>>>  	return 0;
>>>  }
>>>
>>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>>>  	ZSTD_inBuffer inbuf;
>>>  	ZSTD_outBuffer outbuf;
>>>  	int src_size = cc->rlen;
>>> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>> +	int dst_size = cc->clen;
>>>  	int ret;
>>>
>>>  	inbuf.pos = 0;
>>>

* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-06  7:45     ` Chao Yu
@ 2020-05-06 14:56       ` Jaegeuk Kim
  2020-05-07  2:48         ` Chao Yu
  2020-05-07  9:16         ` Chao Yu
  0 siblings, 2 replies; 8+ messages in thread
From: Jaegeuk Kim @ 2020-05-06 14:56 UTC (permalink / raw)
  To: Chao Yu; +Cc: kernel-team, linux-kernel, linux-f2fs-devel

On 05/06, Chao Yu wrote:
> On 2020/5/6 7:05, Jaegeuk Kim wrote:
> > On 05/05, Chao Yu wrote:
> >> On 2020-5-4 22:30, Jaegeuk Kim wrote:
> >>> From: Daeho Jeong <daehojeong@google.com>
> >>>
> >>> Currently, the zstd compression buffer size is set to the cluster size
> >>> minus one page and the header size. With this setting, zstd compression
> >>> always reports success even when the compressed data fails to fit into
> >>> the buffer, and reading the cluster back later returns an I/O error due
> >>> to the corrupted compressed data.
> >>
> >> What's the root cause of this issue? I didn't get it.
> >>
> >>>
> >>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
> >>> ---
> >>>  fs/f2fs/compress.c | 4 ++--
> >>>  1 file changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> >>> index 4c7eaeee52336..a9fa8049b295f 100644
> >>> --- a/fs/f2fs/compress.c
> >>> +++ b/fs/f2fs/compress.c
> >>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
> >>>  	cc->private = workspace;
> >>>  	cc->private2 = stream;
> >>>
> >>> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> >>> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
> >>
> >> On my machine, the value is 66572, which is much larger than the size of
> >> the dst buffer, so where do we tell the zstd compressor the real size of
> >> the dst buffer? Otherwise, if the compressed data size is larger than the
> >> dst buffer size, we may suffer an overflow when flushing compressed data
> >> into the dst buffer.
> > 
> > Could you give compress_log_size=2 a try and check whether decompression works?
> 
> I tried some samples before submitting the patch. Did you encounter
> application data corruption when using the zstd algorithm? If so, let me
> check this issue.

Daeho reported:
1. cp -a src_file comp_dir/dst_file (comp_dir is a directory for compression)
2. sync comp_dir/dst_file
3. echo 3 > /proc/sys/vm/drop_caches
4. cat comp_dir/dst_file > dump (it returns I/O error depending on the file).

> 
> Thanks,
> 
> > 
> >>
> >>>  	return 0;
> >>>  }
> >>>
> >>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
> >>>  	ZSTD_inBuffer inbuf;
> >>>  	ZSTD_outBuffer outbuf;
> >>>  	int src_size = cc->rlen;
> >>> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
> >>> +	int dst_size = cc->clen;
> >>>  	int ret;
> >>>
> >>>  	inbuf.pos = 0;
> >>>

* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-06 14:56       ` Jaegeuk Kim
@ 2020-05-07  2:48         ` Chao Yu
  2020-05-07  9:16         ` Chao Yu
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu @ 2020-05-07  2:48 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: kernel-team, linux-kernel, linux-f2fs-devel

On 2020/5/6 22:56, Jaegeuk Kim wrote:
> On 05/06, Chao Yu wrote:
>> On 2020/5/6 7:05, Jaegeuk Kim wrote:
>>> On 05/05, Chao Yu wrote:
>>>> On 2020-5-4 22:30, Jaegeuk Kim wrote:
>>>>> From: Daeho Jeong <daehojeong@google.com>
>>>>>
>>>>> Currently, the zstd compression buffer size is set to the cluster size
>>>>> minus one page and the header size. With this setting, zstd compression
>>>>> always reports success even when the compressed data fails to fit into
>>>>> the buffer, and reading the cluster back later returns an I/O error due
>>>>> to the corrupted compressed data.
>>>>
>>>> What's the root cause of this issue? I didn't get it.
>>>>
>>>>>
>>>>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
>>>>> ---
>>>>>  fs/f2fs/compress.c | 4 ++--
>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>>>>> index 4c7eaeee52336..a9fa8049b295f 100644
>>>>> --- a/fs/f2fs/compress.c
>>>>> +++ b/fs/f2fs/compress.c
>>>>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>>>>>  	cc->private = workspace;
>>>>>  	cc->private2 = stream;
>>>>>
>>>>> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
>>>>
>>>> On my machine, the value is 66572, which is much larger than the size of
>>>> the dst buffer, so where do we tell the zstd compressor the real size of
>>>> the dst buffer? Otherwise, if the compressed data size is larger than the
>>>> dst buffer size, we may suffer an overflow when flushing compressed data
>>>> into the dst buffer.
>>>
>>> Could you give compress_log_size=2 a try and check whether decompression works?
>>
>> I tried some samples before submitting the patch. Did you encounter
>> application data corruption when using the zstd algorithm? If so, let me
>> check this issue.
> 
> Daeho reported:
> 1. cp -a src_file comp_dir/dst_file (comp_dir is a directory for compression)
> 2. sync comp_dir/dst_file
> 3. echo 3 > /proc/sys/vm/drop_caches
> 4. cat comp_dir/dst_file > dump (it returns I/O error depending on the file).

Let me check this issue.

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>>>
>>>>>  	return 0;
>>>>>  }
>>>>>
>>>>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>>>>>  	ZSTD_inBuffer inbuf;
>>>>>  	ZSTD_outBuffer outbuf;
>>>>>  	int src_size = cc->rlen;
>>>>> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> +	int dst_size = cc->clen;
>>>>>  	int ret;
>>>>>
>>>>>  	inbuf.pos = 0;
>>>>>

* Re: [f2fs-dev] [PATCH] f2fs: change maximum zstd compression buffer size
  2020-05-06 14:56       ` Jaegeuk Kim
  2020-05-07  2:48         ` Chao Yu
@ 2020-05-07  9:16         ` Chao Yu
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu @ 2020-05-07  9:16 UTC (permalink / raw)
  To: Jaegeuk Kim; +Cc: kernel-team, linux-kernel, linux-f2fs-devel

On 2020/5/6 22:56, Jaegeuk Kim wrote:
> On 05/06, Chao Yu wrote:
>> On 2020/5/6 7:05, Jaegeuk Kim wrote:
>>> On 05/05, Chao Yu wrote:
>>>> On 2020-5-4 22:30, Jaegeuk Kim wrote:
>>>>> From: Daeho Jeong <daehojeong@google.com>
>>>>>
>>>>> Currently, the zstd compression buffer size is set to the cluster size
>>>>> minus one page and the header size. With this setting, zstd compression
>>>>> always reports success even when the compressed data fails to fit into
>>>>> the buffer, and reading the cluster back later returns an I/O error due
>>>>> to the corrupted compressed data.
>>>>
>>>> What's the root cause of this issue? I didn't get it.
>>>>
>>>>>
>>>>> Signed-off-by: Daeho Jeong <daehojeong@google.com>
>>>>> ---
>>>>>  fs/f2fs/compress.c | 4 ++--
>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>>>>> index 4c7eaeee52336..a9fa8049b295f 100644
>>>>> --- a/fs/f2fs/compress.c
>>>>> +++ b/fs/f2fs/compress.c
>>>>> @@ -313,7 +313,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
>>>>>  	cc->private = workspace;
>>>>>  	cc->private2 = stream;
>>>>>
>>>>> -	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> +	cc->clen = ZSTD_compressBound(PAGE_SIZE << cc->log_cluster_size);
>>>>
>>>> On my machine, the value is 66572, which is much larger than the size of
>>>> the dst buffer, so where do we tell the zstd compressor the real size of
>>>> the dst buffer? Otherwise, if the compressed data size is larger than the
>>>> dst buffer size, we may suffer an overflow when flushing compressed data
>>>> into the dst buffer.
>>>
>>> Could you give compress_log_size=2 a try and check whether decompression works?
>>
>> I tried some samples before submitting the patch. Did you encounter
>> application data corruption when using the zstd algorithm? If so, let me
>> check this issue.
> 
> Daeho reported:
> 1. cp -a src_file comp_dir/dst_file (comp_dir is a directory for compression)
> 2. sync comp_dir/dst_file
> 3. echo 3 > /proc/sys/vm/drop_caches
> 4. cat comp_dir/dst_file > dump (it returns I/O error depending on the file).

 * ZSTD_endStream() to complete the flush. It returns the number of bytes left
 * in the internal buffer and must be called until it returns 0.

It looks like we need to check the return value of ZSTD_endStream() to see
whether the dst buffer has enough space to store all the compressed data.
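
A minimal sketch of such a check in zstd_compress_pages(), assuming the
old kernel zstd streaming API (the -EAGAIN fallback is an illustrative
choice, not necessarily the final fix):

	ret = ZSTD_compressStream(stream, &outbuf, &inbuf);
	if (ZSTD_isError(ret))
		return -EIO;

	ret = ZSTD_endStream(stream, &outbuf);
	if (ZSTD_isError(ret))
		return -EIO;

	/*
	 * A non-zero return means zstd still holds unflushed bytes in its
	 * internal buffer, i.e. outbuf was too small; treat the cluster as
	 * incompressible rather than writing truncated data.
	 */
	if (ret)
		return -EAGAIN;

	cc->clen = outbuf.pos;
	return 0;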

Thanks,

> 
>>
>> Thanks,
>>
>>>
>>>>
>>>>>  	return 0;
>>>>>  }
>>>>>
>>>>> @@ -330,7 +330,7 @@ static int zstd_compress_pages(struct compress_ctx *cc)
>>>>>  	ZSTD_inBuffer inbuf;
>>>>>  	ZSTD_outBuffer outbuf;
>>>>>  	int src_size = cc->rlen;
>>>>> -	int dst_size = src_size - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>>>> +	int dst_size = cc->clen;
>>>>>  	int ret;
>>>>>
>>>>>  	inbuf.pos = 0;
>>>>>
