* FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
@ 2017-04-18 12:18 gregkh
  2017-04-19  0:45 ` Minchan Kim
  0 siblings, 1 reply; 8+ messages in thread
From: gregkh @ 2017-04-18 12:18 UTC (permalink / raw)
  To: minchan, akpm, sergey.senozhatsky, stable, torvalds; +Cc: stable


The patch below does not apply to the 3.18-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Thu, 13 Apr 2017 14:56:37 -0700
Subject: [PATCH] zram: do not use copy_page with non-page aligned address

copy_page() is an optimized memcpy() for page-aligned addresses.  If it
is used with a non-page-aligned address, it can corrupt memory, which
means system corruption.  With zram, this can happen with

1. 64K architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page and zram allocates it via kmalloc.
With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-size aligned
address.  And finally, copy_page(mem, cmem) corrupts memory.

So, this patch changes it to memcpy.

Actually, we don't need to change the zram_bvec_write part because
zsmalloc returns a page-aligned address in the case of the PAGE_SIZE
class, but it's not good to rely on the internals of zsmalloc.

Note:
 When this patch is merged to stable, clear_page should be fixed, too.
 Unfortunately, recent zram removed it via the "same page merge" feature,
 so it's hard to backport this patch to the -stable tree.

I will handle it when I receive mail from the stable tree maintainer
asking to merge this patch as a backport.

Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 03a7408e090d..0c09d4256108 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -523,7 +523,7 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
 
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE) {
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	} else {
 		struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
 
@@ -717,7 +717,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-18 12:18 FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree gregkh
@ 2017-04-19  0:45 ` Minchan Kim
  2017-04-19  5:08   ` Sergey Senozhatsky
  2017-04-19 11:40   ` Greg KH
  0 siblings, 2 replies; 8+ messages in thread
From: Minchan Kim @ 2017-04-19  0:45 UTC (permalink / raw)
  To: gregkh; +Cc: akpm, sergey.senozhatsky, stable, torvalds

On Tue, Apr 18, 2017 at 02:18:28PM +0200, gregkh@linuxfoundation.org wrote:
> 
> The patch below does not apply to the 3.18-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@vger.kernel.org>.
> 
> thanks,
> 
> greg k-h

Hi Greg,

I backported this patch for v3.18.49. Could you merge this?
Sergey, could you review this?

From 042b3d57120146ab24b284713d44a9c1b37935b3 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan@kernel.org>
Date: Wed, 19 Apr 2017 09:28:39 +0900
Subject: [PATCH for v3.18] zram: do not use copy_page with non-page aligned address

commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream

copy_page() is an optimized memcpy() for page-aligned addresses.  If it
is used with a non-page-aligned address, it can corrupt memory, which
means system corruption.  With zram, this can happen with

1. 64K architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page and zram allocates it via kmalloc.
With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-size aligned
address.  And finally, copy_page(mem, cmem) corrupts memory.

So, this patch changes it to memcpy/memset.

Actually, we don't need to change the zram_bvec_write part because
zsmalloc returns a page-aligned address in the case of the PAGE_SIZE
class, but it's not good to rely on the internals of zsmalloc.

And I intentionally didn't change clear_page in handle_zero_page
because there is clearly no problem there: kmap_atomic guarantees the
address is page-size aligned.

Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/block/zram/zram_drv.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3920ee45aa59..7e94459a489a 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -431,13 +431,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
 
 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
 		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-		clear_page(mem);
+		memset(mem, 0, PAGE_SIZE);
 		return 0;
 	}
 
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE)
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	else
 		ret = zcomp_decompress(zram->comp, cmem, size, mem);
 	zs_unmap_object(meta->mem_pool, handle);
@@ -612,7 +612,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);
-- 
2.7.4


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  0:45 ` Minchan Kim
@ 2017-04-19  5:08   ` Sergey Senozhatsky
  2017-04-19  6:20     ` Minchan Kim
  2017-04-19 11:40   ` Greg KH
  1 sibling, 1 reply; 8+ messages in thread
From: Sergey Senozhatsky @ 2017-04-19  5:08 UTC (permalink / raw)
  To: Minchan Kim; +Cc: gregkh, akpm, sergey.senozhatsky, stable, torvalds

Hello,

On (04/19/17 09:45), Minchan Kim wrote:
> index 3920ee45aa59..7e94459a489a 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -431,13 +431,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
>  
>  	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
>  		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> -		clear_page(mem);
> +		memset(mem, 0, PAGE_SIZE);

after a quick look I haven't found a clear_page() that would impose
alignment requirements. but ok, `mem' _can_ be kmalloc-ed there, yes.

a side question:
how bad would it hurt to switch partial IO to a page-size aligned
allocator instead of the slab allocator? then we could just use the
potentially faster *_page() functions instead of the mem* functions.
I think people normally don't see partial IOs (well, unless they use
some 'weird' filesystems), while copy_page()->memcpy() happens to everyone.


>  	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
>  	if (size == PAGE_SIZE)
> -		copy_page(mem, cmem);
> +		memcpy(mem, cmem, PAGE_SIZE);

I think here we have both page_size aligned `mem' and page_size aligned
`cmem'. `mem' is guaranteed to be kmapp-ed ->bv_page, which is allocated
by alloc_page(), and zs_map_object() returns a kmapp-ed alloc_page()-ed
page for huge)class object (which is the case here, isn't it?). so we can
keep copy_page()? am I wrong? I'm afraid we can slow down a regularly
executed (semi-hot) path here.


> @@ -612,7 +612,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>  
>  	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
>  		src = kmap_atomic(page);
> -		copy_page(cmem, src);
> +		memcpy(cmem, src, PAGE_SIZE);

here `src' is kmapp-ed ->bv_page, which is always page_size aligned.
it's allocated by alloc_page(). `cmem' is also page_size aligned,
isn't it? seems that we can use copy_page(). am I missing something?
I'm afraid we can slow down a regularly executed (semi-hot) path here.

	-ss


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  5:08   ` Sergey Senozhatsky
@ 2017-04-19  6:20     ` Minchan Kim
  2017-04-19  7:11       ` Sergey Senozhatsky
  0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2017-04-19  6:20 UTC (permalink / raw)
  To: Sergey Senozhatsky; +Cc: gregkh, akpm, sergey.senozhatsky, stable, torvalds

On Wed, Apr 19, 2017 at 02:08:53PM +0900, Sergey Senozhatsky wrote:
> Hello,
> 
> On (04/19/17 09:45), Minchan Kim wrote:
> > index 3920ee45aa59..7e94459a489a 100644
> > --- a/drivers/block/zram/zram_drv.c
> > +++ b/drivers/block/zram/zram_drv.c
> > @@ -431,13 +431,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
> >  
> >  	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
> >  		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
> > -		clear_page(mem);
> > +		memset(mem, 0, PAGE_SIZE);
> 
> after a quick look I haven't found a clear_page() that would impose
> alignment requirements. but ok, `mem' _can_ be kmalloc-ed there, yes.
> 
> a side question:
> how bad would it hurt to switch to page_size aligned allocator partial
> IO instead of slab allocator? and just use potentially faster *_page()
> functions instead of mem* functions? I think people normally don't see

You meant alloc_page, or did you want to create our own slab cache with
a PAGE_SIZE alignment requirement?
Either way, it makes more changes, so it would not be proper -stable
material, IMHO. As well, this path (zero-page) would be rare, so it
wouldn't hurt performance, either.

> partial IOs (well, unless they use some 'weird' filesystems). while
> copy_page()->memcpy() happens to everyone.
> 
> 
> >  	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
> >  	if (size == PAGE_SIZE)
> > -		copy_page(mem, cmem);
> > +		memcpy(mem, cmem, PAGE_SIZE);
> 
> I think here we have both page_size aligned `mem' and page_size aligned
> `cmem'. `mem' is guaranteed to be kmapp-ed ->bv_page, which is allocated
> by alloc_page(), and zs_map_object() returns a kmapp-ed alloc_page()-ed
> page for a huge-class object (which is the case here, isn't it?). so we can
> keep copy_page()? am I wrong? I'm afraid we can slow down a regularly

No, you're right, but as I wrote in the description, I don't want to
rely on zsmalloc's internal implementation. And this bad compression
ratio path would be rare, too.

> executed (semi-hot) path here.
> 
> 
> > @@ -612,7 +612,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >  
> >  	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
> >  		src = kmap_atomic(page);
> > -		copy_page(cmem, src);
> > +		memcpy(cmem, src, PAGE_SIZE);
> 
> here `src' is kmapp-ed ->bv_page, which is always page_size aligned.
> it's allocated by alloc_page(). `cmem' is also page_size aligned,
> isn't it? seems that we can use copy_page(). am I missing something?
> I'm afraid we can slow down a regularly executed (semi-hot) path here.

I understand your concern about the slowdown, but I believe these are
not performance-sensitive paths because they are rare, so I wanted to
bias toward safety for the -stable material; we can make it fast
in mainline. :)


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  6:20     ` Minchan Kim
@ 2017-04-19  7:11       ` Sergey Senozhatsky
  2017-04-19  7:22         ` Minchan Kim
  0 siblings, 1 reply; 8+ messages in thread
From: Sergey Senozhatsky @ 2017-04-19  7:11 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, gregkh, akpm, sergey.senozhatsky, stable, torvalds

On (04/19/17 15:20), Minchan Kim wrote:
[..]
> > after a quick look I haven't found a clear_page() that would impose
> > alignment requirements. but ok, `mem' _can_ be kmalloc-ed there, yes.
> > 
> > a side question:
> > how bad would it hurt to switch to page_size aligned allocator partial
> > IO instead of slab allocator? and just use potentially faster *_page()
> > functions instead of mem* functions? I think people normally don't see
> 
> You meant alloc_page or wanted to create new own slab cache with
> PAGE_SIZE alignment requirement?

alloc_page. yes.

> Either way, it makes more changes so it would be not proper for
> -stable material, IMHO. As well, this path(zero-page) would be rare
> so it wouldn't hurt performance, either.

oh, sure. I meant upstream, not -stable. sorry for confusion.

> > partial IOs (well, unless they use some 'weird' filesystems). while
> > copy_page()->memcpy() happens to everyone.
> > 
> > 
> > >  	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
> > >  	if (size == PAGE_SIZE)
> > > -		copy_page(mem, cmem);
> > > +		memcpy(mem, cmem, PAGE_SIZE);
> > 
> > I think here we have both page_size aligned `mem' and page_size aligned
> > `cmem'. `mem' is guaranteed to be kmapp-ed ->bv_page, which is allocated
> > by alloc_page(), and zs_map_object() returns a kmapp-ed alloc_page()-ed
> > page for huge)class object (which is the case here, isn't it?). so we can
> > keep copy_page()? am I wrong? I'm afraid we can slow down a regularly
> 
> No, you're right but as I wrote in description, I don't want to rely
> on zsmalloc's internal implementation.

well, zsmalloc() won't change that much in -stable ;)
but up to you.

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>



> And this bad compress ratio path would be rare, too.

"bad compression" is not uncommon. I wish it were, but it's not. we can't
really guarantee/expect anything; it's up to the compression algorithm and the data.


I added two zram.stat counters -- bad_compression and good_compression.
and did

+       if (unlikely(comp_len > max_zpage_size)) {
+               atomic64_inc(&zram->stats.bad_compression);
                comp_len = PAGE_SIZE;
+       } else {
+               atomic64_inc(&zram->stats.good_compression);
+       }

in zram_compress().

and in mm_stat_show()

	        ret = scnprintf(buf, PAGE_SIZE,
-                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu\n",
+                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",

...

+                       (u64)atomic64_read(&zram->stats.bad_compression),
+                       (u64)atomic64_read(&zram->stats.good_compression));


so bad_compression first, good_compression last.


and here are some of the results (LZO):

cat /sys/block/zram0/mm_stat 
224010240 186613588 188694528        0 188694528    15958        0    37818    16878

'bad' compression twice as common as 'good' compression on some of my workloads.

cat /sys/block/zram0/mm_stat 
482918400 404083224 409276416        0 409276416    15138        0    76789    45188

# cat /sys/block/zram0/mm_stat
919818240 581986600 596762624        0 596762624    14296        0   119052   187966



=== a different test:

# cat /sys/block/zram0/mm_stat 
2976886784 2061379701 2087194624        0 2087194624     5752        0   387983   341593



=== another test:

# cat /sys/block/zram0/mm_stat 
975462400 341683056 356196352        0 356196352     9172        0    15833   224226

ok, not too bad.

# cat /sys/block/zram0/mm_stat
1841291264 933397356 958345216        0 958345216     5940        0   116175   336128

# cat /sys/block/zram0/mm_stat
3100012544 2183029827 2208866304        0 2208866304     5447        0   417539   342450

# cat /sys/block/zram0/mm_stat 
3338133504 2265031857 2294886400        0 2294886400     1199        0   420382   402257

meh.

# cat /sys/block/zram0/mm_stat 
3057053696 2128156096 2153943040        0 2295697408      104     3903   420530   420055

close to 50/50.

# cat /sys/block/zram0/mm_stat 
3094814720 2142308405 2168918016        0 2295697408      296     3903   420727   431247

# cat /sys/block/zram0/mm_stat
3177267200 2172831422 2201022464        0 2295697408      346     3903   421139   455748

# cat /sys/block/zram0/mm_stat 
3338510336 2269447044 2299387904        0 2299387904     1179     3983   434806   483659

end of test.


[..]
> > here `src' is kmapp-ed ->bv_page, which is always page_size aligned.
> > it's allocated by alloc_page(). `cmem' is also page_size aligned,
> > isn't it? seems that we can use copy_page(). am I missing something?
> > I'm afraid we can slow down a regularly executed (semi-hot) path here.
> 
> > I understand your concern about slow down but I believe there are
> not performance sensitive paths because they are rare so I wanted to
> bias to safety for -stable material and we can make it fast
> in mainline. :)

ok.

	-ss


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  7:11       ` Sergey Senozhatsky
@ 2017-04-19  7:22         ` Minchan Kim
  2017-04-19  7:47           ` Sergey Senozhatsky
  0 siblings, 1 reply; 8+ messages in thread
From: Minchan Kim @ 2017-04-19  7:22 UTC (permalink / raw)
  To: Sergey Senozhatsky; +Cc: gregkh, akpm, sergey.senozhatsky, stable, torvalds

On Wed, Apr 19, 2017 at 04:11:20PM +0900, Sergey Senozhatsky wrote:
> On (04/19/17 15:20), Minchan Kim wrote:
> [..]
> > > after a quick look I haven't found a clear_page() that would impose
> > > alignment requirements. but ok, `mem' _can_ be kmalloc-ed there, yes.
> > > 
> > > a side question:
> > > how bad would it hurt to switch to page_size aligned allocator partial
> > > IO instead of slab allocator? and just use potentially faster *_page()
> > > functions instead of mem* functions? I think people normally don't see
> > 
> > You meant alloc_page or wanted to create new own slab cache with
> > PAGE_SIZE alignment requirement?
> 
> alloc_page. yes.
> 
> > Either way, it makes more changes so it would be not proper for
> > -stable material, IMHO. As well, this path(zero-page) would be rare
> > so it wouldn't hurt performance, either.
> 
> oh, sure. I meant upstream, not -stable. sorry for confusion.
> 
> > > partial IOs (well, unless they use some 'weird' filesystems). while
> > > copy_page()->memcpy() happens to everyone.
> > > 
> > > 
> > > >  	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
> > > >  	if (size == PAGE_SIZE)
> > > > -		copy_page(mem, cmem);
> > > > +		memcpy(mem, cmem, PAGE_SIZE);
> > > 
> > > I think here we have both page_size aligned `mem' and page_size aligned
> > > `cmem'. `mem' is guaranteed to be kmapp-ed ->bv_page, which is allocated
> > > by alloc_page(), and zs_map_object() returns a kmapp-ed alloc_page()-ed
> > > > page for a huge-class object (which is the case here, isn't it?). so we can
> > > keep copy_page()? am I wrong? I'm afraid we can slow down a regularly
> > 
> > No, you're right but as I wrote in description, I don't want to rely
> > on zsmalloc's internal implementation.
> 
> well, zsmalloc() won't change that much in -stable ;)
> but up to you.
> 
> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>

Thanks!

> 
> 
> 
> > And this bad compress ratio path would be rare, too.
> 
> "bad compression" is not uncommon. I wish it was, but it's not. we can't
> really guarantee/expect anything, it's up to compression algorithm and data.

True.  Thanks for the testing!
What are your workloads?
I think users shouldn't use zram for workloads with that much badly
compressible data if it isn't temporary :).
Anyway, okay, let's enhance it for recent zram in mainline.

Thanks!

> 
> 
> I added two zram.stat counters -- bad_compression and good_compression.
> and did
> 
> +       if (unlikely(comp_len > max_zpage_size)) {
> +               atomic64_inc(&zram->stats.bad_compression);
>                 comp_len = PAGE_SIZE;
> +       } else {
> +               atomic64_inc(&zram->stats.good_compression);
> +       }
> 
> in zram_compress().
> 
> and in mm_stat_show()
> 
> 	        ret = scnprintf(buf, PAGE_SIZE,
> -                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu\n",
> +                       "%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",
> 
> ...
> 
> +                       (u64)atomic64_read(&zram->stats.bad_compression),
> +                       (u64)atomic64_read(&zram->stats.good_compression));
> 
> 
> so bad_compression first, good_compression last.
> 
> 
> and here are some of the results (LZO):
> 
> cat /sys/block/zram0/mm_stat 
> 224010240 186613588 188694528        0 188694528    15958        0    37818    16878
> 
> 'bad' compression twice as common as 'good' compression on some of my workloads.
> 
> cat /sys/block/zram0/mm_stat 
> 482918400 404083224 409276416        0 409276416    15138        0    76789    45188
> 
> # cat /sys/block/zram0/mm_stat
> 919818240 581986600 596762624        0 596762624    14296        0   119052   187966
> 
> 
> 
> === a different test:
> 
> # cat /sys/block/zram0/mm_stat 
> 2976886784 2061379701 2087194624        0 2087194624     5752        0   387983   341593
> 
> 
> 
> === another test:
> 
> # cat /sys/block/zram0/mm_stat 
> 975462400 341683056 356196352        0 356196352     9172        0    15833   224226
> 
> ok, not too bad.
> 
> # cat /sys/block/zram0/mm_stat
> 1841291264 933397356 958345216        0 958345216     5940        0   116175   336128
> 
> # cat /sys/block/zram0/mm_stat
> 3100012544 2183029827 2208866304        0 2208866304     5447        0   417539   342450
> 
> # cat /sys/block/zram0/mm_stat 
> 3338133504 2265031857 2294886400        0 2294886400     1199        0   420382   402257
> 
> meh.
> 
> # cat /sys/block/zram0/mm_stat 
> 3057053696 2128156096 2153943040        0 2295697408      104     3903   420530   420055
> 
> close to 50/50.
> 
> # cat /sys/block/zram0/mm_stat 
> 3094814720 2142308405 2168918016        0 2295697408      296     3903   420727   431247
> 
> # cat /sys/block/zram0/mm_stat
> 3177267200 2172831422 2201022464        0 2295697408      346     3903   421139   455748
> 
> # cat /sys/block/zram0/mm_stat 
> 3338510336 2269447044 2299387904        0 2299387904     1179     3983   434806   483659
> 
> end of test.
> 
> 
> [..]
> > > here `src' is kmapp-ed ->bv_page, which is always page_size aligned.
> > > it's allocated by alloc_page(). `cmem' is also page_size aligned,
> > > isn't it? seems that we can use copy_page(). am I missing something?
> > > I'm afraid we can slow down a regularly executed (semi-hot) path here.
> > 
> > > I understand your concern about slow down but I believe there are
> > not performance sensitive paths because they are rare so I wanted to
> > bias to safety for -stable material and we can make it fast
> > in mainline. :)
> 
> ok.
> 
> 	-ss


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  7:22         ` Minchan Kim
@ 2017-04-19  7:47           ` Sergey Senozhatsky
  0 siblings, 0 replies; 8+ messages in thread
From: Sergey Senozhatsky @ 2017-04-19  7:47 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Sergey Senozhatsky, gregkh, akpm, sergey.senozhatsky, stable, torvalds

On (04/19/17 16:22), Minchan Kim wrote:
> > > And this bad compress ratio path would be rare, too.
> > 
> > "bad compression" is not uncommon. I wish it was, but it's not. we can't
> > really guarantee/expect anything, it's up to compression algorithm and data.
> 
> True.  Thanks for the testing!
> What are your workloads?

on that particular box -- compilation. text + binary files.

we swap out pages with random binary data, basically, so I wouldn't
expect "bad compression" to be uncommon on embedded devices.


> I think users shouldn't use zram for workloads with that much badly
> compressible data if it isn't temporary :).

haha, maybe.

p.s.

maybe there is nothing to fix upstream and it was a false alarm
on my side. it's probably logical to expect that an arch that bothered
to provide a well-optimized copy_page() also provides an 'equally'
optimized memcpy(). if so, then we should be fine. sorry for the noise.

	-ss


* Re: FAILED: patch "[PATCH] zram: do not use copy_page with non-page aligned address" failed to apply to 3.18-stable tree
  2017-04-19  0:45 ` Minchan Kim
  2017-04-19  5:08   ` Sergey Senozhatsky
@ 2017-04-19 11:40   ` Greg KH
  1 sibling, 0 replies; 8+ messages in thread
From: Greg KH @ 2017-04-19 11:40 UTC (permalink / raw)
  To: Minchan Kim; +Cc: akpm, sergey.senozhatsky, stable, torvalds

On Wed, Apr 19, 2017 at 09:45:47AM +0900, Minchan Kim wrote:
> On Tue, Apr 18, 2017 at 02:18:28PM +0200, gregkh@linuxfoundation.org wrote:
> > 
> > The patch below does not apply to the 3.18-stable tree.
> > If someone wants it applied there, or to any other stable or longterm
> > tree, then please email the backport, including the original git commit
> > id to <stable@vger.kernel.org>.
> > 
> > thanks,
> > 
> > greg k-h
> 
> Hi Greg,
> 
> I backported this patch for v3.18.49. Could you merge this?

Now merged, thanks!

greg k-h

