Date: Tue, 3 May 2016 14:03:48 +0900
From: Minchan Kim <minchan@kernel.org>
To: Sergey Senozhatsky
Cc: Sergey Senozhatsky, Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] zram: user per-cpu compression streams
Message-ID: <20160503050348.GA17316@bbox>
In-Reply-To: <20160503042902.GA25545@swordfish>

On Tue, May 03, 2016 at 01:29:02PM +0900, Sergey Senozhatsky wrote:
> On (05/03/16 11:30), Sergey Senozhatsky wrote:
> > > We are worried about ending up back at the no-per-cpu option, but I
> > > really don't want that. If duplicate compression really is a problem
> > > (which is quite unlikely), we should first try to solve that problem
> > > itself in some other way rather than roll back to the old code.
> > >
> > > I hope we can. So let's not worry too much about adding a new dup stat. :)
> >
> > ok, no prob. do you want it as a separate sysfs node or a column in mm_stat?
> > I'd prefer an mm_stat column, or somewhere in those cumulative files; not a
> > dedicated node: we decided to get rid of those some time ago.
>
> will io_stat node work for you?

At first I thought io_stat would be the better place, but on second
thought I'd like to withdraw that: I think io_stat should go away
entirely.

	failed_reads
	failed_writes
	invalid_io

The above are really unneeded. If something fails, the layers on top of
zram (filesystems or swap) should emit the warning, so I don't think we
need to account for it in the zram layer.

	notify_free

From the block device's point of view this is a kind of discard command,
so the generic block layer should account for it, just like reads and
writes. Once the block layer does that, the only thing left under
notify_free is zram_slot_free_notify (the current path is sketched
below), and we could move that from io_stat to mm_stat, because it is
about memory, not block I/O.

With the above in mind, I suggest we add nothing more to io_stat from
now on and remove it at some point. Instead, let's add the new dup stat.
What do you think?

> I'll submit a formal patch later today. when you have time, can you
> take a look at http://marc.info/?l=linux-kernel&m=146217628030970 ?

Oops, sorry, I missed it. I will take a look.

> I think we can fold this one into 0002. it will make 0002 slightly
> bigger, but there's nothing complicated in there, just cleanup.
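For reference, the notification path I mean is roughly the following,
condensed from the current zram_drv.c (a sketch, not verbatim):

static void zram_slot_free_notify(struct block_device *bdev,
				  unsigned long index)
{
	struct zram *zram = bdev->bd_disk->private_data;
	struct zram_meta *meta = zram->meta;

	bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value);
	/* memory side: release the zsmalloc object backing this slot */
	zram_free_page(zram, index);
	bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);

	/* this would be the only notify_free source left once the
	 * block layer accounts discards itself */
	atomic64_inc(&zram->stats.notify_free);
}

Everything it touches is zsmalloc memory, which is why mm_stat looks
like the right home for the counter.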
> 
> ====
> 
> From: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Subject: [PATCH] zram: export the number of re-compressions
> 
> Make the number of re-compressions visible via the io_stat node,
> so we will be able to track down any issues caused by per-cpu
> compression streams.
> 
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Suggested-by: Minchan Kim <minchan@kernel.org>
> ---
>  Documentation/blockdev/zram.txt | 3 +++
>  drivers/block/zram/zram_drv.c   | 7 +++++--
>  drivers/block/zram/zram_drv.h   | 1 +
>  3 files changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
> index 5bda503..386d260 100644
> --- a/Documentation/blockdev/zram.txt
> +++ b/Documentation/blockdev/zram.txt
> @@ -183,6 +183,8 @@ mem_limit         RW    the maximum amount of memory ZRAM can use to store
>  pages_compacted  RO    the number of pages freed during compaction
>                         (available only via zram<id>/mm_stat node)
>  compact          WO    trigger memory compaction
> +num_recompress   RO    the number of times fast compression paths failed
> +                       and zram performed re-compression via a slow path
>  
>  WARNING
>  =======
> @@ -215,6 +217,7 @@ whitespace:
>  	failed_writes
>  	invalid_io
>  	notify_free
> +	num_recompress
>  
>  File /sys/block/zram<id>/mm_stat
>  
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 817e511..11b19c9 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -395,7 +395,8 @@ static ssize_t io_stat_show(struct device *dev,
>  			(u64)atomic64_read(&zram->stats.failed_reads),
>  			(u64)atomic64_read(&zram->stats.failed_writes),
>  			(u64)atomic64_read(&zram->stats.invalid_io),
> -			(u64)atomic64_read(&zram->stats.notify_free));
> +			(u64)atomic64_read(&zram->stats.notify_free),
> +			(u64)atomic64_read(&zram->stats.num_recompress));
>  	up_read(&zram->init_lock);
>  
>  	return ret;
> @@ -721,8 +722,10 @@ compress_again:
>  
>  		handle = zs_malloc(meta->mem_pool, clen,
>  				GFP_NOIO | __GFP_HIGHMEM);
> -		if (handle)
> +		if (handle) {
> +			atomic64_inc(&zram->stats.num_recompress);
>  			goto compress_again;
> +		}
>  
>  		pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
>  			index, clen);
> diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
> index 06b1636..78d7e8f 100644
> --- a/drivers/block/zram/zram_drv.h
> +++ b/drivers/block/zram/zram_drv.h
> @@ -85,6 +85,7 @@ struct zram_stats {
>  	atomic64_t zero_pages;		/* no. of zero filled pages */
>  	atomic64_t pages_stored;	/* no. of pages currently stored */
>  	atomic_long_t max_used_pages;	/* no. of maximum pages stored */
> +	atomic64_t num_recompress;	/* no. of failed compression fast paths */
>  };
>  
>  struct zram_meta {
> -- 
> 2.8.2
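One more note, to make the counted event concrete: the slow path the
new stat instruments is the zs_malloc() retry in zram_bvec_write().
Condensed, the flow is roughly the following (a sketch; the fast-path
GFP flags and helper names are illustrative, not the exact patch):

compress_again:
	/* fast path: a per-cpu stream is held, so we must not sleep */
	zstrm = zcomp_stream_get(zram->comp);
	ret = zcomp_compress(zstrm, uncmem, &clen);
	/* error handling of ret elided */
	handle = zs_malloc(meta->mem_pool, clen,
			   GFP_NOWAIT | __GFP_HIGHMEM);	/* assumed flags */
	if (!handle) {
		/* slow path: give the stream back so the allocator may sleep */
		zcomp_stream_put(zram->comp);
		handle = zs_malloc(meta->mem_pool, clen,
				   GFP_NOIO | __GFP_HIGHMEM);
		if (handle) {
			/* the stream buffer was given up, so the page must
			 * be compressed once more: that is the event */
			atomic64_inc(&zram->stats.num_recompress);
			goto compress_again;
		}
		pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
		       index, clen);
		return -ENOMEM;
	}
	/* ... store the compressed data under the handle ... */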