From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752489AbdDKFig (ORCPT );
	Tue, 11 Apr 2017 01:38:36 -0400
Received: from LGEAMRELO13.lge.com ([156.147.23.53]:39179 "EHLO
	lgeamrelo13.lge.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751685AbdDKFie (ORCPT );
	Tue, 11 Apr 2017 01:38:34 -0400
X-Original-SENDERIP: 156.147.1.126
X-Original-MAILFROM: minchan@kernel.org
X-Original-SENDERIP: 165.244.249.25
X-Original-MAILFROM: minchan@kernel.org
X-Original-SENDERIP: 10.177.223.161
X-Original-MAILFROM: minchan@kernel.org
Date: Tue, 11 Apr 2017 14:38:30 +0900
From: Minchan Kim <minchan@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: Re: [PATCH 0/5] zram clean up
Message-ID: <20170411053830.GA20340@bbox>
References: <1491196653-7388-1-git-send-email-minchan@kernel.org>
MIME-Version: 1.0
In-Reply-To: <1491196653-7388-1-git-send-email-minchan@kernel.org>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-MIMETrack: Itemize by SMTP Server on LGEKRMHUB03/LGE/LG Group
	(Release 8.5.3FP6|November 21, 2013) at 2017/04/11 14:38:30,
	Serialize by Router on LGEKRMHUB03/LGE/LG Group
	(Release 8.5.3FP6|November 21, 2013) at 2017/04/11 14:38:30,
	Serialize complete at 2017/04/11 14:38:30
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Andrew,

Could you drop this patchset? I will send an updated version, based on
Sergey's feedback, with some bug fixes:

zram-handle-multiple-pages-attached-bios-bvec.patch
zram-partial-io-refactoring.patch
zram-use-zram_slot_lock-instead-of-raw-bit_spin_lock-op.patch
zram-remove-zram_meta-structure.patch
zram-introduce-zram-data-accessor.patch

Thanks!

On Mon, Apr 03, 2017 at 02:17:28PM +0900, Minchan Kim wrote:
> This patchset aims at cleaning up zram.
>
> [1] cleans up the handling of bvecs with multiple pages attached.
> [2] cleans up partial IO handling.
> [3-5] clean up zram by introducing accessors and removing a pointless
>       structure.
>
> With [2-5] applied, we save a few hundred bytes of code and get a big
> readability improvement as well.
>
> This patchset is based on v4.11-rc4-mmotm-2017-03-30-16-31; please
> *drop* zram-factor-out-partial-io-routine.patch.
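
As an aside for reviewers: the point of [3] is just to hide the raw
bit_spin_lock calls behind named helpers, so the lock representation can
later be changed in one place. A minimal sketch of the idea, assuming the
existing ZRAM_ACCESS flag bit and the per-slot `value` word, and the table
layout once [4] has dropped the zram_meta indirection:

    #include <linux/bit_spinlock.h>

    /*
     * One bit of table[index].value doubles as a per-slot lock, so
     * callers use zram_slot_lock()/zram_slot_unlock() instead of
     * open-coding bit_spin_lock() on the flags word.
     */
    static void zram_slot_lock(struct zram *zram, u32 index)
    {
            bit_spin_lock(ZRAM_ACCESS, &zram->table[index].value);
    }

    static void zram_slot_unlock(struct zram *zram, u32 index)
    {
            bit_spin_unlock(ZRAM_ACCESS, &zram->table[index].value);
    }

Keeping the helpers static lets the compiler inline them, so the wrappers
cost nothing over the open-coded locking they replace.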
>
> x86: 708 bytes saved
>
> add/remove: 1/1 grow/shrink: 0/11 up/down: 478/-1186 (-708)
> function                      old     new   delta
> zram_special_page_read          -     478    +478
> zram_reset_device             317     314      -3
> mem_used_max_store            131     128      -3
> compact_store                  96      93      -3
> mm_stat_show                  203     197      -6
> zram_add                      719     712      -7
> zram_slot_free_notify         229     214     -15
> zram_make_request             819     803     -16
> zram_meta_free                128     111     -17
> zram_free_page                180     151     -29
> disksize_store                432     361     -71
> zram_decompress_page.isra     504       -    -504
> zram_bvec_rw                 2592    2080    -512
> Total: Before=25350773, After=25350065, chg -0.00%
>
> ppc64: 231 bytes saved
>
> add/remove: 2/0 grow/shrink: 1/9 up/down: 681/-912 (-231)
> function                      old     new   delta
> zram_special_page_read          -     480    +480
> zram_slot_lock                  -     200    +200
> vermagic                       39      40      +1
> mm_stat_show                  256     248      -8
> zram_meta_free                200     184     -16
> zram_add                      944     912     -32
> zram_free_page                348     308     -40
> disksize_store                572     492     -80
> zram_decompress_page          664     564    -100
> zram_slot_free_notify         292     160    -132
> zram_make_request            1132    1000    -132
> zram_bvec_rw                 2768    2396    -372
> Total: Before=17565825, After=17565594, chg -0.00%
>
> Minchan Kim (5):
>   [1] zram: handle multiple pages attached bio's bvec
>   [2] zram: partial IO refactoring
>   [3] zram: use zram_slot_lock instead of raw bit_spin_lock op
>   [4] zram: remove zram_meta structure
>   [5] zram: introduce zram data accessor
>
>  drivers/block/zram/zram_drv.c | 532 +++++++++++++++++++++---------------------
>  drivers/block/zram/zram_drv.h |   6 +-
>  2 files changed, 270 insertions(+), 268 deletions(-)
>
> --
> 2.7.4
>
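
For the curious, the per-function tables above are in the output format of
the kernel's scripts/bloat-o-meter tool. A comparison like this can be
reproduced by saving the image from a build without the series and diffing
it against a rebuilt one (the .before name is only a placeholder):

    $ cp vmlinux vmlinux.before                      # build without the series
    $ ./scripts/bloat-o-meter vmlinux.before vmlinux # after applying and rebuilding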