From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1754728AbcIEGC6 (ORCPT ); Mon, 5 Sep 2016 02:02:58 -0400
Received: from mail-it0-f46.google.com ([209.85.214.46]:34890 "EHLO mail-it0-f46.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750720AbcIEGCz (ORCPT ); Mon, 5 Sep 2016 02:02:55 -0400
MIME-Version: 1.0
In-Reply-To: <20160905055133.GA28514@bbox>
References: <1471854309-30414-1-git-send-email-zhuhui@xiaomi.com> <20160825060957.GA568@swordfish> <20160905021852.GB22701@bbox> <20160905055133.GA28514@bbox>
From: Hui Zhu
Date: Mon, 5 Sep 2016 14:02:14 +0800
Message-ID:
Subject: Re: [RFC 0/4] ZRAM: make it just store the high compression rate page
To: Minchan Kim
Cc: Sergey Senozhatsky , Hui Zhu , ngupta@vflare.org, Hugh Dickins , Steven Rostedt , Ingo Molnar , Peter Zijlstra , acme@kernel.org, alexander.shishkin@linux.intel.com, Andrew Morton , mhocko@suse.com, hannes@cmpxchg.org, mgorman@techsingularity.net, vbabka@suse.cz, redkoi@virtuozzo.com, luto@kernel.org, kirill.shutemov@linux.intel.com, geliangtang@163.com, baiyaowei@cmss.chinamobile.com, dan.j.williams@intel.com, vdavydov@virtuozzo.com, aarcange@redhat.com, dvlasenk@redhat.com, jmarchan@redhat.com, koct9i@gmail.com, yang.shi@linaro.org, dave.hansen@linux.intel.com, vkuznets@redhat.com, vitalywool@gmail.com, ross.zwisler@linux.intel.com, Thomas Gleixner , kwapulinski.piotr@gmail.com, axboe@fb.com, mchristi@redhat.com, Joe Perches , namit@vmware.com, Rik van Riel , "linux-kernel@vger.kernel.org" , Linux Memory Management List
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 5, 2016 at 1:51 PM, Minchan Kim wrote:
> On Mon, Sep 05, 2016 at 01:12:05PM +0800, Hui Zhu wrote:
>> On Mon, Sep 5, 2016 at 10:18 AM, Minchan Kim wrote:
>> > On Thu, Aug 25, 2016 at 04:25:30PM +0800, Hui Zhu wrote:
>> >> On Thu, Aug 25, 2016 at 2:09 PM, Sergey Senozhatsky wrote:
>> >> > Hello,
>> >> >
>> >> > On (08/22/16 16:25), Hui Zhu wrote:
>> >> >>
>> >> >> Currently ZRAM stores every page, even if the compression rate
>> >> >> of a page is really low, so the overall compression rate of ZRAM
>> >> >> is out of control while it is running.
>> >> >> For my part, I ran some tests and recorded the results with ZRAM.
>> >> >> The compression rate is about 40%.
>> >> >>
>> >> >> This series of patches makes ZRAM store only the pages whose
>> >> >> compressed size is smaller than a given value.
>> >> >> With these patches, I set the value to 2048 and ran the same test
>> >> >> as before. The compression rate is about 20%. The number of
>> >> >> lowmemorykiller invocations also decreased.
>> >> >
>> >> > I haven't looked at the patches in detail yet. Can you educate me
>> >> > a bit? Is your test stable? Why has the number of lowmemorykill-s
>> >> > decreased? ... or am I reading "The times of lowmemorykiller also
>> >> > decreased" wrong?
>> >> >
>> >> > Suppose you have X pages that result in a bad compressed size
>> >> > (from the zram point of view). zram stores such pages
>> >> > uncompressed, IOW we have no memory savings -- the swapped-out
>> >> > page lands in the zsmalloc PAGE_SIZE class. Now you don't try to
>> >> > store those pages in zsmalloc, but keep them as unevictable. So
>> >> > the page still occupies PAGE_SIZE; no memory saving again. Why
>> >> > did it improve LMK?
>> >>
>> >> No, with these patches zram will not store this page uncompressed.
>> >> It will mark it as non-swap and kick it back to shrink_page_list().
>> >> shrink_page_list() will remove this page from the swap cache and
>> >> move it to the unevictable list.
>> >> Then this page will not be swapped out again until it is written to.
>> >> That is why most of the code is around vmscan.c.
>> >
>> > If I understand Sergey's point right, he means there is no gain
>> > in memory saving between before and after.
>> >
>> > With your approach, you can prevent unnecessary pageout (i.e.,
>> > swap-out of uncompressible pages), but it doesn't mean you save
>> > memory compared to the old behavior, so why does your patch
>> > decrease the number of lowmemory killings?
>> >
>> > A thing I can imagine is that without this feature, zram could be
>> > full of uncompressible pages, so good-compressible pages cannot be
>> > swapped out. Hui, is this scenario right for your case?
>> >
>>
>> That is one reason. But it is not the principal one.
>>
>> Another reason is that when swap is putting pages into zram, what
>> the system wants is to get memory back.
>> So the deal is that the system spends CPU time and memory to get
>> memory. If zram accepts only the high compression rate pages, the
>> system can get more memory back for the same amount of memory spent.
>> It will pull the system out of low-memory status earlier. (Maybe
>> more CPU time, because of the compression rate checks. But maybe
>> less, because fewer pages need to be compressed. That is the
>> interesting part. :)
>> I think that is why the lmk count decreases.
>>
>> And yes, all of this depends on the number of high compression rate
>> pages. So you cannot just set a non_swap limit on the system and get
>> everything. You need to do a lot of testing around it to make sure
>> the non_swap limit is good for your system.
>>
>> And I think using AOP_WRITEPAGE_ACTIVATE without kicking the page to
>> a special list will make the CPU too busy sometimes.
>
> Yes, and it would be the same with your patch if the new write that
> arrives on a CoWed page is uncompressible data.
>
>> I did some tests before I kicked pages to a special list. The shrink
>> task
>
> What kinds of tests? Could you elaborate a bit more?
> "shrink task". What does it mean?
>

Sorry for this part. It should be the function shrink_page_list().

I will do more tests on that and post the patch later.

Thanks,
Hui

>> will be moved around, around and around, because the low compression
>> rate pages just move from one list to another a lot of times, again,
>> again and again.
>> And all these low compression rate pages always stay together.
>
> I cannot understand it without a more detailed description. :(
> Could you explain more?