Subject: Re: [PATCH] zswap: Add shrink_enabled that can disable swap shrink to increase store performance
To: Dan Streetman
Cc: Seth Jennings, Linux-MM, linux-kernel
References: <1571990538-6133-1-git-send-email-teawaterz@linux.alibaba.com>
From: Hui Zhu
Message-ID: <42753fc6-e352-adcb-52c2-6b68472318f5@linux.alibaba.com>
Date: Mon, 11 Nov 2019 09:57:53 +0800

On 2019/11/9 12:04 AM, Dan Streetman wrote:
> On Fri, Oct 25, 2019 at 4:02 AM Hui Zhu wrote:
>>
>> zswap will try to shrink the pool when zswap is full.
>> This commit adds shrink_enabled, which can disable zswap shrink to
>> increase store performance.  Users can disable zswap shrink if they
>> care about store performance.
>
> I don't understand - if zswap is full it can't store any more pages
> without shrinking the current pool.  This commit will just force all
> pages to swap when zswap is full.  This has nothing to do with 'store
> performance'.
>
> I think it would be much better to remove any user option for this and
> implement some hysteresis; store pages normally until the zpool is
> full, then reject all pages going to that pool until there is some %
> free, at which point allow pages to be stored into the pool again.
> That will prevent (or at least reduce) the constant performance hit
> when a zpool fills up, and just fall back to normal swapping to disk
> until the zpool has some amount of free space again.
>

This idea is really cool!  Do you mind if I make a patch for it?  A rough
sketch of how I read the suggestion is at the end of this mail, after the
quoted patch.

Thanks,
Hui

>>
>> For example, in a VM with 1 CPU, 1G memory and 4G swap:
>> echo lz4 > /sys/module/zswap/parameters/compressor
>> echo z3fold > /sys/module/zswap/parameters/zpool
>> echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled
>> echo 1 > /sys/module/zswap/parameters/enabled
>> usemem -a -n 1 $((4000 * 1024 * 1024))
>> 4718592000 bytes / 114937822 usecs = 40091 KB/s
>> 101700 usecs to free memory
>> echo 0 > /sys/module/zswap/parameters/shrink_enabled
>> usemem -a -n 1 $((4000 * 1024 * 1024))
>> 4718592000 bytes / 8837320 usecs = 521425 KB/s
>> 129577 usecs to free memory
>>
>> The store speed increased when zswap shrink was disabled.
>>
>> Signed-off-by: Hui Zhu
>> ---
>>  mm/zswap.c | 7 +++++++
>>  1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index 46a3223..731e3d1e 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -114,6 +114,10 @@ static bool zswap_same_filled_pages_enabled = true;
>>  module_param_named(same_filled_pages_enabled, zswap_same_filled_pages_enabled,
>>  		   bool, 0644);
>>
>> +/* Enable/disable zswap shrink (enabled by default) */
>> +static bool zswap_shrink_enabled = true;
>> +module_param_named(shrink_enabled, zswap_shrink_enabled, bool, 0644);
>> +
>>  /*********************************
>>  * data structures
>>  **********************************/
>> @@ -947,6 +951,9 @@ static int zswap_shrink(void)
>>  	struct zswap_pool *pool;
>>  	int ret;
>>
>> +	if (!zswap_shrink_enabled)
>> +		return -EPERM;
>> +
>>  	pool = zswap_pool_last_get();
>>  	if (!pool)
>>  		return -ENOENT;
>> --
>> 2.7.4
>>
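
To make sure I understand the hysteresis you describe, here is a rough
sketch of what I have in mind.  zswap_accept_thr_percent,
zswap_pool_reached_full and zswap_can_accept are names I made up for
illustration, not existing code; zswap_pool_total_size,
zswap_max_pool_percent and totalram_pages() are the existing symbols.
zswap_frontswap_store() would call this instead of zswap_shrink() when
deciding whether to take a page:

/* Made-up knob: accept stores again once usage drops below this % of max. */
static unsigned int zswap_accept_thr_percent = 90;
static bool zswap_pool_reached_full;

static bool zswap_can_accept(void)
{
	u64 cur_pages = zswap_pool_total_size >> PAGE_SHIFT;
	u64 max_pages = (u64)totalram_pages() * zswap_max_pool_percent / 100;
	u64 thr_pages = max_pages * zswap_accept_thr_percent / 100;

	if (zswap_pool_reached_full) {
		/* Keep rejecting until usage falls below the accept threshold. */
		if (cur_pages > thr_pages)
			return false;
		zswap_pool_reached_full = false;
		return true;
	}

	if (cur_pages >= max_pages) {
		/* The pool just hit its limit: start rejecting new stores. */
		zswap_pool_reached_full = true;
		return false;
	}

	return true;
}

Pages rejected here would just fall back to regular swap, as you
suggest, until the pool drains below the threshold.  Is that roughly
what you mean?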