Subject: Re: [PATCH] zsmalloc: do not use bit_spin_lock
To: Vitaly Wool, Song Bao Hua (Barry Song)
Cc: Shakeel Butt, Minchan Kim, Mike Galbraith, LKML, linux-mm,
 Sebastian Andrzej Siewior, Nitin Gupta, Sergey Senozhatsky,
 Andrew Morton, tiantao (H)
From: "tiantao (H)"
Message-ID: <4e686c73-b453-e714-021a-1fcd0a565984@huawei.com>
Date: Wed, 23 Dec 2020 20:44:04 +0800

On 2020/12/23 8:11, Vitaly Wool wrote:
> On Tue, 22 Dec 2020, 22:06 Song Bao Hua (Barry Song) wrote:
>>
>>> -----Original Message-----
>>> From: Vitaly Wool
>>> Sent: Tuesday, December 22, 2020 10:44 PM
>>> To: Song Bao Hua (Barry Song)
>>> Subject: Re: [PATCH] zsmalloc: do not use bit_spin_lock
>>>
>>> On Tue, 22 Dec 2020, 03:11 Song Bao Hua (Barry Song) wrote:
>>>>
>>>>> -----Original Message-----
>>>>> From: Song Bao Hua (Barry Song)
>>>>> Sent: Tuesday, December 22, 2020 3:03 PM
>>>>> To: 'Vitaly Wool'
>>>>> Subject: RE: [PATCH] zsmalloc: do not use bit_spin_lock
>>>>>
>>>>>> I'm still not convinced. Will kmap what, src? At this point src
>>>>>> might become just a bogus pointer.
>>>>>
>>>>> As long as the memory is still there, we can kmap it by its page
>>>>> struct. But if it is not there anymore, we have no way.
>>>>>
>>>>>> Why couldn't the object have been moved somewhere else (due to
>>>>>> the compaction mechanism for instance) at the time DMA kicks in?
>>>>>
>>>>> So zs_map_object() will guarantee the src won't be moved by holding
>>>>> those preemption-disabled locks?
>>>>> If so, it seems we have to drop the MOVABLE gfp in zswap for the
>>>>> zsmalloc case?
>>>>>
>>>> Or we can do get_page() to avoid the movement of the page.
>>>
>>> I would like to discuss this more in the zswap context than in
>>> zsmalloc's. Since zsmalloc does not implement a reclaim callback,
>>> using it in zswap is a corner case anyway.
>>
>> I see. But it seems we still need a solution for the compatibility
>> of zsmalloc and zswap? This will require a change in either zsmalloc
>> or zswap.
>> Or do you want to make zswap depend on !ZSMALLOC?
>
> No, I really don't think we should go that far. What if we add a flag
> to zpool, named like "can_sleep_mapped", and have it set for
> zbud/z3fold?
> Then zswap could go the current path if the flag is set; and if it's
> not set, and mutex_trylock fails, copy data from src to a temporary
> buffer, then unmap the handle, take the mutex, process the buffer
> instead of src. Not the nicest thing to do but at least it won't break
> anything.

I wrote the following patch according to your idea; what do you think?

--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1235,7 +1235,7 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 	struct zswap_entry *entry;
 	struct scatterlist input, output;
 	struct crypto_acomp_ctx *acomp_ctx;
-	u8 *src, *dst;
+	u8 *src, *dst, *tmp;
 	unsigned int dlen;
 	int ret;
@@ -1262,16 +1262,26 @@ static int zswap_frontswap_load(unsigned type, pgoff_t offset,
 	if (zpool_evictable(entry->pool->zpool))
 		src += sizeof(struct zswap_header);

+	if (!zpool_can_sleep_mapped(entry->pool->zpool) &&
+	    !mutex_trylock(acomp_ctx->mutex)) {
+		tmp = kmemdup(src, entry->length, GFP_ATOMIC);
+		zpool_unmap_handle(entry->pool->zpool, entry->handle); ???
+		if (!tmp)
+			goto freeentry;
+	}
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
 	mutex_lock(acomp_ctx->mutex);
-	sg_init_one(&input, src, entry->length);
+	sg_init_one(&input, zpool_can_sleep_mapped(entry->pool->zpool) ?
+		    src : tmp, entry->length);
 	sg_init_table(&output, 1);
 	sg_set_page(&output, page, PAGE_SIZE, 0);
 	acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
 	ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
 	mutex_unlock(acomp_ctx->mutex);
-	zpool_unmap_handle(entry->pool->zpool, entry->handle);
+	if (zpool_can_sleep_mapped(entry->pool->zpool))
+		zpool_unmap_handle(entry->pool->zpool, entry->handle);
+	else
+		kfree(tmp);
+
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -440,6 +440,7 @@ static u64 zs_zpool_total_size(void *pool)
 static struct zpool_driver zs_zpool_driver = {
 	.type =                   "zsmalloc",
+	.sleep_mapped =           false,
 	.owner =                  THIS_MODULE,
 	.create =                 zs_zpool_create,
 	.destroy =                zs_zpool_destroy,

>
> ~Vitaly
>
>>> zswap, on the other hand, may be dealing with some new backends in
>>> the future which have more chances to become mainstream. Imagine
>>> typical NUMA-like cases, i.e. a zswap pool allocated in some kind of
>>> SRAM, or in unused video memory. In such a case, if you try to use a
>>> pointer to an invalidated zpool mapping, you are on the way to
>>> thrashing the system. So: no assumptions that the zswap pool is in
>>> regular linear RAM should be made.
>>>
>>> ~Vitaly
>>
>> Thanks
>> Barry