Subject: Re: [f2fs-dev] [PATCH v2 1/2] f2fs: compress: introduce page array slab cache
From: Chao Yu
To: Jaegeuk Kim
Cc: linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net
Date: Tue, 29 Sep 2020 17:24:33 +0800
Message-ID: <5872f50c-4f3c-84bb-636f-6a6bd748c25f@huawei.com>
In-Reply-To: <1b9774da-b2a8-2009-7796-9c576af1b4c4@huawei.com>
References: <20200914090514.50102-1-yuchao0@huawei.com>
 <20200929082306.GA1567825@google.com>
 <6e7639db-9120-d406-0a46-ec841845bb28@huawei.com>
 <20200929084739.GB1567825@google.com>
 <1b9774da-b2a8-2009-7796-9c576af1b4c4@huawei.com>

On 2020/9/29 17:15, Chao Yu wrote:
> On 2020/9/29 16:47, Jaegeuk Kim wrote:
>> On 09/29, Chao Yu wrote:
>>> On 2020/9/29 16:23, Jaegeuk Kim wrote:
>>>> I found a bug related to the number of page pointers allocated for
>>>> nr_cpages.
>>>
>>> Jaegeuk,
>>>
>>> If I didn't miss anything, you mean that nr_cpages could be larger
>>> than nr_rpages, right? The problematic case here is lzo/lzo-rle:
>>>
>>> 	cc->clen = lzo1x_worst_compress(PAGE_SIZE << cc->log_cluster_size);
>>>
>>> As we can't limit clen there the way we do for lz4/zstd:
>>>
>>> 	cc->clen = cc->rlen - PAGE_SIZE - COMPRESS_HEADER_SIZE;
>>
>> Yes, I've seen some memory corruption in the lzo test. Here is another
>> patch to fix the memory leak.
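
As a rough illustration of why the lzo/lzo-rle bound overshoots the cluster (this sketch is not part of the thread or the patch; it assumes a 4 KiB page, a 16-page cluster, and the lzo1x_worst_compress() bound from include/linux/lzo.h), the worst case needs more cpages than the cluster has rpages:

/* standalone sketch; DIV_ROUND_UP mirrors the kernel macro */
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define lzo1x_worst_compress(x)	((x) + ((x) / 16) + 64 + 3)

int main(void)
{
	unsigned long cluster_size = 16;			/* nr_rpages */
	unsigned long rlen = PAGE_SIZE * cluster_size;		/* 64 KiB of raw data */
	unsigned long clen = lzo1x_worst_compress(rlen);	/* 69699 bytes */
	unsigned long nr_cpages = DIV_ROUND_UP(clen, PAGE_SIZE);

	/* prints nr_cpages=18, two more than nr_rpages, before even
	 * counting COMPRESS_HEADER_SIZE */
	printf("nr_rpages=%lu nr_cpages=%lu\n", cluster_size, nr_cpages);
	return 0;
}

So a page-pointer array sized for cluster_size entries is too small for the lzo/lzo-rle cpages array, which is what the patch below addresses by sizing the allocation per request.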
>>
>> Signed-off-by: Jaegeuk Kim
>> ---
>>  fs/f2fs/compress.c | 67 ++++++++++++++++++++++++++++------------------
>>  1 file changed, 41 insertions(+), 26 deletions(-)
>>
>> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
>> index f086ac43ca825..ba2d4897744d8 100644
>> --- a/fs/f2fs/compress.c
>> +++ b/fs/f2fs/compress.c
>> @@ -20,22 +20,20 @@
>>  static struct kmem_cache *cic_entry_slab;
>>  static struct kmem_cache *dic_entry_slab;
>>
>> -static void *page_array_alloc(struct inode *inode)
>> +static void *page_array_alloc(struct inode *inode, int nr)
>>  {
>>  	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
>> -	unsigned int size = sizeof(struct page *) <<
>> -				F2FS_I(inode)->i_log_cluster_size;
>> +	unsigned int size = sizeof(struct page *) * nr;
>>
>>  	if (likely(size == sbi->page_array_slab_size))
>>  		return kmem_cache_zalloc(sbi->page_array_slab, GFP_NOFS);
>>  	return f2fs_kzalloc(sbi, size, GFP_NOFS);
>>  }
>>
>> -static void page_array_free(struct inode *inode, void *pages)
>> +static void page_array_free(struct inode *inode, void *pages, int nr)
>>  {
>>  	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
>> -	unsigned int size = sizeof(struct page *) <<
>> -				F2FS_I(inode)->i_log_cluster_size;
>> +	unsigned int size = sizeof(struct page *) * nr;
>>
>>  	if (!pages)
>>  		return;
>> @@ -162,13 +160,13 @@ int f2fs_init_compress_ctx(struct compress_ctx *cc)
>>  	if (cc->rpages)
>>  		return 0;
>>
>> -	cc->rpages = page_array_alloc(cc->inode);
>> +	cc->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>>  	return cc->rpages ? 0 : -ENOMEM;
>>  }
>>
>>  void f2fs_destroy_compress_ctx(struct compress_ctx *cc)
>>  {
>> -	page_array_free(cc->inode, cc->rpages);
>> +	page_array_free(cc->inode, cc->rpages, cc->cluster_size);
>>  	cc->rpages = NULL;
>>  	cc->nr_rpages = 0;
>>  	cc->nr_cpages = 0;
>> @@ -602,7 +600,8 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>>  	struct f2fs_inode_info *fi = F2FS_I(cc->inode);
>>  	const struct f2fs_compress_ops *cops =
>>  		f2fs_cops[fi->i_compress_algorithm];
>> -	unsigned int max_len, nr_cpages;
>> +	unsigned int max_len, new_nr_cpages;
>> +	struct page **new_cpages;
>>  	int i, ret;
>>
>>  	trace_f2fs_compress_pages_start(cc->inode, cc->cluster_idx,
>> @@ -617,7 +616,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>>  	max_len = COMPRESS_HEADER_SIZE + cc->clen;
>>  	cc->nr_cpages = DIV_ROUND_UP(max_len, PAGE_SIZE);
>>
>> -	cc->cpages = page_array_alloc(cc->inode);
>> +	cc->cpages = page_array_alloc(cc->inode, cc->nr_cpages);
>
> Well, cc->nr_cpages will be set to cc->nr_rpages - 1 for the zstd/lz4 cases,
> so this will make the cpages allocation fall back to kmalloc, which can
> cause more memory use.
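
To make the quoted concern concrete (editor's arithmetic, not from the thread; it assumes a 16-page cluster and 8-byte page pointers), the slab is sized for exactly cluster_size pointers, so the smaller lz4/zstd cpages request misses it and drops to f2fs_kzalloc():

#include <stdio.h>

int main(void)
{
	unsigned long ptr = sizeof(void *);		/* stands in for sizeof(struct page *) */
	unsigned long cluster_size = 16;
	unsigned long slab_size = ptr * cluster_size;	/* sbi->page_array_slab_size, 128 bytes here */
	unsigned long nr_cpages = cluster_size - 1;	/* lz4/zstd: clen is capped below rlen */
	unsigned long req = ptr * nr_cpages;		/* 120 bytes */

	printf("%s\n", req == slab_size ?
	       "served from page_array_slab" : "falls back to f2fs_kzalloc");
	return 0;
}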
Could we handle the cpages allocation for lzo/lzo-rle separately, e.g.:

	force_xxx = is_lzo/lzo-rle_algorithm && is_cpages_array_allocation

	page_array_alloc(, force_kmalloc)
	page_array_free(, force_kfree)

(a rough sketch of this idea is appended after the quoted patch below)

Thanks,

>
> Thanks,
>
>>  	if (!cc->cpages) {
>>  		ret = -ENOMEM;
>>  		goto destroy_compress_ctx;
>> @@ -659,16 +658,28 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>>  	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
>>  		cc->cbuf->reserved[i] = cpu_to_le32(0);
>>
>> -	nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
>> +	new_nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
>> +
>> +	/* Now we're going to cut unnecessary tail pages */
>> +	new_cpages = page_array_alloc(cc->inode, new_nr_cpages);
>> +	if (!new_cpages) {
>> +		ret = -ENOMEM;
>> +		goto out_vunmap_cbuf;
>> +	}
>>
>>  	/* zero out any unused part of the last page */
>>  	memset(&cc->cbuf->cdata[cc->clen], 0,
>> -			(nr_cpages * PAGE_SIZE) - (cc->clen + COMPRESS_HEADER_SIZE));
>> +			(new_nr_cpages * PAGE_SIZE) -
>> +			(cc->clen + COMPRESS_HEADER_SIZE));
>>
>>  	vm_unmap_ram(cc->cbuf, cc->nr_cpages);
>>  	vm_unmap_ram(cc->rbuf, cc->cluster_size);
>>
>> -	for (i = nr_cpages; i < cc->nr_cpages; i++) {
>> +	for (i = 0; i < cc->nr_cpages; i++) {
>> +		if (i < new_nr_cpages) {
>> +			new_cpages[i] = cc->cpages[i];
>> +			continue;
>> +		}
>>  		f2fs_compress_free_page(cc->cpages[i]);
>>  		cc->cpages[i] = NULL;
>>  	}
>> @@ -676,7 +687,9 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>>  	if (cops->destroy_compress_ctx)
>>  		cops->destroy_compress_ctx(cc);
>>
>> -	cc->nr_cpages = nr_cpages;
>> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>> +	cc->cpages = new_cpages;
>> +	cc->nr_cpages = new_nr_cpages;
>>
>>  	trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx,
>>  							cc->clen, ret);
>> @@ -691,7 +704,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
>>  		if (cc->cpages[i])
>>  			f2fs_compress_free_page(cc->cpages[i]);
>>  	}
>> -	page_array_free(cc->inode, cc->cpages);
>> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>>  	cc->cpages = NULL;
>>  destroy_compress_ctx:
>>  	if (cops->destroy_compress_ctx)
>> @@ -730,7 +743,7 @@ void f2fs_decompress_pages(struct bio *bio, struct page *page, bool verity)
>>  		goto out_free_dic;
>>  	}
>>
>> -	dic->tpages = page_array_alloc(dic->inode);
>> +	dic->tpages = page_array_alloc(dic->inode, dic->cluster_size);
>>  	if (!dic->tpages) {
>>  		ret = -ENOMEM;
>>  		goto out_free_dic;
>> @@ -1203,7 +1216,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>>  	cic->magic = F2FS_COMPRESSED_PAGE_MAGIC;
>>  	cic->inode = inode;
>>  	atomic_set(&cic->pending_pages, cc->nr_cpages);
>> -	cic->rpages = page_array_alloc(cc->inode);
>> +	cic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>>  	if (!cic->rpages)
>>  		goto out_put_cic;
>>
>> @@ -1297,11 +1310,13 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>>  	spin_unlock(&fi->i_size_lock);
>>
>>  	f2fs_put_rpages(cc);
>> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>> +	cc->cpages = NULL;
>>  	f2fs_destroy_compress_ctx(cc);
>>  	return 0;
>>
>>  out_destroy_crypt:
>> -	page_array_free(cc->inode, cic->rpages);
>> +	page_array_free(cc->inode, cic->rpages, cc->cluster_size);
>>
>>  	for (--i; i >= 0; i--)
>>  		fscrypt_finalize_bounce_page(&cc->cpages[i]);
>> @@ -1310,6 +1325,8 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
>>  			continue;
>>  		f2fs_put_page(cc->cpages[i], 1);
>>  	}
>> +	page_array_free(cc->inode, cc->cpages, cc->nr_cpages);
>> +	cc->cpages = NULL;
>>  out_put_cic:
>>  	kmem_cache_free(cic_entry_slab, cic);
>>  out_put_dnode:
>> @@ -1345,7 +1362,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
>>  		end_page_writeback(cic->rpages[i]);
>>  	}
>>
>> -	page_array_free(cic->inode, cic->rpages);
>> +	page_array_free(cic->inode, cic->rpages, cic->nr_rpages);
>>  	kmem_cache_free(cic_entry_slab, cic);
>>  }
>>
>> @@ -1442,8 +1459,6 @@ int f2fs_write_multi_pages(struct compress_ctx *cc,
>>
>>  		err = f2fs_write_compressed_pages(cc, submitted,
>>  							wbc, io_type);
>> -		page_array_free(cc->inode, cc->cpages);
>> -		cc->cpages = NULL;
>>  		if (!err)
>>  			return 0;
>>  		f2fs_bug_on(F2FS_I_SB(cc->inode), err != -EAGAIN);
>> @@ -1468,7 +1483,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
>>  	if (!dic)
>>  		return ERR_PTR(-ENOMEM);
>>
>> -	dic->rpages = page_array_alloc(cc->inode);
>> +	dic->rpages = page_array_alloc(cc->inode, cc->cluster_size);
>>  	if (!dic->rpages) {
>>  		kmem_cache_free(dic_entry_slab, dic);
>>  		return ERR_PTR(-ENOMEM);
>> @@ -1487,7 +1502,7 @@ struct decompress_io_ctx *f2fs_alloc_dic(struct compress_ctx *cc)
>>  		dic->rpages[i] = cc->rpages[i];
>>  	dic->nr_rpages = cc->cluster_size;
>>
>> -	dic->cpages = page_array_alloc(dic->inode);
>> +	dic->cpages = page_array_alloc(dic->inode, dic->nr_cpages);
>>  	if (!dic->cpages)
>>  		goto out_free;
>>
>> @@ -1522,7 +1537,7 @@ void f2fs_free_dic(struct decompress_io_ctx *dic)
>>  				continue;
>>  			f2fs_compress_free_page(dic->tpages[i]);
>>  		}
>> -		page_array_free(dic->inode, dic->tpages);
>> +		page_array_free(dic->inode, dic->tpages, dic->cluster_size);
>>  	}
>>
>>  	if (dic->cpages) {
>> @@ -1531,10 +1546,10 @@ void f2fs_free_dic(struct decompress_io_ctx *dic)
>>  				continue;
>>  			f2fs_compress_free_page(dic->cpages[i]);
>>  		}
>> -		page_array_free(dic->inode, dic->cpages);
>> +		page_array_free(dic->inode, dic->cpages, dic->nr_cpages);
>>  	}
>>
>> -	page_array_free(dic->inode, dic->rpages);
>> +	page_array_free(dic->inode, dic->rpages, dic->nr_rpages);
>>  	kmem_cache_free(dic_entry_slab, dic);
>>  }
>>
>>
>
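
A rough sketch of the allocation split suggested earlier in this reply (editor's rendering in the style of the quoted patch, not an actual f2fs change; the bypass_slab parameter, its callers, and the kfree() in the non-slab free path are assumptions): callers allocating an lzo/lzo-rle cpages array would pass bypass_slab = true so both helpers skip the fixed-size slab and go straight to the generic allocator.

static void *page_array_alloc(struct inode *inode, int nr, bool bypass_slab)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	unsigned int size = sizeof(struct page *) * nr;

	/* lzo/lzo-rle cpages arrays never match the slab size, so skip it */
	if (!bypass_slab && likely(size == sbi->page_array_slab_size))
		return kmem_cache_zalloc(sbi->page_array_slab, GFP_NOFS);
	return f2fs_kzalloc(sbi, size, GFP_NOFS);
}

static void page_array_free(struct inode *inode, void *pages, int nr,
							bool bypass_slab)
{
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
	unsigned int size = sizeof(struct page *) * nr;

	if (!pages)
		return;

	if (!bypass_slab && size == sbi->page_array_slab_size)
		kmem_cache_free(sbi->page_array_slab, pages);
	else
		kfree(pages);	/* pairs with f2fs_kzalloc() above */
}

This would keep rpages/tpages arrays (always cluster_size entries) on the slab while letting the variable-size cpages arrays use kmalloc, rather than resizing the slab or tracking sizes at every call site.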