From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chandan Rajendra
To: Eric Biggers
Cc: linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-fscrypt@vger.kernel.org, tytso@mit.edu
Subject: Re: [RFC PATCH V2 07/11] fscrypt_zeroout_range: Encrypt all zeroed out blocks of a page
Date: Wed, 21 Feb 2018 15:27:24 +0530
In-Reply-To: <20180221011648.GD252219@gmail.com>
References: <20180212094347.22071-1-chandan@linux.vnet.ibm.com>
	<20180212094347.22071-8-chandan@linux.vnet.ibm.com>
	<20180221011648.GD252219@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
Message-Id: <5846743.5xSkMaRmik@localhost.localdomain>
Sender: linux-fsdevel-owner@vger.kernel.org
List-ID:

On Wednesday, February 21, 2018 6:46:48 AM IST Eric Biggers wrote:
> On Mon, Feb 12, 2018 at 03:13:43PM +0530, Chandan Rajendra wrote:
> > For block size < page size, this commit adds code to encrypt all zeroed
> > out blocks of a page.
> > 
> > Signed-off-by: Chandan Rajendra
> > ---
> >  fs/crypto/bio.c | 38 +++++++++++++++++++++++++-------------
> >  1 file changed, 25 insertions(+), 13 deletions(-)
> > 
> > diff --git a/fs/crypto/bio.c b/fs/crypto/bio.c
> > index 378df08..4d0d14f 100644
> > --- a/fs/crypto/bio.c
> > +++ b/fs/crypto/bio.c
> > @@ -104,10 +104,12 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
> >  {
> >  	struct fscrypt_ctx *ctx;
> >  	struct page *ciphertext_page = NULL;
> > +	unsigned int page_nr_blks;
> > +	unsigned int offset;
> > +	unsigned int page_io_len;
> >  	struct bio *bio;
> >  	int ret, err = 0;
> > -
> > -	BUG_ON(inode->i_sb->s_blocksize != PAGE_SIZE);
> > +	int i;
> >  
> >  	ctx = fscrypt_get_ctx(inode, GFP_NOFS);
> >  	if (IS_ERR(ctx))
> > @@ -119,12 +121,23 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
> >  		goto errout;
> >  	}
> >  
> > -	while (len--) {
> > -		err = fscrypt_do_block_crypto(inode, FS_ENCRYPT, lblk,
> > -					      ZERO_PAGE(0), ciphertext_page,
> > -					      PAGE_SIZE, 0, GFP_NOFS);
> > -		if (err)
> > -			goto errout;
> > +	page_nr_blks = 1 << (PAGE_SHIFT - inode->i_blkbits);
> > +
> > +	while (len) {
> > +		page_nr_blks = min_t(unsigned int, page_nr_blks, len);
> > +		page_io_len = page_nr_blks << inode->i_blkbits;
> > +		offset = 0;
> 
> The 'page_io_len' variable isn't needed, since 'offset == page_io_len' after
> the encryption loop. You can do 'bio_add_page(bio, ciphertext_page, offset, 0);'.
> 

You are right. I will fix that up in the next iteration of the patchset.

-- 
chandan
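
A rough sketch of how the per-page loop could look with 'page_io_len' dropped,
as suggested above. This is only illustrative, assuming the
fscrypt_zeroout_range() context from this patch (lblk, pblk, len,
ciphertext_page, errout, and the fscrypt_do_block_crypto() helper from this
series); the bio allocation and submission around it are elided:

	page_nr_blks = 1 << (PAGE_SHIFT - inode->i_blkbits);

	while (len) {
		page_nr_blks = min_t(unsigned int, page_nr_blks, len);

		/* Encrypt one page's worth of zeroed-out blocks. */
		offset = 0;
		for (i = 0; i < page_nr_blks; i++) {
			err = fscrypt_do_block_crypto(inode, FS_ENCRYPT, lblk,
						      ZERO_PAGE(0),
						      ciphertext_page,
						      inode->i_sb->s_blocksize,
						      offset, GFP_NOFS);
			if (err)
				goto errout;
			lblk++;
			offset += inode->i_sb->s_blocksize;
		}

		/*
		 * 'offset' now equals the number of bytes encrypted into
		 * ciphertext_page, so it can be passed directly as the bio
		 * segment length; no separate 'page_io_len' is needed.
		 */

		/* ... allocate and set up the bio as in the existing code ... */

		ret = bio_add_page(bio, ciphertext_page, offset, 0);

		/* ... submit the bio, then advance to the next page ... */

		pblk += page_nr_blks;
		len -= page_nr_blks;
	}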