From: kbuild test robot <lkp@intel.com>
To: kbuild-all@lists.01.org
Subject: Re: [PATCH RFC 3/3] ext4: Notify block device about fallocate(0)-assigned blocks
Date: Wed, 11 Dec 2019 09:02:53 +0800
Message-ID: <201912110841.P2uhRoib%lkp@intel.com> (raw)
In-Reply-To: <157599697948.12112.3846364542350011691.stgit@localhost.localdomain>


Hi Kirill,

[FYI, it's a private test report for your RFC patch.]
[auto build test ERROR on block/for-next]
[also build test ERROR on linus/master v5.5-rc1 next-20191210]
[cannot apply to ext4/dev]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify
the base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Kirill-Tkhai/block-ext4-Introduce-REQ_OP_ASSIGN_RANGE-to-reflect-extents-allocation-in-block-device-internals/20191211-073400
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
config: riscv-defconfig (attached as .config)
compiler: riscv64-linux-gcc (GCC) 7.5.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        GCC_VERSION=7.5.0 make.cross ARCH=riscv 

If you fix the issue, kindly add the following tag:
Reported-by: kbuild test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   fs/ext4/extents.c: In function 'ext4_ext_map_blocks':
>> fs/ext4/extents.c:4493:51: error: 'struct ext4_sb_info' has no member named 'fallocate'
     if ((flags & EXT4_GET_BLOCKS_SUBMIT_ALLOC) && sbi->fallocate) {
                                                      ^~

vim +4493 fs/ext4/extents.c

  4261	
  4262	
  4263	/*
  4264	 * Block allocation/map/preallocation routine for extents based files
  4265	 *
  4266	 *
  4267	 * Need to be called with
  4268	 * down_read(&EXT4_I(inode)->i_data_sem) if not allocating file system block
  4269	 * (i.e., create is zero). Otherwise down_write(&EXT4_I(inode)->i_data_sem)
  4270	 *
  4271	 * return > 0, number of blocks already mapped/allocated
  4272	 *          if create == 0 and these are pre-allocated blocks
  4273	 *          	buffer head is unmapped
  4274	 *          otherwise blocks are mapped
  4275	 *
  4276	 * return = 0, if plain look up failed (blocks have not been allocated)
  4277	 *          buffer head is unmapped
  4278	 *
  4279	 * return < 0, error case.
  4280	 */
  4281	int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
  4282				struct ext4_map_blocks *map, int flags)
  4283	{
  4284		struct ext4_ext_path *path = NULL;
  4285		struct ext4_extent newex, *ex, *ex2;
  4286		struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb);
  4287		ext4_fsblk_t newblock = 0;
  4288		int free_on_err = 0, err = 0, depth, ret;
  4289		unsigned int allocated = 0, offset = 0;
  4290		unsigned int allocated_clusters = 0;
  4291		struct ext4_allocation_request ar;
  4292		ext4_lblk_t cluster_offset;
  4293		bool map_from_cluster = false;
  4294	
  4295		ext_debug("blocks %u/%u requested for inode %lu\n",
  4296			  map->m_lblk, map->m_len, inode->i_ino);
  4297		trace_ext4_ext_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
  4298	
  4299		/* find extent for this block */
  4300		path = ext4_find_extent(inode, map->m_lblk, NULL, 0);
  4301		if (IS_ERR(path)) {
  4302			err = PTR_ERR(path);
  4303			path = NULL;
  4304			goto out2;
  4305		}
  4306	
  4307		depth = ext_depth(inode);
  4308	
  4309		/*
  4310		 * consistent leaf must not be empty;
  4311		 * this situation is possible, though, _during_ tree modification;
  4312		 * this is why assert can't be put in ext4_find_extent()
  4313		 */
  4314		if (unlikely(path[depth].p_ext == NULL && depth != 0)) {
  4315			EXT4_ERROR_INODE(inode, "bad extent address "
  4316					 "lblock: %lu, depth: %d pblock %lld",
  4317					 (unsigned long) map->m_lblk, depth,
  4318					 path[depth].p_block);
  4319			err = -EFSCORRUPTED;
  4320			goto out2;
  4321		}
  4322	
  4323		ex = path[depth].p_ext;
  4324		if (ex) {
  4325			ext4_lblk_t ee_block = le32_to_cpu(ex->ee_block);
  4326			ext4_fsblk_t ee_start = ext4_ext_pblock(ex);
  4327			unsigned short ee_len;
  4328	
  4329	
  4330			/*
  4331			 * unwritten extents are treated as holes, except that
  4332			 * we split out initialized portions during a write.
  4333			 */
  4334			ee_len = ext4_ext_get_actual_len(ex);
  4335	
  4336			trace_ext4_ext_show_extent(inode, ee_block, ee_start, ee_len);
  4337	
  4338			/* if found extent covers block, simply return it */
  4339			if (in_range(map->m_lblk, ee_block, ee_len)) {
  4340				newblock = map->m_lblk - ee_block + ee_start;
  4341				/* number of remaining blocks in the extent */
  4342				allocated = ee_len - (map->m_lblk - ee_block);
  4343				ext_debug("%u fit into %u:%d -> %llu\n", map->m_lblk,
  4344					  ee_block, ee_len, newblock);
  4345	
  4346				/*
  4347				 * If the extent is initialized check whether the
  4348				 * caller wants to convert it to unwritten.
  4349				 */
  4350				if ((!ext4_ext_is_unwritten(ex)) &&
  4351				    (flags & EXT4_GET_BLOCKS_CONVERT_UNWRITTEN)) {
  4352					allocated = convert_initialized_extent(
  4353							handle, inode, map, &path,
  4354							allocated);
  4355					goto out2;
  4356				} else if (!ext4_ext_is_unwritten(ex))
  4357					goto out;
  4358	
  4359				ret = ext4_ext_handle_unwritten_extents(
  4360					handle, inode, map, &path, flags,
  4361					allocated, newblock);
  4362				if (ret < 0)
  4363					err = ret;
  4364				else
  4365					allocated = ret;
  4366				goto out2;
  4367			}
  4368		}
  4369	
  4370		/*
  4371	 * requested block isn't allocated yet;
  4372	 * we cannot create blocks if the create flag is zero
  4373		 */
  4374		if ((flags & EXT4_GET_BLOCKS_CREATE) == 0) {
  4375			ext4_lblk_t hole_start, hole_len;
  4376	
  4377			hole_start = map->m_lblk;
  4378			hole_len = ext4_ext_determine_hole(inode, path, &hole_start);
  4379			/*
  4380			 * put just found gap into cache to speed up
  4381			 * subsequent requests
  4382			 */
  4383			ext4_ext_put_gap_in_cache(inode, hole_start, hole_len);
  4384	
  4385			/* Update hole_len to reflect hole size after map->m_lblk */
  4386			if (hole_start != map->m_lblk)
  4387				hole_len -= map->m_lblk - hole_start;
  4388			map->m_pblk = 0;
  4389			map->m_len = min_t(unsigned int, map->m_len, hole_len);
  4390	
  4391			goto out2;
  4392		}
  4393	
  4394		/*
  4395		 * Okay, we need to do block allocation.
  4396		 */
  4397		newex.ee_block = cpu_to_le32(map->m_lblk);
  4398		cluster_offset = EXT4_LBLK_COFF(sbi, map->m_lblk);
  4399	
  4400		/*
  4401		 * If we are doing bigalloc, check to see if the extent returned
  4402		 * by ext4_find_extent() implies a cluster we can use.
  4403		 */
  4404		if (cluster_offset && ex &&
  4405		    get_implied_cluster_alloc(inode->i_sb, map, ex, path)) {
  4406			ar.len = allocated = map->m_len;
  4407			newblock = map->m_pblk;
  4408			map_from_cluster = true;
  4409			goto got_allocated_blocks;
  4410		}
  4411	
  4412		/* find neighbour allocated blocks */
  4413		ar.lleft = map->m_lblk;
  4414		err = ext4_ext_search_left(inode, path, &ar.lleft, &ar.pleft);
  4415		if (err)
  4416			goto out2;
  4417		ar.lright = map->m_lblk;
  4418		ex2 = NULL;
  4419		err = ext4_ext_search_right(inode, path, &ar.lright, &ar.pright, &ex2);
  4420		if (err)
  4421			goto out2;
  4422	
  4423		/* Check if the extent after searching to the right implies a
  4424		 * cluster we can use. */
  4425		if ((sbi->s_cluster_ratio > 1) && ex2 &&
  4426		    get_implied_cluster_alloc(inode->i_sb, map, ex2, path)) {
  4427			ar.len = allocated = map->m_len;
  4428			newblock = map->m_pblk;
  4429			map_from_cluster = true;
  4430			goto got_allocated_blocks;
  4431		}
  4432	
  4433		/*
  4434		 * See if request is beyond maximum number of blocks we can have in
  4435		 * a single extent. For an initialized extent this limit is
  4436		 * EXT_INIT_MAX_LEN and for an unwritten extent this limit is
  4437		 * EXT_UNWRITTEN_MAX_LEN.
  4438		 */
  4439		if (map->m_len > EXT_INIT_MAX_LEN &&
  4440		    !(flags & EXT4_GET_BLOCKS_UNWRIT_EXT))
  4441			map->m_len = EXT_INIT_MAX_LEN;
  4442		else if (map->m_len > EXT_UNWRITTEN_MAX_LEN &&
  4443			 (flags & EXT4_GET_BLOCKS_UNWRIT_EXT))
  4444			map->m_len = EXT_UNWRITTEN_MAX_LEN;
  4445	
  4446		/* Check if we can really insert (m_lblk)::(m_lblk + m_len) extent */
  4447		newex.ee_len = cpu_to_le16(map->m_len);
  4448		err = ext4_ext_check_overlap(sbi, inode, &newex, path);
  4449		if (err)
  4450			allocated = ext4_ext_get_actual_len(&newex);
  4451		else
  4452			allocated = map->m_len;
  4453	
  4454		/* allocate new block */
  4455		ar.inode = inode;
  4456		ar.goal = ext4_ext_find_goal(inode, path, map->m_lblk);
  4457		ar.logical = map->m_lblk;
  4458		/*
  4459		 * We calculate the offset from the beginning of the cluster
  4460		 * for the logical block number, since when we allocate a
  4461		 * physical cluster, the physical block should start at the
  4462		 * same offset from the beginning of the cluster.  This is
  4463		 * needed so that future calls to get_implied_cluster_alloc()
  4464		 * work correctly.
  4465		 */
  4466		offset = EXT4_LBLK_COFF(sbi, map->m_lblk);
  4467		ar.len = EXT4_NUM_B2C(sbi, offset+allocated);
  4468		ar.goal -= offset;
  4469		ar.logical -= offset;
  4470		if (S_ISREG(inode->i_mode))
  4471			ar.flags = EXT4_MB_HINT_DATA;
  4472		else
  4473			/* disable in-core preallocation for non-regular files */
  4474			ar.flags = 0;
  4475		if (flags & EXT4_GET_BLOCKS_NO_NORMALIZE)
  4476			ar.flags |= EXT4_MB_HINT_NOPREALLOC;
  4477		if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE)
  4478			ar.flags |= EXT4_MB_DELALLOC_RESERVED;
  4479		if (flags & EXT4_GET_BLOCKS_METADATA_NOFAIL)
  4480			ar.flags |= EXT4_MB_USE_RESERVED;
  4481		newblock = ext4_mb_new_blocks(handle, &ar, &err);
  4482		if (!newblock)
  4483			goto out2;
  4484		ext_debug("allocate new block: goal %llu, found %llu/%u\n",
  4485			  ar.goal, newblock, allocated);
  4486		free_on_err = 1;
  4487		allocated_clusters = ar.len;
  4488		ar.len = EXT4_C2B(sbi, ar.len) - offset;
  4489		if (ar.len > allocated)
  4490			ar.len = allocated;
  4491	
  4492	got_allocated_blocks:
> 4493		if ((flags & EXT4_GET_BLOCKS_SUBMIT_ALLOC) && sbi->fallocate) {
  4494			err = sb_issue_assign_range(inode->i_sb, newblock,
  4495				EXT4_C2B(sbi, allocated_clusters), GFP_NOFS);
  4496			if (err)
  4497				goto free_on_err;
  4498		}
  4499	
  4500		/* try to insert new extent into found leaf and return */
  4501		ext4_ext_store_pblock(&newex, newblock + offset);
  4502		newex.ee_len = cpu_to_le16(ar.len);
  4503		/* Mark unwritten */
  4504		if (flags & EXT4_GET_BLOCKS_UNWRIT_EXT) {
  4505			ext4_ext_mark_unwritten(&newex);
  4506			map->m_flags |= EXT4_MAP_UNWRITTEN;
  4507		}
  4508	
  4509		err = 0;
  4510		if ((flags & EXT4_GET_BLOCKS_KEEP_SIZE) == 0)
  4511			err = check_eofblocks_fl(handle, inode, map->m_lblk,
  4512						 path, ar.len);
  4513		if (!err)
  4514			err = ext4_ext_insert_extent(handle, inode, &path,
  4515						     &newex, flags);
  4516	free_on_err:
  4517		if (err && free_on_err) {
  4518			int fb_flags = flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE ?
  4519				EXT4_FREE_BLOCKS_NO_QUOT_UPDATE : 0;
  4520			/* free data blocks we just allocated */
  4521			/* not a good idea to call discard here directly,
  4522			 * but otherwise we'd need to call it every free() */
  4523			ext4_discard_preallocations(inode);
  4524			ext4_free_blocks(handle, inode, NULL, newblock,
  4525					 EXT4_C2B(sbi, allocated_clusters), fb_flags);
  4526			goto out2;
  4527		}
  4528	
  4529		/* previous routine could use block we allocated */
  4530		newblock = ext4_ext_pblock(&newex);
  4531		allocated = ext4_ext_get_actual_len(&newex);
  4532		if (allocated > map->m_len)
  4533			allocated = map->m_len;
  4534		map->m_flags |= EXT4_MAP_NEW;
  4535	
  4536		/*
  4537		 * Reduce the reserved cluster count to reflect successful deferred
  4538		 * allocation of delayed allocated clusters or direct allocation of
  4539		 * clusters discovered to be delayed allocated.  Once allocated, a
  4540		 * cluster is not included in the reserved count.
  4541		 */
  4542		if (test_opt(inode->i_sb, DELALLOC) && !map_from_cluster) {
  4543			if (flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) {
  4544				/*
  4545				 * When allocating delayed allocated clusters, simply
  4546				 * reduce the reserved cluster count and claim quota
  4547				 */
  4548				ext4_da_update_reserve_space(inode, allocated_clusters,
  4549								1);
  4550			} else {
  4551				ext4_lblk_t lblk, len;
  4552				unsigned int n;
  4553	
  4554				/*
  4555				 * When allocating non-delayed allocated clusters
  4556				 * (from fallocate, filemap, DIO, or clusters
  4557				 * allocated when delalloc has been disabled by
  4558				 * ext4_nonda_switch), reduce the reserved cluster
  4559				 * count by the number of allocated clusters that
  4560				 * have previously been delayed allocated.  Quota
  4561				 * has been claimed by ext4_mb_new_blocks() above,
  4562				 * so release the quota reservations made for any
  4563				 * previously delayed allocated clusters.
  4564				 */
  4565				lblk = EXT4_LBLK_CMASK(sbi, map->m_lblk);
  4566				len = allocated_clusters << sbi->s_cluster_bits;
  4567				n = ext4_es_delayed_clu(inode, lblk, len);
  4568				if (n > 0)
  4569					ext4_da_update_reserve_space(inode, (int) n, 0);
  4570			}
  4571		}
  4572	
  4573		/*
  4574		 * Cache the extent and update transaction to commit on fdatasync only
  4575		 * when it is _not_ an unwritten extent.
  4576		 */
  4577		if ((flags & EXT4_GET_BLOCKS_UNWRIT_EXT) == 0)
  4578			ext4_update_inode_fsync_trans(handle, inode, 1);
  4579		else
  4580			ext4_update_inode_fsync_trans(handle, inode, 0);
  4581	out:
  4582		if (allocated > map->m_len)
  4583			allocated = map->m_len;
  4584		ext4_ext_show_leaf(inode, path);
  4585		map->m_flags |= EXT4_MAP_MAPPED;
  4586		map->m_pblk = newblock;
  4587		map->m_len = allocated;
  4588	out2:
  4589		ext4_ext_drop_refs(path);
  4590		kfree(path);
  4591	
  4592		trace_ext4_ext_map_blocks_exit(inode, flags, map,
  4593					       err ? err : allocated);
  4594		return err ? err : allocated;
  4595	}
  4596	

---
0-DAY kernel test infrastructure                 Open Source Technology Center
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org     Intel Corporation

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 18497 bytes --]


Thread overview: 23+ messages
2019-12-10 16:56 [PATCH RFC 0/3] block,ext4: Introduce REQ_OP_ASSIGN_RANGE to reflect extents allocation in block device internals Kirill Tkhai
2019-12-10 16:56 ` [PATCH RFC 1/3] block: Add support for REQ_OP_ASSIGN_RANGE operation Kirill Tkhai
2019-12-19  3:03   ` Martin K. Petersen
2019-12-19 11:07     ` Kirill Tkhai
2019-12-19 22:03       ` Chaitanya Kulkarni
2019-12-19 22:37       ` Martin K. Petersen
2019-12-20  1:53         ` Darrick J. Wong
2019-12-20  2:22           ` Martin K. Petersen
2019-12-20 11:55         ` Kirill Tkhai
2019-12-21 18:54           ` Martin K. Petersen
2019-12-23  8:51             ` Kirill Tkhai
2020-01-07  3:24               ` Martin K. Petersen
2020-01-07 13:59                 ` Kirill Tkhai
2020-01-08  2:49                   ` Martin K. Petersen
2020-01-09  9:43                     ` Kirill Tkhai
2019-12-10 16:56 ` [PATCH RFC 2/3] loop: Forward REQ_OP_ASSIGN_RANGE into fallocate(0) Kirill Tkhai
2019-12-10 16:56 ` [PATCH RFC 3/3] ext4: Notify block device about fallocate(0)-assigned blocks Kirill Tkhai
2019-12-11  1:02   ` kbuild test robot [this message]
2019-12-11 12:55   ` [PATCH RFC v2 " Kirill Tkhai
2019-12-15 15:35   ` [PATCH RFC " kbuild test robot
2019-12-11  7:42 ` [PATCH RFC 0/3] block,ext4: Introduce REQ_OP_ASSIGN_RANGE to reflect extents allocation in block device internals Chaitanya Kulkarni
2019-12-11  8:50   ` Kirill Tkhai
2019-12-17 14:16 ` Kirill Tkhai
