On Mon, Apr 03, 2017 at 06:09:33PM +0200, Max Reitz wrote:
> +    BDRVQcow2State *s = bs->opaque;
> +    uint64_t aligned_cur_size = align_offset(current_size, cluster_size);
> +    uint64_t creftable_length;
> +    uint64_t i;
> +
> +    /* new total size of L2 tables */
> +    nl2e = aligned_total_size / cluster_size;
> +    nl2e = align_offset(nl2e, cluster_size / sizeof(uint64_t));
> +    meta_size += nl2e * sizeof(uint64_t);
> +
> +    /* Subtract L2 tables which are already present */
> +    for (i = 0; i < s->l1_size; i++) {
> +        if (s->l1_table[i] & L1E_OFFSET_MASK) {
> +            meta_size -= cluster_size;
> +        }
> +    }

Allocated L1 table entries are 'A', unallocated L1 table entries in the
existing image are '-', and new L1 table entries after growth are '+':

  |-A-AAA--+++++|

This for loop includes '-' entries.  Should we count them or just the
'+' entries?

>
> -    /* total size of refcount tables */
> -    nreftablee = nrefblocke / refblock_size;
> -    nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
> -    meta_size += nreftablee * sizeof(uint64_t);
> +    /* Do not add L1 table size because the only caller of this path
> +     * (qcow2_truncate) has increased its size already. */
>
> -    return aligned_total_size + meta_size;
> +    /* Calculate size of the additional refblocks (this assumes that all of
> +     * the existing image is covered by refblocks, which is extremely
> +     * likely); this may result in overallocation because parts of the newly
> +     * added space may be covered by existing refblocks, but that is fine.
> +     *
> +     * This only considers the newly added space.  Since we cannot update the
> +     * reftable in-place, we will have to be able to store both the old and
> +     * the new one at the same time, though.  Therefore, we need to add the
> +     * size of the old reftable here.
> +     */
> +    creftable_length = ROUND_UP(s->refcount_table_size * sizeof(uint64_t),
> +                                cluster_size);
> +    nrefblocke = ((aligned_total_size - aligned_cur_size) + meta_size +
> +                  creftable_length + cluster_size)
> +               / (cluster_size - rces -
> +                  rces * sizeof(uint64_t) / cluster_size);
> +    meta_size += DIV_ROUND_UP(nrefblocke, refblock_size) * cluster_size;
> +
> +    /* total size of the new refcount table (again, may be too much because
> +     * it assumes that the new area is not covered by any refcount blocks
> +     * yet) */
> +    nreftablee = s->max_refcount_table_index + 1 +
> +                 nrefblocke / refblock_size;
> +    nreftablee = align_offset(nreftablee, cluster_size / sizeof(uint64_t));
> +    meta_size += nreftablee * sizeof(uint64_t);
> +
> +    return (aligned_total_size - aligned_cur_size) + meta_size;

This calculation is an approximation.  Here is a simpler approximation:

  current_usage = qcow2_calc_size_usage(current_size, ...);
  new_usage = qcow2_calc_size_usage(new_size, ...);
  delta = new_usage - current_usage;

(Perhaps the new refcount_table clusters need to be added to new_size
too.)

Is there an advantage to the more elaborate calculation implemented in
this patch?

Stefan
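To make the suggestion above concrete, here is a rough sketch of the
delta approach.  The helper name follows the qcow2_calc_size_usage()
call above; its exact parameter list, and the calc_prealloc_delta()
wrapper itself, are hypothetical:

  #include <stdint.h>

  /* Assumed helper: total file size needed for a fully preallocated
   * image of the given guest size, including all metadata.  The name
   * comes from the pseudocode above; the signature is an assumption. */
  uint64_t qcow2_calc_size_usage(uint64_t guest_size, int cluster_bits,
                                 int refcount_order);

  /* Bytes that preallocating a grown image should require: usage at
   * the new size minus usage at the current size. */
  uint64_t calc_prealloc_delta(uint64_t current_size, uint64_t new_size,
                               int cluster_bits, int refcount_order)
  {
      uint64_t current_usage = qcow2_calc_size_usage(current_size,
                                                     cluster_bits,
                                                     refcount_order);
      uint64_t new_usage = qcow2_calc_size_usage(new_size,
                                                 cluster_bits,
                                                 refcount_order);

      /* As noted above, the reftable cannot be updated in place, so
       * the old and new reftables coexist briefly; a caller may want
       * to add the new reftable's clusters on top of this delta. */
      return new_usage - current_usage;
  }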