Hi,

AFAIU, external data files with data_file_raw=on are supposed to return the same data as the qcow2 file when read.  But we still use the qcow2 metadata structures (which are by default initialized to "everything unallocated"), even though we never ensure that the external data file is zero, too, so this can happen:

$ dd if=/dev/urandom of=foo.raw bs=1M count=64
[...]
$ sudo losetup -f --show foo.raw
/dev/loop0
$ sudo ./qemu-img create -f qcow2 -o \
    data_file=/dev/loop0,data_file_raw=on foo.qcow2 64M
[...]
$ sudo ./qemu-io -c 'read -P 0 0 64M' foo.qcow2
read 67108864/67108864 bytes at offset 0
64 MiB, 1 ops; 00.00 sec (25.036 GiB/sec and 400.5751 ops/sec)
$ sudo ./qemu-io -c 'read -P 0 0 64M' -f raw foo.raw
Pattern verification failed at offset 0, 67108864 bytes
read 67108864/67108864 bytes at offset 0
64 MiB, 1 ops; 00.01 sec (5.547 GiB/sec and 88.7484 ops/sec)

I suppose this behavior is fine for blockdev-create, because I guess it's the user's responsibility to ensure that the external data file is zero.  But maybe it isn't, so that's my first question: Is it really the user's responsibility, or should we always ensure the data file is zero?

My second question is: If we decide that this is fine for blockdev-create, should at least qcow2_co_create_opts() ensure that the data file it just created is zero?

Max
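For what it's worth, until this is settled a user can work around the mismatch by zeroing the data file themselves before creating the qcow2 image on top of it, so that the file's actual contents match the fresh "everything unallocated" metadata.  A minimal sketch (reusing the foo.raw name from the reproducer above; for a block device one would write zeroes to the device instead):

```shell
# Overwrite the prospective data file with zeroes so that it matches the
# fresh qcow2 metadata ("everything unallocated" reads back as zero).
dd if=/dev/zero of=foo.raw bs=1M count=64

# Sanity check: the first 64 MiB must now contain nothing but zero bytes.
cmp -n 67108864 foo.raw /dev/zero && echo "data file is all zero"
```

Only after this would `qemu-img create -o data_file=...,data_file_raw=on` produce an image whose raw data file and qcow2 view agree on reads.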