* [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
@ 2018-08-16  1:35 lampahome
  2018-08-16  8:22 ` Daniel P. Berrangé
  0 siblings, 1 reply; 5+ messages in thread
From: lampahome @ 2018-08-16  1:35 UTC (permalink / raw)
  To: QEMU Developers

We all know ext4 has a 16 TB file size limit, and other filesystems have
their own limits, too.
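For context, the 16 TB figure follows from ext4's on-disk format: extent records address logical blocks with 32-bit numbers, so with the common 4 KiB block size a single file tops out at 2^32 blocks. A quick sanity check of the arithmetic:

```shell
# ext4 extents use 32-bit logical block numbers; with 4 KiB blocks that
# caps a single file at 2^32 blocks * 4096 bytes each.
blocks=4294967296        # 2^32
block_size=4096          # 4 KiB, the common ext4 block size
max_file_bytes=$(( blocks * block_size ))
tib=$(( max_file_bytes / 1099511627776 ))   # bytes per TiB (2^40)
echo "$max_file_bytes bytes = $tib TiB"
```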

If I create a 20 TB qcow2 on ext4 and write more than 16 TB to it, the
data beyond 16 TB can't be written to the qcow2 file.

So, are there any better ways to handle this situation?

What I thought of is to create a new qcow2 (call it qcow2-new) and set
its backing file to the previous qcow2.
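That layering idea can be sketched with qemu-img (the filenames below are hypothetical, and the command is only printed, not run). Note it just adds a writable top layer in a second file; each individual file in the chain still has to stay under the filesystem's limit:

```shell
# Hypothetical sketch: turn the full old image into a read-only backing
# file and direct all new guest writes into a fresh top-layer qcow2.
base=qcow2-old.img
top=qcow2-new.img
create_cmd="qemu-img create -f qcow2 -o backing_file=$base,backing_fmt=qcow2 $top"
echo "$create_cmd"
```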


* Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
  2018-08-16  1:35 [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4? lampahome
@ 2018-08-16  8:22 ` Daniel P. Berrangé
  2018-08-16 11:46   ` Eric Blake
  0 siblings, 1 reply; 5+ messages in thread
From: Daniel P. Berrangé @ 2018-08-16  8:22 UTC (permalink / raw)
  To: lampahome; +Cc: QEMU Developers

On Thu, Aug 16, 2018 at 09:35:52AM +0800, lampahome wrote:
> We all know ext4 has a 16 TB file size limit, and other filesystems have
> their own limits, too.
> 
> If I create a 20 TB qcow2 on ext4 and write more than 16 TB to it, the
> data beyond 16 TB can't be written to the qcow2 file.
> 
> So, are there any better ways to handle this situation?

I'd really just recommend using a different filesystem; in particular, XFS
has a massively higher file size limit - tested to 500 TB in RHEL-7, with a
theoretical max size of 8 EB. It is a very mature filesystem and the default
in RHEL-7.
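As a sketch of that route (the device name and mount point below are assumptions, and mkfs/mount need root, so the commands are only printed here):

```shell
# Hypothetical: host the image on XFS, which has no 16 TB per-file cap.
dev=/dev/sdb1          # assumption: a spare partition
mnt=/var/lib/images    # assumption: where the images will live
setup="mkfs.xfs $dev && mount $dev $mnt && qemu-img create -f qcow2 $mnt/disk.qcow2 20T"
echo "$setup"
```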

> What I thought of is to create a new qcow2 (call it qcow2-new) and set
> its backing file to the previous qcow2.

A bit of a hack, but it could work, albeit with the extra pain of managing
your VMs. If you create the new qcow2 layer and the guest rewrites blocks
that were already written, you're going to end up storing that data twice
(the now-stale original data in the backing file, and the new active data
in the top layer). So your 20 TB disk may end up consuming way more than
20 TB on the host.
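To put an illustrative number on that worst case: if the guest eventually rewrites everything that was already in the backing file, the host ends up holding both copies (the figures here are made up for the example):

```shell
backing_tb=16     # data frozen in the read-only backing file
rewritten_tb=16   # the guest rewrites all of it into the top layer
new_tb=4          # plus brand-new data that also lands in the top layer
host_tb=$(( backing_tb + rewritten_tb + new_tb ))
echo "host stores $host_tb TB for a 20 TB guest disk"
```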

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


* Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
  2018-08-16  8:22 ` Daniel P. Berrangé
@ 2018-08-16 11:46   ` Eric Blake
  2018-08-17  8:05     ` lampahome
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Blake @ 2018-08-16 11:46 UTC (permalink / raw)
  To: Daniel P. Berrangé, lampahome; +Cc: QEMU Developers

On 08/16/2018 03:22 AM, Daniel P. Berrangé wrote:
> On Thu, Aug 16, 2018 at 09:35:52AM +0800, lampahome wrote:
>> We all know ext4 has a 16 TB file size limit, and other filesystems
>> have their own limits, too.
>>
>> If I create a 20 TB qcow2 on ext4 and write more than 16 TB to it, the
>> data beyond 16 TB can't be written to the qcow2 file.
>>
>> So, are there any better ways to handle this situation?
> 
> I'd really just recommend using a different filesystem, in particular XFS
> has massively higher file size limit - tested to 500 TB in RHEL-7, with a
> theoretical max size of 8 EB. It is a very mature filesystem & the default
> in RHEL-7.

Or target raw block devices instead of using a filesystem. LVM works great.
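A minimal sketch of that LVM route, with a hypothetical volume group name (lvcreate needs root, so the command is only printed). The guest consumes the logical volume directly, so no filesystem file-size limit applies:

```shell
# Hypothetical: carve a 20 TB logical volume out of volume group vg0
# and point qemu at the block device instead of a file on a filesystem.
vg=vg0
lv=vmdisk
lv_cmd="lvcreate -L 20T -n $lv $vg"
echo "$lv_cmd"
echo "guest disk: /dev/$vg/$lv (raw, or formatted as qcow2 with qemu-img)"
```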

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


* Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
  2018-08-16 11:46   ` Eric Blake
@ 2018-08-17  8:05     ` lampahome
  2018-08-17 14:20       ` Eric Blake
  0 siblings, 1 reply; 5+ messages in thread
From: lampahome @ 2018-08-17  8:05 UTC (permalink / raw)
  To: Eric Blake; +Cc: Daniel P. Berrangé, QEMU Developers

Really? How do I attach a block device to /dev/nbdN?
I only ever find tips for attaching file-based images to /dev/nbdN.



* Re: [Qemu-devel] How do you do when write more than 16TB data to qcow2 on ext4?
  2018-08-17  8:05     ` lampahome
@ 2018-08-17 14:20       ` Eric Blake
  0 siblings, 0 replies; 5+ messages in thread
From: Eric Blake @ 2018-08-17 14:20 UTC (permalink / raw)
  To: lampahome; +Cc: Daniel P. Berrangé, QEMU Developers

On 08/17/2018 03:05 AM, lampahome wrote:
> Really? How do I attach a block device to /dev/nbdN?
> I only ever find tips for attaching file-based images to /dev/nbdN.

Please don't top-post on technical lists; it makes the conversation
harder to follow.

I'm not sure how your question about /dev/nbdN relates to your setup of 
trying to produce a large guest image. You can export ANYTHING that qemu 
can recognize to /dev/nbdN, via:

qemu-nbd -c /dev/nbd$n $image_that_qemu_recognizes

This is true for raw images:
qemu-nbd -c /dev/nbd$n -f raw /path/to/file

for block devices such as LVM storing qcow2 format:
qemu-nbd -c /dev/nbd$n -f qcow2 /dev/mapper/vg_$whatever

for remote access to another NBD server:
qemu-nbd -c /dev/nbd$n -f raw nbd://remote:10809/export

again, but this time with only a 1M subset exported:
qemu-nbd -c /dev/nbd$n --image-opts \
driver=raw,offset=1M,size=1M,file.driver=nbd,\
file.server.type=inet,file.server.host=remote,\
file.server.port=10809,file.export=export

and so on.

Basically, if qemu can access storage, it can also export that storage
as NBD, and you can then hook up the kernel's NBD client to expose that
storage as a local block device for inspecting what the guest image
contains. (Word of caution: don't mount untrusted guest images in this
manner, as the guest could have planted a malicious filesystem that
triggers a bug and crashes the host. Instead, use libguestfs to mount
guest images in a safe VM environment, where at most you crash the VM
instead of the host.) And there's no requirement that you involve the
kernel via /dev/nbdN - you can instead access NBD servers from userspace
(qemu-img is one such userspace client, and Nir Soffer recently posted
to this list about a client being written in Python).
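For the userspace route, qemu-img itself can act as the NBD client, so no /dev/nbdN or root is needed; the server host and export name below are placeholders, and the commands are only printed:

```shell
# Hypothetical userspace NBD access: qemu-img connects to the server
# directly, no kernel block device involved.
url=nbd://remote:10809/export
info_cmd="qemu-img info --output=json $url"
copy_cmd="qemu-img convert -f raw -O qcow2 $url local-copy.qcow2"
echo "$info_cmd"
echo "$copy_cmd"
```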

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

