From: Jagane Sundar <jagane@sundar.org>
To: "dlaor@redhat.com" <dlaor@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
	Anthony Liguori <aliguori@us.ibm.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jes Sorensen <Jes.Sorensen@redhat.com>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	qemu-devel <qemu-devel@nongnu.org>, Avi Kivity <avi@redhat.com>,
	Stefan Hajnoczi <stefan.hajnoczi@uk.ibm.com>,
	Ayal Baron <abaron@redhat.com>
Subject: Re: [Qemu-devel] [RFC] live snapshot, live merge, live block migration
Date: Wed, 18 May 2011 08:49:50 -0700	[thread overview]
Message-ID: <4DD3EA9E.5030908@sundar.org> (raw)
In-Reply-To: <4DD2FC6C.5060608@redhat.com>

Hello Dor,

I'm glad I could convince you of the value of Livebackup. I
think Livesnapshot/Livemerge, Livebackup and Block
Migration all have very interesting use cases. For example:

- Livesnapshot/Livemerge is very useful in development/QA
   environments where one might want to create a snapshot
   before trying out new software, then committing or discarding
   the changes.
- Livebackup is useful in cloud environments where the
   Cloud Service Provider may want to offer regularly scheduled
   backed-up VMs with no effort on the part of the customer.
- Block Migration with COR (copy-on-read) is useful in Cloud Service
   Provider environments where an arbitrary VM may need to be
   migrated to another VM server, even though the VM
   is on direct-attached storage.

The above is by no means an exhaustive list of use cases. I
am sure qemu/qemu-kvm users can come up with more.

Although there are some common concepts in these three
technologies, I think we should support all three in base
qemu. This would make qemu/qemu-kvm more feature-rich
than VMware, Xen, and Hyper-V.

Thanks,
Jagane

On 5/17/2011 3:53 PM, Dor Laor wrote:
> On 05/16/2011 11:23 AM, Jagane Sundar wrote:
>> Hello Dor,
>>
>> Let me see if I understand live snapshot correctly:
>> If I want to configure a VM for daily backup, then I would do
>> the following:
>> - Create a snapshot s1. s0 is marked read-only.
>> - Do a full backup of s0 on day 0.
>> - On day 1, I would create a new snapshot s2, then
>> copy over the snapshot s1, which is the incremental
>> backup image from s0 to s1.
>> - After copying s1 over, I do not need that snapshot, so
>> I would live merge s1 with s0, to create a new merged
>> read-only image s1'.
>> - On day 2, I would create a new snapshot s3, then
>> copy over s2, which is the incremental backup from
>> s1' to s2.
>> - And so on...
>>
>> With this sequence of operations, I would need to keep a
>> snapshot active at all times, in order to enable the
>> incremental backup capability, right?
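
To pin down the cycle I'm describing, here is a toy Python model of the
bookkeeping -- dicts standing in for COW images, every name made up for
illustration, none of it actual qemu code:

    # Each image is a dict of block -> data; "snapshot" means opening a
    # new, empty COW leaf on top of the chain.
    chain = [{0: "a", 1: "b"}]        # day 0: s0 is the only image

    def snapshot(chain):
        chain.append({})              # new writable leaf; layers below
        return chain[-1]              # become read-only

    s1 = snapshot(chain)              # s0 frozen; full backup of s0 taken
    s1[1] = "b2"                      # guest writes land in the leaf s1

    # Day 1: freeze s1 under a new leaf s2, ship s1 as the incremental,
    # then live-merge s1 into s0 so the chain stays short.
    s2 = snapshot(chain)
    incremental = chain[-2]           # s1: exactly the blocks dirtied
    chain[0].update(incremental)      # merge s1 -> s0
    chain.remove(incremental)

    print(chain)                      # [{0: 'a', 1: 'b2'}, {}] -- s0', s2
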
> No and yes ;-)
>
> For regular non-incremental backups you can have no snapshot active
> most of the time:
>
>    - Create a snapshot s1. s0 is marked read-only.
>    - Do a full backup of s0 on day 0.
>    - Once backup is finished, live merge s1 into s0 and make s0 writeable
>      again.
>
> So this way there is no performance penalty.
> Here we need an option to track dirty block bits (either in the internal
> image format or in an external file). This will be both efficient and
> get the job done.
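
A minimal sketch of such dirty block tracking in Python -- in-memory
only; as noted above, qemu would need to persist it in the image format
or an external file:

    # One bit per block; every guest write marks its block dirty, and an
    # incremental backup copies exactly the blocks whose bits are set.
    class DirtyBitmap:
        def __init__(self, nblocks):
            self.bits = bytearray((nblocks + 7) // 8)

        def mark(self, block):        # call on every guest write
            self.bits[block // 8] |= 1 << (block % 8)

        def dirty_blocks(self):       # what the next incremental copies
            return [b for b in range(len(self.bits) * 8)
                    if self.bits[b // 8] & (1 << (b % 8))]

    bm = DirtyBitmap(1024)
    for b in (7, 42, 42, 513):        # guest writes since the last backup
        bm.mark(b)
    print(bm.dirty_blocks())          # [7, 42, 513]
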
>
> But in order to be efficient in storage we'll need to ask the snapshot
> creation to refer only to these dirty blocks.
> Well, thinking out loud, this turns out to be your solution :)
>
> Ok, I do see the value there is with incremental backups.
>
> I'm aware that there were requirements that the backup itself be done
> from the guest filesystem level, where incremental backup would be done
> at the FS layer.
>
> Still I do see the value in your solution.
>
> Another option for us would be to keep the latest snapshots around and
> let the guest I/O go through them all the time. There is some
> performance cost, but as the newer image formats develop, this cost is
> relatively low.
>
>> If the base image is s0 and there is a single snapshot s1, then a
>> read operation from the VM will first look in s1. If the block is
>> not present in s1, then it will read the block from s0, right?
>> So most reads from the VM will effectively translate into two
>> reads, right?
>>
>> Isn't this a continuous performance penalty for the VM,
>> amounting to almost doubling the read I/O from the VM?
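
To put a number on this concern, here is a toy model of the COW read
path that counts layer lookups (illustrative only, not qemu code):

    # Chain of layers, leaf first on reads; blocks written since the
    # snapshot live in s1, everything else falls through to the base s0.
    lookups = 0

    def read_block(chain, block):
        global lookups
        for layer in reversed(chain): # leaf first, then toward the base
            lookups += 1
            if block in layer:
                return layer[block]
        raise KeyError(block)

    s0 = {b: "base" for b in range(8)}   # base image, 8 blocks
    s1 = {3: "new"}                      # leaf with one dirty block
    for b in range(8):
        read_block(chain=[s0, s1], block=b)
    print(lookups)   # 15: every block except 3 cost two lookups
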
>>
>> Please read below for more comments:
>>>> 2. Robustness of this solution in the face of
>>>> errors in the disk, etc. If any one of the snapshot
>>>> files were to get corrupted, the whole VM is
>>>> adversely impacted.
>>> Since the base images and any snapshot which is not a leaf are marked
>>> read-only, there is no such risk.
>>>
>> What happens when a VM host reboots while a live merge of s0
>> and s1 is being done?
> Live merge uses live copy, which duplicates each write I/O.
> After a host crash, the merge will continue from the point where it
> stopped.
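
A sketch of how I understand that resume working -- hypothetical code,
assuming the merge persists a progress offset after each copied block:

    # Copy s1 down into s0 one block at a time, persisting a progress
    # offset after each block. Concurrent guest writes are mirrored to
    # both images, so rerunning from the saved offset after a crash is
    # safe and idempotent.
    def merge_step(s0, s1, progress):
        block = progress["offset"]
        if block in s1:
            s0[block] = s1[block]
        progress["offset"] = block + 1    # persist before acking the step

    s0, s1 = {0: "a", 1: "b"}, {1: "b2", 3: "d"}
    progress = {"offset": 0}
    merge_step(s0, s1, progress)          # block 0 done...
    # -- host crashes here; on restart, reload progress and continue --
    while progress["offset"] < 4:
        merge_step(s0, s1, progress)
    print(s0)                             # {0: 'a', 1: 'b2', 3: 'd'}
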
>
> I think I answered your other good comments above.
> Thanks,
> Dor
>
>>>> The primary goal of Livebackup architecture was to have zero
>>>> performance impact on the running VM.
>>>>
>>>> Livebackup impacts performance of the VM only when the
>>>> backup client connects to qemu to transfer the modified
>>>> blocks over, which should be, say 15 minutes a day, for a
>>>> daily backup schedule VM.
>>> In case there were lots of changes, for example an additional 50GB of
>>> changes, it will take more time and there will be a performance hit.
>>>
>> Of course, the performance hit is proportional to the amount of data
>> being copied over. However, the performance penalty is paid during
>> the backup operation, and not during normal VM operation.
>>
>>>> One useful thing to do is to evaluate the important use cases
>>>> for this technology, and then decide which approach makes
>>>> most sense. As an example, let me state this use case:
>>>> - An IaaS cloud, where VMs are always on, running off a local
>>>> disk, and need to be backed up once a day or so.
>>>>
>>>> Can you list some of the other use cases that live snapshot and
>>>> live merge were designed to solve. Perhaps we can put up a
>>>> single wiki page that describes all of these proposals.
>>> Both solutions can serve the same scenario:
>>> With live snapshot, the backup is done as follows:
>>>
>>> 1. Take a live snapshot (s1) of image s0.
>>> 2. Newer writes go to the snapshot s1 while s0 is read-only.
>>> 3. Backup software processes the s0 image. There are multiple ways of
>>>    doing that:
>>>    a. Use qemu-img and get the dirty blocks since the former backup.
>>>       Currently qemu-img does not support this; nevertheless, such a
>>>       mechanism will work for LVM, btrfs, NetApp.
>>>    b. Mount the s0 image in another guest that runs traditional backup
>>>       software at the file system level and let it do the backup.
>>> 4. Live merge s1 -> s0. We'll use live copy for that, so each write is
>>>    duplicated (like your live backup solution).
>>> 5. Delete s1.
>>>
>>> As you can see, both approaches are very similar, while live snapshot is
>>> more general and not tied to backup specifically.
>>>
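
For reference, the control flow of those five steps with stub helpers --
all hypothetical stand-ins, since, as noted, today's qemu-img has no
dirty-block support:

    # Stubbed orchestration of the five steps above; every helper is a
    # hypothetical stand-in so the flow can be read end to end.
    def take_live_snapshot(image):  return image + "-snap"           # steps 1+2
    def backup(image):              print("backing up", image)       # step 3
    def live_merge(src, dst):       print("merging", src, "->", dst) # step 4
    def delete(image):              print("deleting", image)         # step 5

    def backup_cycle(s0):
        s1 = take_live_snapshot(s0)   # new writes go to s1, s0 read-only
        backup(s0)                    # dirty blocks via qemu-img, or mount
                                      # s0 in a backup guest at the FS level
        live_merge(s1, s0)            # each write duplicated while merging
        delete(s1)

    backup_cycle("s0.qcow2")
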
>> As I explained at the head of this email, I believe that live snapshot
>> results in the VM read I/O paying a high penalty during normal operation
>> of the VM, whereas Livebackup results in this penalty being paid only
>> during the backup dirty block transfer operation.
>>
>> Finally, I would like to bring up considerations of disk space. To
>> expand on my use case further, consider a Cloud Compute service with
>> 100 VMs running on a host. If live snapshot is used to create snapshot
>> COW files, then potentially each VM could grow its COW snapshot file
>> to the size of the base file, which means the VM host needs to reserve
>> snapshot space equal to the size of each VM's disk - i.e., an 8GB VM
>> would require an additional 8GB of reserved space, so that the service
>> provider could safely guarantee that the snapshot will not run out of
>> space.
>> Contrast this with livebackup, wherein the COW files are kept only when
>> the dirty block transfers are being done. This means that for a host
>> with 100 VMs, if the backup server connects to each of the 100 qemu
>> instances one by one and does a livebackup, the service provider need
>> only provision spare disk for at most the COW size of one VM.
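
Worked out with the numbers above: reserving full-size snapshot space
for 100 VMs at 8GB apiece means setting aside 100 x 8GB = 800GB up
front, versus roughly 8GB -- one VM's worst-case COW file -- at any
given moment under livebackup.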
>>
>> Thanks,
>> Jagane
>>
>>

