* [Qemu-devel] Incremental drive-backup with dirty bitmaps
@ 2019-01-22 19:29 Bharadwaj Rayala
  2019-01-22 21:54 ` Eric Blake
  2019-01-24  8:57 ` Kashyap Chamarthy
  0 siblings, 2 replies; 8+ messages in thread
From: Bharadwaj Rayala @ 2019-01-22 19:29 UTC (permalink / raw)
  To: qemu-discuss, qemu-devel; +Cc: kchamart, Suman Swaroop, kashyap.cv

Hi,

TL(Can't)R: I am trying to figure out a workflow for doing incremental
drive-backups using dirty bitmaps. It feels like qemu lacks some essential
features to achieve it.

I am trying to build a backup workflow (program) using drive-backup along
with dirty bitmaps to take backups of KVM VMs. Either the pull or the push
model works for me. Since the drive-backup push model is already
implemented, I am going forward with it. I am not able to figure out a few
details and couldn't find any documentation around them. Any help would be
appreciated.

Context: I would like to take recoverable, consistent, incremental
backups of KVM VMs whose disks are backed either by qcow2 or raw images.
Let's say there is a VM vm1 with drive1 backed by the image chain (A <-- B).
These are the rough steps I would like to take.

Method 1:
Backup:
1. Perform a full backup using `drive-backup(drive1, sync=full,
dest=/nfs/vm1/drive1)`. Use a transaction to also do
`block-dirty-bitmap-add(drive1, bitmap1)`. Store the VM config separately.
2. Perform an incremental backup using `drive-backup(drive1,
sync=incremental, mode=existing, bitmap=bitmap1, dest=/nfs/vm1/drive1)`.
Store the VM config separately. (A QMP sketch of steps 1 and 2 follows
below.)
3. Rinse and repeat.
Recovery (just the latest backup; incrementals not required):
    Copy the full qcow2 from nfs to host storage. Spawn a new VM with the
same VM config.
Temporary quick recovery:
    Create a new qcow2 layer on top of the existing /nfs/vm1/drive1 on the
nfs storage itself. Spawn a new VM with its disk on the nfs storage itself.
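
For concreteness, steps 1 and 2 would look roughly like the following QMP
commands (an untested sketch; the format/target values are assumptions
based on the paths above):

    # Step 1: full backup and bitmap creation, done atomically
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "block-dirty-bitmap-add",
          "data": { "node": "drive1", "name": "bitmap1" } },
        { "type": "drive-backup",
          "data": { "device": "drive1", "sync": "full",
                    "target": "/nfs/vm1/drive1", "format": "qcow2" } } ] } }

    # Step 2: incremental backup into the already-existing target
    { "execute": "drive-backup",
      "arguments": { "device": "drive1", "sync": "incremental",
                     "bitmap": "bitmap1", "mode": "existing",
                     "target": "/nfs/vm1/drive1", "format": "qcow2" } }
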
Issues I face:
1. Does drive-backup stall guest writes for the whole time the block job is
in progress? A stall would be a strict no for me. I did not find any
documentation regarding it, only a presentation (from Kashyap) mentioning
it. (Assuming yes!)
2. Is the backup consistent? Are the drive file systems quiesced on backup?
(Assuming no!)

To achieve both of the above, one hack I could think of was to take a
snapshot and read from the snapshot.

Method 2:
1. Perform a full backup using `drive-backup(drive1, sync=full,
dest=/nfs/vm1/drive1)`. Use a transaction to also do
`block-dirty-bitmap-add(drive1, bitmap1)`. Store the VM config separately.
2. Perform the incremental backup by:
     a. Add bitmap2 to drive1: `block-dirty-bitmap-add(drive1, bitmap2)`.
     b. Take a VM snapshot of drive1 (exclude memory, quiesce). The drive1
image chain is now A <-- B <-- C.
     c. Take the incremental using bitmap1 but reading data from node B:
`drive-backup(*#nodeB*, sync=incremental, mode=existing, bitmap=bitmap1,
dest=/nfs/vm1/drive1)`. (See the QMP sketch below.)
     d. Delete bitmap1: `block-dirty-bitmap-remove(drive1, bitmap1)`.
     e. Delete the VM snapshot on drive1. The drive1 image chain is now
A <-- B.
     f. bitmap2 now tracks the changes from incremental 1 to incremental 2.
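
In QMP terms, steps (b) and (c) would look roughly like this (a sketch
only; the snapshot file path and "nodeB" are placeholders for whatever
names are actually in use, and the drive-backup call below is exactly the
one that qemu rejects for me):

    # (b) external snapshot of drive1; the old top image B becomes read-only
    { "execute": "blockdev-snapshot-sync",
      "arguments": { "device": "drive1",
                     "snapshot-file": "/images/C.qcow2",
                     "format": "qcow2" } }

    # (c) incremental backup reading from node B instead of the live node
    { "execute": "drive-backup",
      "arguments": { "device": "nodeB", "sync": "incremental",
                     "bitmap": "bitmap1", "mode": "existing",
                     "target": "/nfs/vm1/drive1", "format": "qcow2" } }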

Drawbacks with this method (had it worked) would be that incremental
backups contain a superset of the blocks that actually changed between this
snapshot and the last one (incremental x would also contain blocks that
changed while the incremental x-1 backup was in progress). But there are no
correctness issues.


*I cannot do this because drive-backup does not allow the node being backed
up to differ from the node the bitmap is attached to. :( *
Some other issues I was facing that I worked around:
1. Let's say I have to back up a VM with 2 disks (both at a fixed point in
time; either both fail or both pass). To atomically do a bitmap-add and a
drive-backup(sync=full) I can use transactions. To achieve a backup at a
fixed point in time, I can use a transaction with multiple drive-backups.
To either fail or succeed the whole backup (when multiple drives are
present), I can use completion-mode=grouped. But I can't combine them, as
that is not supported, i.e. do a
    Transaction{drive-backup(drive1), dirty-bitmap-add(drive1,
bitmap1), drive-backup(drive2), dirty-bitmap-add(drive2, bitmap1),
completion-mode=grouped}.
(See the QMP sketch after this list.)
 Workaround: Create the bitmaps first, then take the full backup. Effect:
incrementals would be a small superset of the actually changed blocks.
2. Why do I need to dismiss old jobs to start a new job on a node? I want
to retain the block-job end state for a day before I clear it, so I set
auto-dismiss to false. This does not allow new jobs to run until the old
job is dismissed, even if its state is concluded.
 Workaround: none; store the end-job status somewhere else.
3. Is there a way pre 2.12 to achieve auto-finalise=false in a
transaction? Can I somehow add a dummy block job that will only finish
when I want to finalise the actual two disks' block jobs? My backup
workflow needs to run on environments pre 2.12.
 Workaround: Could not achieve this. So if an incremental fails after the
block jobs succeed but before I can ensure success (I have to do some
metadata operations on my side), I retry with sync=full mode.
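
For reference, the transaction from issue 1 above would look roughly like
this in raw QMP (a sketch; completion-mode lives in the transaction's
"properties", the drive2 target path is illustrative, and this is the form
that qemu rejects for me):

    { "execute": "transaction",
      "arguments": {
        "actions": [
          { "type": "block-dirty-bitmap-add",
            "data": { "node": "drive1", "name": "bitmap1" } },
          { "type": "drive-backup",
            "data": { "device": "drive1", "sync": "full",
                      "target": "/nfs/vm1/drive1" } },
          { "type": "block-dirty-bitmap-add",
            "data": { "node": "drive2", "name": "bitmap1" } },
          { "type": "drive-backup",
            "data": { "device": "drive2", "sync": "full",
                      "target": "/nfs/vm1/drive2" } }
        ],
        "properties": { "completion-mode": "grouped" }
      } }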


*So what is the recommended way of taking backups with incremental
bitmaps?*
Thank you for taking the time to read through this.

Best,
Bharadwaj.


* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-01-22 19:29 [Qemu-devel] Incremental drive-backup with dirty bitmaps Bharadwaj Rayala
@ 2019-01-22 21:54 ` Eric Blake
  2019-01-23 18:08   ` Bharadwaj Rayala
  2019-01-24  8:57 ` Kashyap Chamarthy
  1 sibling, 1 reply; 8+ messages in thread
From: Eric Blake @ 2019-01-22 21:54 UTC (permalink / raw)
  To: Bharadwaj Rayala, qemu-discuss, qemu-devel
  Cc: kashyap.cv, Suman Swaroop, kchamart, John Snow


On 1/22/19 1:29 PM, Bharadwaj Rayala wrote:
> Hi,
> 
> TL(Cant)R: I am trying to figure out a workflow for doing incremental
> drive-backups using dirty-bitmaps. Feels qemu lacks some essential features
> to achieve it.
> 
> I am trying to build a backup workflow(program) using drive-backup along
> with dirty bitmaps to take backups of kvm vms. EIther pull/push model works
> for me. Since drive-backup push model is already implemented, I am
> going forward with it. I am not able to figure out a few details and
> couldn't find any documentation around it. Any help would be appreciated
> 
> Context: I would like to take recoverable, consistent, incremental
> backups of kvm vms, whose disks are backed either by qcow2 or raw images.
> Lets say there is a vm:vm1 with drive1 backed by image chain( A <-- B ).
> This are the rough steps i would like to do.
> 
> Method 1:
> Backup:
> 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> /nfs/vm1/drive1)`. Use transaction to do `block-dirty-bitmap-add(drive1,
> bitmap1)`. Store the vm config seperately
> 2. Perform an incremental backup using `drive-backup(drive1,
> sync=incremental, mode=existing, bitmap=bitmap1, dest=/nfs/vm1/drive1)`.
> Store the vm config seperately
> 3. Rinse and repeat.
> Recovery(Just the latest backup, incremental not required):
>     Copy the full qcow2 from nfs to host storage. Spawn a new vm with the
> same vm config.
> Temporary quick recovery:
>     Create a new qcow2 layer on top of existing /nfs/vm1/drive1 on the nfs
> storage itself. Spawn a new vm with disk on nfs storage itself.

Sounds like it should work; using qemu to push the backup out.

> were
> Issues i face:
> 1. Does the drive-backup stall for the whole time the block job is in
> progress. This is a strict no for me. I didnot find any documentation
> regarding it but a powerpoint presentation(from kaskyapc) mentioning it.
> (Assuming yes!)

The drive-backup is running in parallel to the guest.  I'm not sure what
stalls you are seeing - but as qemu is doing all the work, it DOES have
to service both guest requests and the work to copy out the backup;
also, if you have known-inefficient lseek() situations, there may be
cases where qemu is doing a lousy job (there's work underway on the list
to improve qemu's caching of lseek() data).

> 2. Is the backup consistent? Are the drive file-systems quiesced on backup?
> (Assuming no!)

If you want the file systems quiesced on backup, then merely bracket
your transaction that kicks off the drive-backup inside guest-agent
commands that freeze and thaw the disk.  So, consistency is not default
(because it requires trusting the guest), but is possible.
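
As a rough sketch (assuming the qemu guest agent is installed and its
channel is reachable, e.g. via 'virsh qemu-agent-command'; the freeze/thaw
commands go to the guest agent, not the QMP monitor):

    # guest-agent channel: flush and freeze guest file systems
    { "execute": "guest-fsfreeze-freeze" }

    # QMP monitor: kick off the backup job(s) while the guest is frozen
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "drive-backup",
          "data": { "device": "drive1", "sync": "incremental",
                    "bitmap": "bitmap1", "mode": "existing",
                    "target": "/nfs/vm1/drive1", "format": "qcow2" } } ] } }

    # guest-agent channel: thaw as soon as the job has been started
    { "execute": "guest-fsfreeze-thaw" }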

> 
> To achieve both of the above, one hack i could think of was to take a
> snapshot and read from the snapshot.
> 
> Method 2:
> 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> /nfs/vm1/drive1)`. Use transaction to do `block-dirty-bitmap-add(drive1,
> bitmap1)`. Store the vm config seperately
> 2. Perform the incremental backup by
>      a. add bitmap2 to drive1 `block-dirty-bitmap-add(drive1, bitmap2)`.
>      b. Take a vm snapshot with drive1(exclude memory, quiesce). The drive1
> image chain is now A<--B<--C.
>      c. Take incremental using bitmap1 but using data from node B.
> `drive-backup(*#nodeB*, sync=incremental, mode=existing, bitmap=bitmap1,
> dest=/nfs/vm1/drive1)`
>      d. Delete bitmap1 `block-dirty-bitmap-delete(drive1, bitmap1)`
>      e. Delete vm snapshot on drive1. The drive1 image chain is now A <--B.
>      f. bitmap2 now tracks the changes from incrementa 1 to incremental 2.
> 
> Drawbacks with this method would be(had it worked) that incremental backups
> would contain dirty blocks that are a superset of the actual blocks that
> are changed between the snapshot and the last snapshot.(Incremental x would
> contain blocks that have changed when incremental x-1 backup was in
> progress). But there are no correctness issues.
> 
> 
> *I cannot do this because drive-backup doesnot allow bitmap and node that
> the bitmap is attached to, to be different. :( *

It might, as long as the bitmap is found on the backing chain (I'm a bit
fuzzier on that case, but KNOW that for pull-mode backups, my libvirt
code is definitely relying on being able to access the bitmap from the
backing file of the BDS being exported over NBD).

> Some other issues i was facing that i worked around:
> 1. Lets say i have to backup a vm with 2 disks(both at a fixed point in
> time, either both fail or both pass). To atomically do a bitmap-add and
> drive-backup(sync=full) i can use transcations. To achieve a backup at a
> fixed point in time, i can use transaction with multiple drive-backups. To
> either fail the whole backup or succeed(when multiple drives are present),
> i can use completion-mode = grouped. But then i cant combine them as its
> not supported. i.e, do a
>     Transaction{drive-backup(drive1), dirty-bitmap-add(drive1,
> bitmap1),drive-backup(drive2), dirty-bitmap-add(drive2, bitmap1),
> completion-mode=grouped}.

What error message are you getting?  I'm not surprised if
completion-mode=grouped isn't playing nicely with bitmaps in
transactions, although that is something we should fix.

>  Workaround: Create bitmaps first, then take full. Effect: Incrementals
> would be a small superset of actual changed blocks.
> 2. Why do I need to dismiss old jobs to start a new job on node. I want to
> retain the block-job end state for a day before i clear them. So i set
> auto-dismiss to false. This doesnot allow new jobs to run unless the old
> job is dismissed even if state=concluded.

Yes, there is probably more work needed to make parallel jobs do what
people want.

>  Workaround: no workaround, store the end-job-status somewhere else.
> 3. Is there a way pre 2.12 to achieve auto-finalise = false in a
> transaction. Can I somehow add a dummy block job, that will only finish
> when i want to finalise the actual 2 disks block jobs? My backup workflow
> needs to run on env's pre 2.12.

Ouch - backups pre-2.12 have issues.  If I had not read this paragraph,
my recommendation would be to stick to 3.1 and use pull-mode backups
(where you use NBD to learn which portions of the image were dirtied,
and pull those portions of the disk over NBD rather than qemu pushing
them); I even have a working demo of preliminary libvirt code driving
that which I presented at last year's KVM Forum.

>  Workaround: Couldnot achieve this. So if an incremental fails after block
> jobs succeed before i can ensure success(have to do some metadata
> operations on my side), i retry with sync=full mode.
> 
> 
> *So what is the recommeded way of taking backups with incremental bitmaps
> ? *
> Thanks you for taking time to read through this.
> 
> Best,
> Bharadwaj.
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-01-22 21:54 ` Eric Blake
@ 2019-01-23 18:08   ` Bharadwaj Rayala
  2019-01-23 19:09     ` Eric Blake
  0 siblings, 1 reply; 8+ messages in thread
From: Bharadwaj Rayala @ 2019-01-23 18:08 UTC (permalink / raw)
  To: Eric Blake
  Cc: qemu-devel, Kashyap Chamarthy, Suman Swaroop, kchamart,
	John Snow, qemu-discuss

Replied inline.

On Wed, Jan 23, 2019 at 3:25 AM Eric Blake <eblake@redhat.com> wrote:

> On 1/22/19 1:29 PM, Bharadwaj Rayala wrote:
> > Hi,
> >
> > TL(Cant)R: I am trying to figure out a workflow for doing incremental
> > drive-backups using dirty-bitmaps. Feels qemu lacks some essential
> features
> > to achieve it.
> >
> > I am trying to build a backup workflow(program) using drive-backup along
> > with dirty bitmaps to take backups of kvm vms. EIther pull/push model
> works
> > for me. Since drive-backup push model is already implemented, I am
> > going forward with it. I am not able to figure out a few details and
> > couldn't find any documentation around it. Any help would be appreciated
> >
> > Context: I would like to take recoverable, consistent, incremental
> > backups of kvm vms, whose disks are backed either by qcow2 or raw images.
> > Lets say there is a vm:vm1 with drive1 backed by image chain( A <-- B ).
> > This are the rough steps i would like to do.
> >
> > Method 1:
> > Backup:
> > 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> > /nfs/vm1/drive1)`. Use transaction to do `block-dirty-bitmap-add(drive1,
> > bitmap1)`. Store the vm config seperately
> > 2. Perform an incremental backup using `drive-backup(drive1,
> > sync=incremental, mode=existing, bitmap=bitmap1, dest=/nfs/vm1/drive1)`.
> > Store the vm config seperately
> > 3. Rinse and repeat.
> > Recovery(Just the latest backup, incremental not required):
> >     Copy the full qcow2 from nfs to host storage. Spawn a new vm with the
> > same vm config.
> > Temporary quick recovery:
> >     Create a new qcow2 layer on top of existing /nfs/vm1/drive1 on the
> nfs
> > storage itself. Spawn a new vm with disk on nfs storage itself.
>
> Sounds like it should work; using qemu to push the backup out.
>
> > were
> > Issues i face:
> > 1. Does the drive-backup stall for the whole time the block job is in
> > progress. This is a strict no for me. I didnot find any documentation
> > regarding it but a powerpoint presentation(from kaskyapc) mentioning it.
> > (Assuming yes!)
>
> The drive-backup is running in parallel to the guest.  I'm not sure what
> stalls you are seeing - but as qemu is doing all the work, it DOES have
> to service both guest requests and the work to copy out the backup;
> also, if you have known-inefficient lseek() situations, there may be
> cases where qemu is doing a lousy job (there's work underway on the list
> to improve qemu's caching of lseek() data).
>
>
Eric, I watched your KVM Forum video
https://www.youtube.com/watch?v=zQK5ANionpU, which cleared some things up
for me. Let's say you have a disk of size 10GB. I had assumed that if
drive-backup has copied up to the 2GB offset, qemu would have to stall
guest writes landing between 2GB and 10GB, unless qemu did some internal
qcow snapshotting at the start of the backup job and committed it at the
end. But if I understand your explanation correctly, qemu does not create
a new qcow file; instead, when a write comes from the guest to the live
image, the old block is first written to the backup synchronously before
the new data is written to the live qcow2 file. This would not stall the
writes, but it would slow down guest writes, as an extra write to the
target file on secondary storage (over nfs) has to happen first. If the
old-block write to nfs fails, does the backup fail with on-target-error
handled appropriately, or does it stall the guest write?


> > 2. Is the backup consistent? Are the drive file-systems quiesced on
> backup?
> > (Assuming no!)
>
> If you want the file systems quiesced on backup, then merely bracket
> your transaction that kicks off the drive-backup inside guest-agent
> commands that freeze and thaw the disk.  So, consistency is not default
> (because it requires trusting the guest), but is possible.
>
>
Ok. Method 2 below would not even be required if both the above issues can
be solved.


> >
> > To achieve both of the above, one hack i could think of was to take a
> > snapshot and read from the snapshot.
> >
> > Method 2:
> > 1. Perform a full backup using `drive-backup(drive1, sync=full, dest =
> > /nfs/vm1/drive1)`. Use transaction to do `block-dirty-bitmap-add(drive1,
> > bitmap1)`. Store the vm config seperately
> > 2. Perform the incremental backup by
> >      a. add bitmap2 to drive1 `block-dirty-bitmap-add(drive1, bitmap2)`.
> >      b. Take a vm snapshot with drive1(exclude memory, quiesce). The
> drive1
> > image chain is now A<--B<--C.
> >      c. Take incremental using bitmap1 but using data from node B.
> > `drive-backup(*#nodeB*, sync=incremental, mode=existing, bitmap=bitmap1,
> > dest=/nfs/vm1/drive1)`
> >      d. Delete bitmap1 `block-dirty-bitmap-delete(drive1, bitmap1)`
> >      e. Delete vm snapshot on drive1. The drive1 image chain is now A
> <--B.
> >      f. bitmap2 now tracks the changes from incrementa 1 to incremental
> 2.
> >
> > Drawbacks with this method would be(had it worked) that incremental
> backups
> > would contain dirty blocks that are a superset of the actual blocks that
> > are changed between the snapshot and the last snapshot.(Incremental x
> would
> > contain blocks that have changed when incremental x-1 backup was in
> > progress). But there are no correctness issues.
> >
> >
> > *I cannot do this because drive-backup doesnot allow bitmap and node that
> > the bitmap is attached to, to be different. :( *
>
> It might, as long as the bitmap is found on the backing chain (I'm a bit
> fuzzier on that case, but KNOW that for pull-mode backups, my libvirt
> code is definitely relying on being able to access the bitmap from the
> backing file of the BDS being exported over NBD).
>
>
Sorry, I don't get this. Let's say drive-1 has the chain A (raw) <--
B (qcow2). @suman (cc'ed) created a bitmap (bitmap1) on device drive-1,
then took a snapshot of it. At this point the chain would be something like
A (raw) <-- B (qcow2, snapshot) <-- C (qcow2, live). Would the bitmap that
was created on drive-1 still be attached to #nodeB, or would it be attached
to #nodeC? Would it have all the dirty blocks from "bitmap-add to now", or
only the dirty blocks from "bitmap-add to snapshot"?
If the bitmap is now attached to the live drive-1 (i.e. nodeC), it would
have all the dirty blocks, but then can I do a drive-backup(bitmap1,
src=#nodeB)?

If the bitmap stays attached to nodeB, it would only have the dirty blocks
up to the point snapshot C is created. But this is a problem, as a backup
workflow/program should not restrict users from creating other snapshots.
The backup workflow can take additional snapshots as done in Method 2 above
if it wants, and then remove the snapshot once the backup job is done. I
guess this problem would exist for the pull-based model as well. I am
currently trying my workflow on a RHEV cluster, and I do not want my backup
workflow to interfere with snapshots triggered from RHEV-M/oVirt.


> > Some other issues i was facing that i worked around:
> > 1. Lets say i have to backup a vm with 2 disks(both at a fixed point in
> > time, either both fail or both pass). To atomically do a bitmap-add and
> > drive-backup(sync=full) i can use transcations. To achieve a backup at a
> > fixed point in time, i can use transaction with multiple drive-backups.
> To
> > either fail the whole backup or succeed(when multiple drives are
> present),
> > i can use completion-mode = grouped. But then i cant combine them as its
> > not supported. i.e, do a
> >     Transaction{drive-backup(drive1), dirty-bitmap-add(drive1,
> > bitmap1),drive-backup(drive2), dirty-bitmap-add(drive2, bitmap1),
> > completion-mode=grouped}.
>
> What error message are you getting?  I'm not surprised if
> completion-mode=grouped isn't playing nicely with bitmaps in
> transactions, although that should be something that we should fix.
>

The error says grouped completion-mode is not allowed with the command
'dirty-bitmap-add'.


>
> >  Workaround: Create bitmaps first, then take full. Effect: Incrementals
> > would be a small superset of actual changed blocks.
> > 2. Why do I need to dismiss old jobs to start a new job on node. I want
> to
> > retain the block-job end state for a day before i clear them. So i set
> > auto-dismiss to false. This doesnot allow new jobs to run unless the old
> > job is dismissed even if state=concluded.
>
> Yes, there is probably more work needed to make parallel jobs do what
> people want.
>
> >  Workaround: no workaround, store the end-job-status somewhere else.
> > 3. Is there a way pre 2.12 to achieve auto-finalise = false in a
> > transaction. Can I somehow add a dummy block job, that will only finish
> > when i want to finalise the actual 2 disks block jobs? My backup workflow
> > needs to run on env's pre 2.12.
>
> Ouch - backups pre-2.12 have issues.  If I had not read this paragraph,
> my recommendation would be to stick to 3.1 and use pull-mode backups
> (where you use NBD to learn which portions of the image were dirtied,
> and pull those portions of the disk over NBD rather than qemu pushing
> them); I even have a working demo of preliminary libvirt code driving
> that which I presented at last year's KVM Forum.
>

What do you mean by issues? Do you mean data/corruption bugs, or a lack of
some nice functionality of the kind we are discussing here?


>
> >  Workaround: Couldnot achieve this. So if an incremental fails after
> block
> > jobs succeed before i can ensure success(have to do some metadata
> > operations on my side), i retry with sync=full mode.
> >
> >
> > *So what is the recommeded way of taking backups with incremental bitmaps
> > ? *
> > Thanks you for taking time to read through this.
> >
> > Best,
> > Bharadwaj.
> >
>
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.           +1-919-301-3226
> Virtualization:  qemu.org | libvirt.org


Thanks a lot, Eric, for spending your time answering my queries. I don't
know if you work with Kashyap Chamarthy, but your help and his blogs are
lifesavers.

Thank you,
Bharadwaj.


* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-01-23 18:08   ` Bharadwaj Rayala
@ 2019-01-23 19:09     ` Eric Blake
  2019-01-24  9:16       ` Kashyap Chamarthy
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Blake @ 2019-01-23 19:09 UTC (permalink / raw)
  To: Bharadwaj Rayala
  Cc: qemu-devel, Kashyap Chamarthy, Suman Swaroop, kchamart,
	John Snow, qemu-discuss


On 1/23/19 12:08 PM, Bharadwaj Rayala wrote:

>>> Issues i face:
>>> 1. Does the drive-backup stall for the whole time the block job is in
>>> progress. This is a strict no for me. I didnot find any documentation
>>> regarding it but a powerpoint presentation(from kaskyapc) mentioning it.
>>> (Assuming yes!)
>>
>> The drive-backup is running in parallel to the guest.  I'm not sure what
>> stalls you are seeing - but as qemu is doing all the work, it DOES have
>> to service both guest requests and the work to copy out the backup;
>> also, if you have known-inefficient lseek() situations, there may be
>> cases where qemu is doing a lousy job (there's work underway on the list
>> to improve qemu's caching of lseek() data).
>>
>>
> Eric, I watched your kvm forum video
> https://www.youtube.com/watch?v=zQK5ANionpU. Which cleared out somethings
> for me. Lets say you have a disk of size 10GB, I had assumed that, if
> drive-backup has copied till 2 gb offset, that wouldnt qemu have to stall
> writes coming from guest b/w 2gb and 10gb ? Unless qemu does some internal
> qcow snapshoting at the start of the backup job and committing at the end.
> But if i get it correctly from what you explained, qemu doesnot create a
> new qcow file, but when a write comes from the guest to the live image, old
> block is first written to the backup synchronously before writing new data
> to the live qcow2 file. This would not stall the writes, but this would
> slow down the writes of the guest, as an extra write to target file on
> secondary storage(over nfs) has to happen first. If the old block write to
> nfs fails, does backup fail with on-target-error appropriately set? or does
> it stall the guest write ?

You have various knobs to control what happens on write failures, both
on the source and on the destination (on-source-error and
on-target-error) as well as how synchronized the image will be
(MirrorCopyMode of background vs. write-blocking - but only since 3.0).
Between those knobs, you should be able to control whether a failure to
write to the backup image halts the guest or merely halts the job.  But
yes, I/O issued by the guest to a cluster currently being serviced by
the backup code can result in longer write completion times from the
guest's perspective on those clusters.
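
Roughly, those knobs are just extra drive-backup arguments (a sketch; the
values are illustrative, and note that the copy-mode knob belongs to the
mirror family of jobs rather than to drive-backup):

    { "execute": "drive-backup",
      "arguments": { "device": "drive1", "sync": "incremental",
                     "bitmap": "bitmap1", "mode": "existing",
                     "target": "/nfs/vm1/drive1", "format": "qcow2",
                     "on-source-error": "report",
                     "on-target-error": "stop" } }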

> 
> 
>>> 2. Is the backup consistent? Are the drive file-systems quiesced on
>> backup?
>>> (Assuming no!)
>>
>> If you want the file systems quiesced on backup, then merely bracket
>> your transaction that kicks off the drive-backup inside guest-agent
>> commands that freeze and thaw the disk.  So, consistency is not default
>> (because it requires trusting the guest), but is possible.
>>
>>
> Ok. Method 2 below would not even be required if both the above issues can
> be solved.
> 

>>>
>>> *I cannot do this because drive-backup doesnot allow bitmap and node that
>>> the bitmap is attached to, to be different. :( *
>>
>> It might, as long as the bitmap is found on the backing chain (I'm a bit
>> fuzzier on that case, but KNOW that for pull-mode backups, my libvirt
>> code is definitely relying on being able to access the bitmap from the
>> backing file of the BDS being exported over NBD).
>>
>>
> Sorry. I dont get this. So lets say this was the drive-1 I had. A(raw) <---
> B (qcow2) . @suman(cc'ed) created a bitmap(bitmap1) on device:drive-1 ,
> then took a snapshot of it. At this point the chain would be something like
> A(raw) <-- B(qcow2 -  snapshot)  <--- C(qcow2 - live). Would the bitmap
> that was created on drive-1 still be attached to #nodeB or would it be
> attached to #nodeC. Would it have all the dirty blocks from "bitmap-add to
> now" or would it only have dirty blocks from "bitmap-add to snapshot".
> If the bitmap's now attached to live drive-1( i.e, nodeC) it would have all
> the dirty blocks, but then can i do a drive-backup(bitmap1, src=#nodeB).

We are still exploring how external snapshots should interact with
bitmaps (the low level building blocks may or may not already be present
in qemu 3.1, but libvirt certainly hasn't been coded to use them to
actually prove what works, as I'm still struggling to get the
incremental backups without external snapshot code in libvirt first). At
the moment, when you create nodeC, the bitmap in node B effectively
becomes read-only (no more writes to nodeB, so the bitmap doesn't change
content). You can, at the time you create nodeC but before wiring it
into the chain using blockdev-add, also create another bitmap living in
nodeC, such that when you then perform the snapshots, writes to nodeC
are tracked in the new bitmap.  To track all changes from the time that
bitmap1 was first created, you'd need to be able to merge the bits set
in bitmap1 of nodeB plus the bits set in the bitmap in nodeC.  Qemu does
not automatically move bitmaps from one image to another, so it really
does boil down to whether we have enough other mechanisms for merging
bitmaps from cross-image sources.
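
A rough sketch of the "piece the bitmaps together" idea (assuming a qemu
new enough that block-dirty-bitmap-merge is stable and can take sources
from another node, i.e. the 4.0-era spellings; all names here are
illustrative):

    # at snapshot time, start tracking writes to the new top image
    { "execute": "block-dirty-bitmap-add",
      "arguments": { "node": "nodeC", "name": "bitmap2" } }

    # later, to compute "everything dirtied since bitmap1 was created",
    # merge the frozen bitmap in nodeB and the live bitmap in nodeC into a
    # scratch bitmap
    { "execute": "block-dirty-bitmap-add",
      "arguments": { "node": "nodeC", "name": "since-backup",
                     "disabled": true } }
    { "execute": "block-dirty-bitmap-merge",
      "arguments": { "node": "nodeC", "target": "since-backup",
                     "bitmaps": [ "bitmap2",
                                  { "node": "nodeB", "name": "bitmap1" } ] } }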

> 
> If the bitmap stays attached to ( nodeB), it would have only dirty blocks
> till the point snapshot C is created. But this is a problem, as a backup
> workflow/program shouldnot restrict users from creating other snapshots.

Not a problem if you also create a bitmap every time you take an
external snapshot, and then piece together bitmaps as needed to collect
all changes between the point in time of interest and the present.

> Backup workflow can take additional snapshots as done in method2 above if
> it wants, and then remove the snapshot once the backup job is done. I guess
> this problem would be there for the pull based model as well. I am
> currently trying my workflow on an rhev cluster, and i donot want my backup
> workflow to interfere with snapshots triggered from rhevm/ovirt.

"Incremental backup" means only the data that changed since the last
backup (which can either be done via a single bitmap or by treating all
external snapshot creation operations as a backup point in time);
"differential backup" is the more powerful term that means tracking
MULTIPLE points in time (in my libvirt code, by having a chain of
multiple bitmaps, and then piecing together the right set of bitmaps as
needed).  But yes, it sounds like you want differential backups, by
piecing together bitmaps over multiple points in time, and where you
take care to freeze one bitmap and create a new one at any point in time
where you want to be able to track changes since that point in time
(whether kicking off a backup job, or doing an external snapshot).


>> To
>>> either fail the whole backup or succeed(when multiple drives are
>> present),
>>> i can use completion-mode = grouped. But then i cant combine them as its
>>> not supported. i.e, do a
>>>     Transaction{drive-backup(drive1), dirty-bitmap-add(drive1,
>>> bitmap1),drive-backup(drive2), dirty-bitmap-add(drive2, bitmap1),
>>> completion-mode=grouped}.
>>
>> What error message are you getting?  I'm not surprised if
>> completion-mode=grouped isn't playing nicely with bitmaps in
>> transactions, although that should be something that we should fix.
>>
> 
> error says grouped completion-mode not allowed with command
> 'drity-bitmap-add'
> 

The other thing to consider is whether you really need
completion-mode=grouped, or whether you can instead use push-mode
backups with a temporary bitmap.  But again, that won't help you prior
to qemu 3.1, where you don't have easy access to creating/merging
bitmaps on the fly.  The approach I'm using in libvirt is that since a
successful push-mode backup in qemu destroys the old state of the bitmap,
I instead create a temporary bitmap, merge the real bitmap into the
temporary bitmap (in a transaction), then kick off the backup job.  If
the backup job succeeds, delete the temporary bitmap, all is well; if it
fails, then merge the temporary bitmap back into the real bitmap.  At
the end of the day, by managing the bitmaps myself instead of letting
qemu auto-manage them, I did not have to rely on completion-mode=grouped
in order to get sane failure handling of push-mode backups across
multiple disks.  (Well, truth be told, that's the part of the libvirt
code that I did NOT have working at KVM Forum, and I still have not
posted a working demo to the libvirt list in the meantime - so far,
I have only demo'd pull-mode backups, and not push-mode, because I am
still playing with how libvirt will make push-mode work reliably.)
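
For reference, a rough QMP rendering of that bitmap juggling (using the
4.0 spellings; the 3.1 x-* variants are spelled differently, and this is a
sketch of the idea rather than the exact sequence libvirt will use):

    # before the backup: make a safety copy of the bitmap's current state
    { "execute": "transaction",
      "arguments": { "actions": [
        { "type": "block-dirty-bitmap-add",
          "data": { "node": "drive1", "name": "bitmap1-tmp" } },
        { "type": "block-dirty-bitmap-merge",
          "data": { "node": "drive1", "target": "bitmap1-tmp",
                    "bitmaps": [ "bitmap1" ] } } ] } }

    # then kick off the push-mode incremental backup as usual; on success,
    # simply drop the temporary bitmap:
    { "execute": "block-dirty-bitmap-remove",
      "arguments": { "node": "drive1", "name": "bitmap1-tmp" } }

    # on failure, restore the real bitmap from the safety copy, then drop it:
    { "execute": "block-dirty-bitmap-merge",
      "arguments": { "node": "drive1", "target": "bitmap1",
                     "bitmaps": [ "bitmap1-tmp" ] } }
    { "execute": "block-dirty-bitmap-remove",
      "arguments": { "node": "drive1", "name": "bitmap1-tmp" } }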

>>> 3. Is there a way pre 2.12 to achieve auto-finalise = false in a
>>> transaction. Can I somehow add a dummy block job, that will only finish
>>> when i want to finalise the actual 2 disks block jobs? My backup workflow
>>> needs to run on env's pre 2.12.
>>
>> Ouch - backups pre-2.12 have issues.  If I had not read this paragraph,
>> my recommendation would be to stick to 3.1 and use pull-mode backups
>> (where you use NBD to learn which portions of the image were dirtied,
>> and pull those portions of the disk over NBD rather than qemu pushing
>> them); I even have a working demo of preliminary libvirt code driving
>> that which I presented at last year's KVM Forum.
>>
> 
> What do you mean by issues? Do you mean any data/corruption bugs or lack of
> some nice functionality that we are talking here?

Lack of functionality.  In particular, the 4.0 commands
block-dirty-bitmap-{enable,merge,disable} (or their 3.1 counterparts
x-block-dirty-bitmap-*) are essential to the workflow of differential
backups (without being able to manage bitmaps yourself, you can only get
the weaker incremental backup, which means qemu itself is clearing the
bitmap out from under your feet on success, and you have to worry about
completion-mode=grouped).

> 
> Thanks a lot Eric for spending your time in answering my queries. I dont
> know if you work with Kashyap Chamarthy, but your help and his blogs are
> lifesavers.

Yes, Kashyap is trying to build solutions on top of the building blocks
that I am working on, so we have collaborated several times on these
types of issues (he does a lot better at blog posts extracted from my
mailing list brain dumps).

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-01-22 19:29 [Qemu-devel] Incremental drive-backup with dirty bitmaps Bharadwaj Rayala
  2019-01-22 21:54 ` Eric Blake
@ 2019-01-24  8:57 ` Kashyap Chamarthy
  1 sibling, 0 replies; 8+ messages in thread
From: Kashyap Chamarthy @ 2019-01-24  8:57 UTC (permalink / raw)
  To: Bharadwaj Rayala; +Cc: qemu-discuss, qemu-devel, Suman Swaroop, kashyap.cv

On Wed, Jan 23, 2019 at 12:59:27AM +0530, Bharadwaj Rayala wrote:

[...]

Eric has responded with excellent detail, as usual; a "meta question"
below.

> I am trying to build a backup workflow(program) using drive-backup along
> with dirty bitmaps to take backups of kvm vms. 

Is this program that you're building just specific to your environment?
Or is it a generic open source project (or proprietary product)?

> EIther pull/push model works for me. Since drive-backup push model is
> already implemented, I am going forward with it. I am not able to
> figure out a few details and couldn't find any documentation around
> it. Any help would be appreciated

[...]


-- 
/kashyap


* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-01-23 19:09     ` Eric Blake
@ 2019-01-24  9:16       ` Kashyap Chamarthy
       [not found]         ` <CAMAMwPA_H77fnC+dOzBt+nRQ+=oPHtpw2DRYMCEtnGdo1OU0Hw@mail.gmail.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Kashyap Chamarthy @ 2019-01-24  9:16 UTC (permalink / raw)
  To: Eric Blake
  Cc: Bharadwaj Rayala, qemu-devel, Kashyap Chamarthy, Suman Swaroop,
	John Snow, qemu-discuss

On Wed, Jan 23, 2019 at 01:09:41PM -0600, Eric Blake wrote:
> On 1/23/19 12:08 PM, Bharadwaj Rayala wrote:

[...] # [Snip Eric's excellent exposition.]

> > What do you mean by issues? Do you mean any data/corruption bugs or lack of
> > some nice functionality that we are talking here?
> 
> Lack of functionality.  In particular, the 4.0 commands
> block-dirty-bitmap-{enable,merge,disable} (or their 3.1 counterparts
> x-block-dirty-bitmap-*) are essential to the workflow of differential
> backups (without being able to manage bitmaps yourself, you can only get
> the weaker incremental backup, and that means qemu itself is clearing
> the bitmap out of under your feet on success, and where you are having
> to worry about completion-mode=grouped).
> 
> > 
> > Thanks a lot Eric for spending your time in answering my queries. I dont
> > know if you work with Kashyap Chamarthy, but your help and his blogs are
> > lifesavers.
> 
> Yes, Kashyap is trying to build solutions on top of the building blocks
> that I am working on, so we have collaborated several times on these
> types of issues (he does a lot better at blog posts extracted from my
> mailing list brain dumps).

I haven't kept up with incremental backups lately, as I've been swamped
with other work.  But two other documents that I can point to are these
[1][2] in the QEMU tree.  And their HTML-rendered versions are
here[3][4].  They're generated for 3.0.0; but these docs haven't changed
much since then.

Along with Eric's talk from last year, also check out presentations from
previous KVM Forums by other block layer maintainers.


[1] https://git.qemu.org/?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst
[2] https://git.qemu.org/?p=qemu.git;a=blob;f=docs/interop/bitmaps.rst
[3] https://kashyapc.fedorapeople.org/QEMU-Docs-v3.0.0/_build/html/docs/interop/live-block-operations.html
[4] https://kashyapc.fedorapeople.org/QEMU-Docs-v3.0.0/_build/html/docs/interop/bitmaps.html


-- 
/kashyap


* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
       [not found]         ` <CAMAMwPA_H77fnC+dOzBt+nRQ+=oPHtpw2DRYMCEtnGdo1OU0Hw@mail.gmail.com>
@ 2019-02-06 17:20           ` Suman Swaroop
  2019-02-06 17:57             ` Eric Blake
  0 siblings, 1 reply; 8+ messages in thread
From: Suman Swaroop @ 2019-02-06 17:20 UTC (permalink / raw)
  To: Bharadwaj Rayala, qemu-discuss, qemu-devel
  Cc: Kashyap Chamarthy, Eric Blake, Kashyap Chamarthy, John Snow

  Hey, some continuation questions from the above discussion:

1. Comments in the blockdev-add command section of the patch
   https://patchwork.kernel.org/patch/9638133/ say: "Note: This command is
   still a work in progress. It doesn't support all block drivers among
   other things. Stay away from it unless you want to help with its
   development." Does it work with qcow2 and raw from version 2.3?

2. When new nodes are added to a chain, there is an associated backing
   image file as well. Some of these image files become redundant when a
   node is merged with other nodes. While issuing QMP commands to qemu via
   the libvirt TLS socket, there does not seem to be any functionality
   available to delete the redundant image files. Basically, when a node is
   streamed or committed, the node and its backing image file can be
   deleted as they are no longer required. Is there any QMP command to
   achieve this, with the constraint that we only have access to the
   libvirt TLS socket and cannot ssh to the host?

3. In the filename path of the blockdev-add command, or the target path of
   the drive-backup command, can we provide an NFS URL directly for the
   image path, i.e. nfs://<ip>/foo (not a mounted directory)? This is
   related to point 2 above: we do not have root ssh access to the host,
   only access to its TLS socket.


Suman

On Thu, Jan 24, 2019 at 6:12 PM Bharadwaj Rayala <
bharadwaj.rayala@rubrik.com> wrote:

> Hi Eric, Kashyap,
>
> I work for Rubrik. I am trying to implement first class support for
> protection of rhev instances using rubrik. I want the solution to be
> generic and be a step towards protection of any kvm instance with any
> management layer(standalone kvm/rhel host, rhev/ovirt clusters, openstack
> clusters ...). But first l am just concentrating on the rhev flow. This
> backup program would be a proprietary product.
>
> I looked at the rhev recommended way for backups.
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/administration_guide/sect-backing_up_and_restoring_virtual_machines_using_the_backup_and_restore_api
> This does not make use of qemu block jobs for incremental ingest, or dirty
> bitmaps. Entire responsibility is on the backup program to take backups,
> and the best one can do is to do a block by block fingerprint on the whole
> drive and then do an incremental ingest by comparing fp with fp's of the
> base snapshot.
>
> I wanted to build something that directly talks to rhev(or any kvm) hosts
> using libvirt and use block jobs and dirty bitmaps to do CBT-like, but
> push-based, ingest. We need to support the 4.1 version of rhev, which
> contains qemu-kvm-rhev 2.6.0-28.el7_3.9, which has (almost) all the functionality
> that we need. I know that write access through libvirt is not supported by
> rhev right now [1]
> <https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html/product_guide/accessing-rhv>.
> I am trying to find a non-invasive way of accessing the qemu drive-backup
> block job through libvirt (using qemu-monitor-command; libvirt 2.0 in
> rhev 4.1 does not have drive-backup) that does not interfere with user-driven
> snapshots/interactions from rhev ui/api. I am not sure if I can continue on
> this path though, with the information Eric has provided on user driven
> snapshots. Please do let me know if you have any comments/better ideas.
>
> Currently I am talking to libvirt hosts over the TLS socket (virsh -c
> qemu+tls://rhevnode1/system), by certifying the rubrik nodes from which the
> backup is triggered using the same CA key the manager uses to certify its own nodes. I
> already read your wikis on incremental live backups and dirty bitmaps. They
> are a great find. TY
>
> Bharadwaj.
>
> On Thu, Jan 24, 2019 at 2:46 PM Kashyap Chamarthy <kchamart@redhat.com>
> wrote:
>
>> On Wed, Jan 23, 2019 at 01:09:41PM -0600, Eric Blake wrote:
>> > On 1/23/19 12:08 PM, Bharadwaj Rayala wrote:
>>
>> [...] # [Snip Eric's excellent exposition.]
>>
>> > > What do you mean by issues? Do you mean any data/corruption bugs or
>> lack of
>> > > some nice functionality that we are talking here?
>> >
>> > Lack of functionality.  In particular, the 4.0 commands
>> > block-dirty-bitmap-{enable,merge,disable} (or their 3.1 counterparts
>> > x-block-dirty-bitmap-*) are essential to the workflow of differential
>> > backups (without being able to manage bitmaps yourself, you can only get
>> > the weaker incremental backup, and that means qemu itself is clearing
>> > the bitmap out of under your feet on success, and where you are having
>> > to worry about completion-mode=grouped).
>> >
>> > >
>> > > Thanks a lot Eric for spending your time in answering my queries. I
>> dont
>> > > know if you work with Kashyap Chamarthy, but your help and his blogs
>> are
>> > > lifesavers.
>> >
>> > Yes, Kashyap is trying to build solutions on top of the building blocks
>> > that I am working on, so we have collaborated several times on these
>> > types of issues (he does a lot better at blog posts extracted from my
>> > mailing list brain dumps).
>>
>> I haven't kept up with incremental backups lately, as I've been swamped
>> with other work.  But two other documents that I can point to are these
>> [1][2] in the QEMU tree.  And their HTML-rendered versions are
>> here[3][4].  They're generated for 3.0.0; but these docs haven't changed
>> much since then.
>>
>> Along with Eric's last year talk, also check out presentations from
>> previous KVM Forums from other Block Layer maintainers.
>>
>>
>> [1]
>> https://git.qemu.org/?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst
>> [2] https://git.qemu.org/?p=qemu.git;a=blob;f=docs/interop/bitmaps.rst
>> [3]
>> https://kashyapc.fedorapeople.org/QEMU-Docs-v3.0.0/_build/html/docs/interop/live-block-operations.html
>> [4]
>> https://kashyapc.fedorapeople.org/QEMU-Docs-v3.0.0/_build/html/docs/interop/bitmaps.html
>>
>>
>> --
>> /kashyap
>>
>
>
> --
> Bharadwaj Rayala
> Software Engineer at Rubrik
> M +919618462233  E bharadwaj.rayala@rubrik.com  W www.rubrik.com
>
>


* Re: [Qemu-devel] Incremental drive-backup with dirty bitmaps
  2019-02-06 17:20           ` Suman Swaroop
@ 2019-02-06 17:57             ` Eric Blake
  0 siblings, 0 replies; 8+ messages in thread
From: Eric Blake @ 2019-02-06 17:57 UTC (permalink / raw)
  To: Suman Swaroop, Bharadwaj Rayala, qemu-discuss, qemu-devel
  Cc: Kashyap Chamarthy, Kashyap Chamarthy, John Snow


On 2/6/19 11:20 AM, Suman Swaroop wrote:
>   Hey, some continuation questions from above discussion,
> 

What above discussion?  Oh, you top-posted, so you mean the below
discussion.  (On technical lists, it's best to avoid top-posting, and to
instead reply inline to make the conversation easier to follow; it's
also okay to trim quoted parts irrelevant to your reply, as it can be
assumed that anyone joining the conversation can find the public list
archives to catch up on the full thread).

> 
>    1.
> 
>    Comments in blockdev-add command section in patch
>    https://patchwork.kernel.org/patch/9638133/ says that
>    “Note: This command is still a work in progress. It doesn't support all
>    block drivers among other things. Stay away from it unless you want to help
>    with its development.” Does it work with qcow2 and raw from version 2.3?

blockdev-add was declared stable in commit 79b7a77ed, v2.9 and beyond.
You are correct that for simpler uses, you could probably get raw and
qcow2 actions to work in 2.3, but working with older versions is more of
a downstream task so you may get less support figuring out how to target
older and newer versions simultaneously from this list.

Meanwhile, there have been enough other fixes with incremental backups
that you probably want to be using 3.1 or newer.  For that matter, a lot
of bitmap commands were still experimental in 3.1, but have been
converted to stable for the upcoming 4.0; depending on what you plan on
doing with dirty bitmaps, having to implement something twice to use
x-block-dirty-bitmap-merge from 3.1 and block-dirty-bitmap-merge from
4.0 can be a pain.

>    2.
> 
>    When new nodes are added in chain, there is an associated backing image
>    file as well. Some of these image files become redundant when a node is
>    merged with other nodes. While issuing qmp commands to qemu via libvirt tls
>    socket, there does not seem to be any functionality available to delete the
>    redundant image files. Basically when a node is streamed or committed, the
>    node and its backing image file can be deleted as they are not required
>    anymore. Is there any qmp command to achieve the same with the constraint
>    that we only have access to libvirt tls socket and cannot ssh to the host?

Trying to diagram what you are asking, to make sure I understand:

You started with:

base <- image1 <- image2

and later did a block stream to get:

base <- image2 (contents of image1 now in image2)

or a block commit to get:

base <- image2 (contents of image1 now in base)

and are trying to figure out how to delete image1 from the file system,
now that it is no longer in use by the backing chain?  Why can't you
just 'rm image1'? Or if you are using libvirt to manage the storage pool
that image1 lives in, can't you use libvirt virStorage* APIs to remove
it?  Yes, it might be nice if the libvirt APIs for external snapshot
management had a bit more power to optionally auto-delete images that
are no longer in use - but that's more a question for the libvirt list
than this list.
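
(For reference, the two operations in question look roughly like this in
QMP; the device and file names here are placeholders:

    # stream: pull image1's data up into image2, dropping image1 from the chain
    { "execute": "block-stream",
      "arguments": { "device": "drive1", "base": "/images/base.qcow2" } }

    # commit: push image1's data down into base
    { "execute": "block-commit",
      "arguments": { "device": "drive1", "top": "/images/image1.qcow2",
                     "base": "/images/base.qcow2" } }

Neither of them deletes the now-unused file; that cleanup has to happen
outside of qemu.)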

Or are you asking about a node still in qemu's memory, that would show
up via the query-block command?

If it is something owned by qemu, you can always experiment with the
libvirt backdoor of 'virsh qemu-monitor-command' to send QMP commands to
qemu that libvirt has not yet coded up official support for; but it puts
you squarely in unsupported territory (if it works, great; if it breaks,
you get to keep both pieces).

In the meantime, I've cc'd you on my v3 posting of what I hope will
stabilize into the libvirt incremental backup APIs in time for libvirt
5.1 (I'm down to a couple of weeks, with still quite a few things to
shake out).

>    3.
> 
>    In filename path of blockdev-add command or target path of drive-backup
>    command can we provide nfs urld directly for the image path i.e
>    nfs://<ip>/foo(not a mounted directory)? This is related to the point 2
>    above that we do not have root ssh access to the host but only access to
>    its tls socket

The point of blockdev-add is that you should supply the parameters
needed by a particular driver.  An image path that can be short-cutted
to nfs:// for qemu-img also has a long form described by
BlockdevOptionsNfs (present since 2.9); so where you use:

{ "driver": "file", "filename": "/path/to/local" }

to get the file driver, you would instead use:

{ "driver": "nfs", "server": "<ip>", "path": "foo", ... }

for an NFS access.  If you are using libvirt to manage qemu, then
libvirt should be able to do all this on your behalf - except that
getting libvirt to use blockdev-add has been a multi-year project, and
Peter Krempa is still producing patches that will hopefully land in
libvirt 5.1 along those lines.  But again, libvirt questions may be
better asked on the libvirt list.
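
Putting that together, a full blockdev-add of a qcow2 image living on an
NFS export might look roughly like this (a sketch; the node-name, server
address, and path are placeholders):

    { "execute": "blockdev-add",
      "arguments": {
        "driver": "qcow2",
        "node-name": "backup-tgt",
        "file": { "driver": "nfs",
                  "server": { "type": "inet", "host": "192.0.2.10" },
                  "path": "/export/vm1/drive1.qcow2" } } }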

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


