* towards a workable O_DIRECT outmigration to a file
@ 2022-08-11 13:47 Nikolay Borisov
  2022-08-11 14:10 ` Nikolay Borisov
  2022-09-08 10:26 ` [PATCH] migration: support file: uri for source migration Nikolay Borisov
  0 siblings, 2 replies; 14+ messages in thread
From: Nikolay Borisov @ 2022-08-11 13:47 UTC (permalink / raw)
  To: qemu-devel; +Cc: Claudio Fontana, Jim Fehlig

Hello,

I'm currently looking into implementing a 'file:' uri for migration save
in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
process of brainstorming what a solution would look like, a couple of
questions transpired that I think warrant wider discussion in the community.

First, implementing a solution which is self-contained within qemu would
be easy enough (famous last words), but the gist is that one only has to
care about the format within qemu. However, I'm being told that what
libvirt does is prepend its own custom header to the resulting saved
file and then slipstream the migration stream from qemu after it. With
the solution that I envision I intend to keep all write-related logic
inside qemu, which means there's no way to incorporate libvirt's logic.
The reason I'd like to keep the write process within qemu is to avoid an
extra copy of data between the two processes (qemu's outgoing migration
and libvirt): with the current fd approach qemu is passed an fd, data is
copied between qemu and libvirt, and finally the libvirt_iohelper writes
the data. So the question which remains to be answered is how libvirt
would make use of this new functionality in qemu. I was thinking
something along the lines of:

1. Qemu writes its migration stream to a file, ideally on a filesystem
which supports reflink (xfs/btrfs).

2. Libvirt writes its header to a separate file.
2.1 Reflinks qemu's stream right after its header.
2.2 Writes its trailer.

3. Unlink() qemu's file; now only libvirt's file remains on-disk.

I wouldn't call this solution hacky, though it definitely leaves some
bitter aftertaste.
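For illustration, a rough sketch of what step 2.1 could look like on the
libvirt side, assuming the header has already been written and all
offsets/lengths are block-aligned (the function and variable names are
hypothetical; only FICLONERANGE itself is an existing kernel interface):

#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <linux/fs.h>        /* FICLONERANGE, struct file_clone_range */

/*
 * Clone qemu's migration stream into libvirt's save file right after
 * the already-written libvirt header.  FICLONERANGE requires the
 * offsets and length to be filesystem-block aligned, which is where
 * the bitter aftertaste (padding the header) comes from.
 */
static int reflink_qemu_stream(int libvirt_fd, int qemu_fd,
                               off_t header_end, off_t stream_len)
{
    struct file_clone_range range = {
        .src_fd      = qemu_fd,
        .src_offset  = 0,
        .src_length  = stream_len,
        .dest_offset = header_end,
    };

    if (ioctl(libvirt_fd, FICLONERANGE, &range) == -1) {
        perror("FICLONERANGE");
        return -1;
    }
    return 0;
}

Once this succeeds, libvirt would append its trailer and unlink() qemu's
temporary file (step 3).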


Another solution would be to extend the 'fd:' protocol to allow multiple
descriptors (for multifd support) to be passed in. The reason dup()
can't be used is that multifd requires writing to multiple,
non-overlapping regions of the file, and duplicated fds share their file
offset. But that really seems more or less hacky. Alternatively,
pwrite() could be used to write to non-overlapping regions of the file
(a small sketch of this is below). Any feedback is welcome.
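To make the pwrite() variant concrete, a minimal sketch - the per-channel
region layout here is purely an assumption of mine, not something qemu
defines today:

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Each multifd channel owns a fixed, non-overlapping region of the
 * output file.  pwrite() does not touch the (possibly shared) file
 * offset, so even dup()'ed fds would work with this scheme.
 */
static ssize_t channel_write(int fd, unsigned channel, off_t region_size,
                             off_t written_so_far,
                             const void *buf, size_t len)
{
    off_t off = (off_t)channel * region_size + written_so_far;

    return pwrite(fd, buf, len, off);
}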


Regards,
Nikolay



* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-11 13:47 towards a workable O_DIRECT outmigration to a file Nikolay Borisov
@ 2022-08-11 14:10 ` Nikolay Borisov
  2022-08-18 12:38   ` Dr. David Alan Gilbert
  2022-09-08 10:26 ` [PATCH] migration: support file: uri for source migration Nikolay Borisov
  1 sibling, 1 reply; 14+ messages in thread
From: Nikolay Borisov @ 2022-08-11 14:10 UTC (permalink / raw)
  To: qemu-devel; +Cc: Claudio Fontana, Jim Fehlig, quintela, dgilbert

[adding Juan and David to cc as I had missed them. ]

On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
> Hello,
> 
> I'm currently looking into implementing a 'file:' uri for migration save 
> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of 
> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the 
> process of brainstorming how a solution would like the a couple of 
> questions transpired that I think warrant wider discussion in the 
> community.
> 
> First, implementing a solution which is self-contained within qemu would 
> be easy enough( famous last words) but the gist is one  has to only care 
> about the format within qemu. However, I'm being told that what libvirt 
> does is prepend its own custom header to the resulting saved file, then 
> slipstreams the migration stream from qemu. Now with the solution that I 
> envision I intend to keep all write-related logic inside qemu, this 
> means there's no way to incorporate the logic of libvirt. The reason I'd 
> like to keep the write process within qemu is to avoid an extra copy of 
> data between the two processes (qemu outging migration and libvirt), 
> with the current fd approach qemu is passed an fd, data is copied 
> between qemu/libvirt and finally the libvirt_iohelper writes the data. 
> So the question which remains to be answered is how would libvirt make 
> use of this new functionality in qemu? I was thinking something along 
> the lines of :
> 
> 1. Qemu writes its migration stream to a file, ideally on a filesystem 
> which supports reflink - xfs/btrfs
> 
> 2. Libvirt writes it's header to a separate file
> 2.1 Reflinks the qemu's stream right after its header
> 2.2 Writes its trailer
> 
> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
> 
> I wouldn't call this solution hacky though it definitely leaves some 
> bitter aftertaste.
> 
> 
> Another solution would be to extend the 'fd:' protocol to allow multiple 
> descriptors (for multifd) support to be passed in. The reason dup() 
> can't be used is because in order for multifd to be supported it's 
> required to be able to write to multiple, non-overlapping regions of the 
> file. And duplicated fd's share their offsets etc. But that really seems 
> more or less hacky. Alternatively it's possible that pwrite() are used 
> to write to non-overlapping regions in the file. Any feedback is welcomed.
> 
> 
> Regards,
> Nikolay



* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-11 14:10 ` Nikolay Borisov
@ 2022-08-18 12:38   ` Dr. David Alan Gilbert
  2022-08-18 12:52     ` Claudio Fontana
  0 siblings, 1 reply; 14+ messages in thread
From: Dr. David Alan Gilbert @ 2022-08-18 12:38 UTC (permalink / raw)
  To: Nikolay Borisov, berrange
  Cc: qemu-devel, Claudio Fontana, Jim Fehlig, quintela

* Nikolay Borisov (nborisov@suse.com) wrote:
> [adding Juan and David to cc as I had missed them. ]

Hi Nikolay,

> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
> > Hello,
> > 
> > I'm currently looking into implementing a 'file:' uri for migration save
> > in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
> > the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
> > process of brainstorming how a solution would like the a couple of
> > questions transpired that I think warrant wider discussion in the
> > community.

OK, so this seems to be a continuation of the discussion with Claudio
and Daniel and co from a few months back.  I'd definitely be leaving the
libvirt side of the question here to Dan, and that also means definitely
looking at that tree above.

> > First, implementing a solution which is self-contained within qemu would
> > be easy enough( famous last words) but the gist is one  has to only care
> > about the format within qemu. However, I'm being told that what libvirt
> > does is prepend its own custom header to the resulting saved file, then
> > slipstreams the migration stream from qemu. Now with the solution that I
> > envision I intend to keep all write-related logic inside qemu, this
> > means there's no way to incorporate the logic of libvirt. The reason I'd
> > like to keep the write process within qemu is to avoid an extra copy of
> > data between the two processes (qemu outging migration and libvirt),
> > with the current fd approach qemu is passed an fd, data is copied
> > between qemu/libvirt and finally the libvirt_iohelper writes the data.
> > So the question which remains to be answered is how would libvirt make
> > use of this new functionality in qemu? I was thinking something along
> > the lines of :
> > 
> > 1. Qemu writes its migration stream to a file, ideally on a filesystem
> > which supports reflink - xfs/btrfs
> > 
> > 2. Libvirt writes it's header to a separate file
> > 2.1 Reflinks the qemu's stream right after its header
> > 2.2 Writes its trailer
> > 
> > 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
> > 
> > I wouldn't call this solution hacky though it definitely leaves some
> > bitter aftertaste.

Wouldn't it be simpler to tell libvirt to write its header, then tell
qemu to append everything?

> > Another solution would be to extend the 'fd:' protocol to allow multiple
> > descriptors (for multifd) support to be passed in. The reason dup()
> > can't be used is because in order for multifd to be supported it's
> > required to be able to write to multiple, non-overlapping regions of the
> > file. And duplicated fd's share their offsets etc. But that really seems
> > more or less hacky. Alternatively it's possible that pwrite() are used
> > to write to non-overlapping regions in the file. Any feedback is
> > welcomed.

I do like the idea of letting fd: take multiple fds.

Dave

> > 
> > 
> > Regards,
> > Nikolay
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 12:38   ` Dr. David Alan Gilbert
@ 2022-08-18 12:52     ` Claudio Fontana
  2022-08-18 16:31       ` Dr. David Alan Gilbert
  0 siblings, 1 reply; 14+ messages in thread
From: Claudio Fontana @ 2022-08-18 12:52 UTC (permalink / raw)
  To: Dr. David Alan Gilbert, Nikolay Borisov, berrange
  Cc: qemu-devel, Claudio Fontana, Jim Fehlig, quintela

On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
> * Nikolay Borisov (nborisov@suse.com) wrote:
>> [adding Juan and David to cc as I had missed them. ]
> 
> Hi Nikolay,
> 
>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
>>> Hello,
>>>
>>> I'm currently looking into implementing a 'file:' uri for migration save
>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
>>> process of brainstorming how a solution would like the a couple of
>>> questions transpired that I think warrant wider discussion in the
>>> community.
> 
> OK, so this seems to be a continuation with Claudio and Daniel and co as
> of a few months back.  I'd definitely be leaving libvirt sides of the
> question here to Dan, and so that also means definitely looking at that
> tree above.

Hi Dave, yes, Nikolay is trying to continue on the qemu side.

We have something working with libvirt for our short-term needs which offers good performance,
but it is clear that that simple solution is barred from upstream libvirt merging.


> 
>>> First, implementing a solution which is self-contained within qemu would
>>> be easy enough( famous last words) but the gist is one  has to only care
>>> about the format within qemu. However, I'm being told that what libvirt
>>> does is prepend its own custom header to the resulting saved file, then
>>> slipstreams the migration stream from qemu. Now with the solution that I
>>> envision I intend to keep all write-related logic inside qemu, this
>>> means there's no way to incorporate the logic of libvirt. The reason I'd
>>> like to keep the write process within qemu is to avoid an extra copy of
>>> data between the two processes (qemu outging migration and libvirt),
>>> with the current fd approach qemu is passed an fd, data is copied
>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
>>> So the question which remains to be answered is how would libvirt make
>>> use of this new functionality in qemu? I was thinking something along
>>> the lines of :
>>>
>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
>>> which supports reflink - xfs/btrfs
>>>
>>> 2. Libvirt writes it's header to a separate file
>>> 2.1 Reflinks the qemu's stream right after its header
>>> 2.2 Writes its trailer
>>>
>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
>>>
>>> I wouldn't call this solution hacky though it definitely leaves some
>>> bitter aftertaste.
> 
> Wouldn't it be simpler to tell libvirt to write it's header, then tell
> qemu to append everything?

I would think so as well. 

> 
>>> Another solution would be to extend the 'fd:' protocol to allow multiple
>>> descriptors (for multifd) support to be passed in. The reason dup()
>>> can't be used is because in order for multifd to be supported it's
>>> required to be able to write to multiple, non-overlapping regions of the
>>> file. And duplicated fd's share their offsets etc. But that really seems
>>> more or less hacky. Alternatively it's possible that pwrite() are used
>>> to write to non-overlapping regions in the file. Any feedback is
>>> welcomed.
> 
> I do like the idea of letting fd: take multiple fd's.

Fine in my view; I think we will then still need a helper process in libvirt to merge the data into a single file, no?
In case the libvirt multifd-to-single-file multithreaded helper I proposed before is helpful as a reference, you could reuse/modify those patches.

Maybe this new way will be acceptable to libvirt,
i.e. avoiding the multifd code -> socket path, but still merging the data from the multiple fds into a single file?

> 
> Dave
> 

Thanks for your comments,

Claudio
>>>
>>>
>>> Regards,
>>> Nikolay
>>




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 12:52     ` Claudio Fontana
@ 2022-08-18 16:31       ` Dr. David Alan Gilbert
  2022-08-18 18:09         ` Claudio Fontana
  2022-08-18 18:13         ` Claudio Fontana
  0 siblings, 2 replies; 14+ messages in thread
From: Dr. David Alan Gilbert @ 2022-08-18 16:31 UTC (permalink / raw)
  To: Claudio Fontana
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

* Claudio Fontana (cfontana@suse.de) wrote:
> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
> > * Nikolay Borisov (nborisov@suse.com) wrote:
> >> [adding Juan and David to cc as I had missed them. ]
> > 
> > Hi Nikolay,
> > 
> >> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
> >>> Hello,
> >>>
> >>> I'm currently looking into implementing a 'file:' uri for migration save
> >>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
> >>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
> >>> process of brainstorming how a solution would like the a couple of
> >>> questions transpired that I think warrant wider discussion in the
> >>> community.
> > 
> > OK, so this seems to be a continuation with Claudio and Daniel and co as
> > of a few months back.  I'd definitely be leaving libvirt sides of the
> > question here to Dan, and so that also means definitely looking at that
> > tree above.
> 
> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
> 
> We have something working with libvirt for our short term needs which offers good performance,
> but it is clear that that simple solution is barred for upstream libvirt merging.
> 
> 
> > 
> >>> First, implementing a solution which is self-contained within qemu would
> >>> be easy enough( famous last words) but the gist is one  has to only care
> >>> about the format within qemu. However, I'm being told that what libvirt
> >>> does is prepend its own custom header to the resulting saved file, then
> >>> slipstreams the migration stream from qemu. Now with the solution that I
> >>> envision I intend to keep all write-related logic inside qemu, this
> >>> means there's no way to incorporate the logic of libvirt. The reason I'd
> >>> like to keep the write process within qemu is to avoid an extra copy of
> >>> data between the two processes (qemu outging migration and libvirt),
> >>> with the current fd approach qemu is passed an fd, data is copied
> >>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
> >>> So the question which remains to be answered is how would libvirt make
> >>> use of this new functionality in qemu? I was thinking something along
> >>> the lines of :
> >>>
> >>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
> >>> which supports reflink - xfs/btrfs
> >>>
> >>> 2. Libvirt writes it's header to a separate file
> >>> 2.1 Reflinks the qemu's stream right after its header
> >>> 2.2 Writes its trailer
> >>>
> >>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
> >>>
> >>> I wouldn't call this solution hacky though it definitely leaves some
> >>> bitter aftertaste.
> > 
> > Wouldn't it be simpler to tell libvirt to write it's header, then tell
> > qemu to append everything?
> 
> I would think so as well. 
> 
> > 
> >>> Another solution would be to extend the 'fd:' protocol to allow multiple
> >>> descriptors (for multifd) support to be passed in. The reason dup()
> >>> can't be used is because in order for multifd to be supported it's
> >>> required to be able to write to multiple, non-overlapping regions of the
> >>> file. And duplicated fd's share their offsets etc. But that really seems
> >>> more or less hacky. Alternatively it's possible that pwrite() are used
> >>> to write to non-overlapping regions in the file. Any feedback is
> >>> welcomed.
> > 
> > I do like the idea of letting fd: take multiple fd's.
> 
> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.

Eww, that's messy, isn't it?
(You don't fancy a huge sparse file, do you?)

> Maybe this new way will be acceptable to libvirt,
> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?

It feels to me like the problem here is that what we really want is
something closer to a dump than the migration code; you don't need all
the overhead of the code that deals with live migration bitmaps and
dirty pages that aren't going to happen.
Something that just does a nice single write(2) for each memory region,
and then ties the device state on.

Dave

> > 
> > Dave
> > 
> 
> Thanks for your comments,
> 
> Claudio
> >>>
> >>>
> >>> Regards,
> >>> Nikolay
> >>
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 16:31       ` Dr. David Alan Gilbert
@ 2022-08-18 18:09         ` Claudio Fontana
  2022-08-18 18:45           ` Claudio Fontana
  2022-08-18 18:49           ` Dr. David Alan Gilbert
  2022-08-18 18:13         ` Claudio Fontana
  1 sibling, 2 replies; 14+ messages in thread
From: Claudio Fontana @ 2022-08-18 18:09 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

On 8/18/22 18:31, Dr. David Alan Gilbert wrote:
> * Claudio Fontana (cfontana@suse.de) wrote:
>> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
>>> * Nikolay Borisov (nborisov@suse.com) wrote:
>>>> [adding Juan and David to cc as I had missed them. ]
>>>
>>> Hi Nikolay,
>>>
>>>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
>>>>> Hello,
>>>>>
>>>>> I'm currently looking into implementing a 'file:' uri for migration save
>>>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
>>>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
>>>>> process of brainstorming how a solution would like the a couple of
>>>>> questions transpired that I think warrant wider discussion in the
>>>>> community.
>>>
>>> OK, so this seems to be a continuation with Claudio and Daniel and co as
>>> of a few months back.  I'd definitely be leaving libvirt sides of the
>>> question here to Dan, and so that also means definitely looking at that
>>> tree above.
>>
>> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
>>
>> We have something working with libvirt for our short term needs which offers good performance,
>> but it is clear that that simple solution is barred for upstream libvirt merging.
>>
>>
>>>
>>>>> First, implementing a solution which is self-contained within qemu would
>>>>> be easy enough( famous last words) but the gist is one  has to only care
>>>>> about the format within qemu. However, I'm being told that what libvirt
>>>>> does is prepend its own custom header to the resulting saved file, then
>>>>> slipstreams the migration stream from qemu. Now with the solution that I
>>>>> envision I intend to keep all write-related logic inside qemu, this
>>>>> means there's no way to incorporate the logic of libvirt. The reason I'd
>>>>> like to keep the write process within qemu is to avoid an extra copy of
>>>>> data between the two processes (qemu outging migration and libvirt),
>>>>> with the current fd approach qemu is passed an fd, data is copied
>>>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
>>>>> So the question which remains to be answered is how would libvirt make
>>>>> use of this new functionality in qemu? I was thinking something along
>>>>> the lines of :
>>>>>
>>>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
>>>>> which supports reflink - xfs/btrfs
>>>>>
>>>>> 2. Libvirt writes it's header to a separate file
>>>>> 2.1 Reflinks the qemu's stream right after its header
>>>>> 2.2 Writes its trailer
>>>>>
>>>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
>>>>>
>>>>> I wouldn't call this solution hacky though it definitely leaves some
>>>>> bitter aftertaste.
>>>
>>> Wouldn't it be simpler to tell libvirt to write it's header, then tell
>>> qemu to append everything?
>>
>> I would think so as well. 
>>
>>>
>>>>> Another solution would be to extend the 'fd:' protocol to allow multiple
>>>>> descriptors (for multifd) support to be passed in. The reason dup()
>>>>> can't be used is because in order for multifd to be supported it's
>>>>> required to be able to write to multiple, non-overlapping regions of the
>>>>> file. And duplicated fd's share their offsets etc. But that really seems
>>>>> more or less hacky. Alternatively it's possible that pwrite() are used
>>>>> to write to non-overlapping regions in the file. Any feedback is
>>>>> welcomed.
>>>
>>> I do like the idea of letting fd: take multiple fd's.
>>
>> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
>> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.
> 
> Eww that's messy isn't it.
> (You don't fancy a huge sparse file do you?)

Wait, am I missing something obvious here?

Maybe we don't need any extra libvirt process.

Why don't we open the _single_ file multiple times from libvirt?

Let's say the "main channel" fd is opened and we write the libvirt header,
then we reopen the same file multiple times,
and finally pass all of the fds to qemu, one fd for each parallel transfer channel we want to use
(so we solve all the permissions, security labels issues etc).

And then from QEMU we can write to those fds at the right offsets for each separate channel,
which is easier from QEMU because we can know exactly how much data we need to transfer before starting the migration,
so we have even less need for "holes", possibly only minor ones for single-byte adjustments
for an uneven division of the interleaved file. Roughly something like the sketch below.
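A minimal sketch of the libvirt-side part of this idea; everything here
is illustrative (the number of channels, the open flags, and how the
fds would actually be handed over to qemu are assumptions, not existing
code):

#define _GNU_SOURCE          /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>

#define N_CHANNELS 4

/*
 * Open the already-created save file once per parallel channel, after
 * the libvirt header has been written through the "main channel" fd.
 * The extra fds would then be passed to qemu (e.g. via the fd:
 * mechanism / SCM_RIGHTS) so each channel writes at its own offset.
 */
static int open_channel_fds(const char *path, int fds[N_CHANNELS])
{
    for (int i = 0; i < N_CHANNELS; i++) {
        fds[i] = open(path, O_WRONLY | O_DIRECT);
        if (fds[i] < 0) {
            while (--i >= 0) {
                close(fds[i]);
            }
            return -1;
        }
    }
    return 0;
}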

What is wrong with this one, or does anyone see some other better approach?

Thanks,

C

> 
>> Maybe this new way will be acceptable to libvirt,
>> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?
> 
> It feels to me like the problem here is really what we want is something
> closer to a dump than the migration code; you don't need all that
> overhead of the code to deal with live migration bitmaps and dirty pages
> that aren't going to happen.
> Something that just does a nice single write(2) (for each memory
> region);
> and then ties the device state on.
> 
> Dave
> 
>>>
>>> Dave
>>>
>>
>> Thanks for your comments,
>>
>> Claudio
>>>>>
>>>>>
>>>>> Regards,
>>>>> Nikolay
>>>>
>>




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 16:31       ` Dr. David Alan Gilbert
  2022-08-18 18:09         ` Claudio Fontana
@ 2022-08-18 18:13         ` Claudio Fontana
  1 sibling, 0 replies; 14+ messages in thread
From: Claudio Fontana @ 2022-08-18 18:13 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

On 8/18/22 18:31, Dr. David Alan Gilbert wrote:
> * Claudio Fontana (cfontana@suse.de) wrote:
>> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
>>> * Nikolay Borisov (nborisov@suse.com) wrote:
>>>> [adding Juan and David to cc as I had missed them. ]
>>>
>>> Hi Nikolay,
>>>
>>>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
>>>>> Hello,
>>>>>
>>>>> I'm currently looking into implementing a 'file:' uri for migration save
>>>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
>>>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
>>>>> process of brainstorming how a solution would like the a couple of
>>>>> questions transpired that I think warrant wider discussion in the
>>>>> community.
>>>
>>> OK, so this seems to be a continuation with Claudio and Daniel and co as
>>> of a few months back.  I'd definitely be leaving libvirt sides of the
>>> question here to Dan, and so that also means definitely looking at that
>>> tree above.
>>
>> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
>>
>> We have something working with libvirt for our short term needs which offers good performance,
>> but it is clear that that simple solution is barred for upstream libvirt merging.
>>
>>
>>>
>>>>> First, implementing a solution which is self-contained within qemu would
>>>>> be easy enough( famous last words) but the gist is one  has to only care
>>>>> about the format within qemu. However, I'm being told that what libvirt
>>>>> does is prepend its own custom header to the resulting saved file, then
>>>>> slipstreams the migration stream from qemu. Now with the solution that I
>>>>> envision I intend to keep all write-related logic inside qemu, this
>>>>> means there's no way to incorporate the logic of libvirt. The reason I'd
>>>>> like to keep the write process within qemu is to avoid an extra copy of
>>>>> data between the two processes (qemu outging migration and libvirt),
>>>>> with the current fd approach qemu is passed an fd, data is copied
>>>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
>>>>> So the question which remains to be answered is how would libvirt make
>>>>> use of this new functionality in qemu? I was thinking something along
>>>>> the lines of :
>>>>>
>>>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
>>>>> which supports reflink - xfs/btrfs
>>>>>
>>>>> 2. Libvirt writes it's header to a separate file
>>>>> 2.1 Reflinks the qemu's stream right after its header
>>>>> 2.2 Writes its trailer
>>>>>
>>>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
>>>>>
>>>>> I wouldn't call this solution hacky though it definitely leaves some
>>>>> bitter aftertaste.
>>>
>>> Wouldn't it be simpler to tell libvirt to write it's header, then tell
>>> qemu to append everything?
>>
>> I would think so as well. 
>>
>>>
>>>>> Another solution would be to extend the 'fd:' protocol to allow multiple
>>>>> descriptors (for multifd) support to be passed in. The reason dup()
>>>>> can't be used is because in order for multifd to be supported it's
>>>>> required to be able to write to multiple, non-overlapping regions of the
>>>>> file. And duplicated fd's share their offsets etc. But that really seems
>>>>> more or less hacky. Alternatively it's possible that pwrite() are used
>>>>> to write to non-overlapping regions in the file. Any feedback is
>>>>> welcomed.
>>>
>>> I do like the idea of letting fd: take multiple fd's.
>>
>> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
>> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.
> 
> Eww that's messy isn't it.
> (You don't fancy a huge sparse file do you?)
> 
>> Maybe this new way will be acceptable to libvirt,
>> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?
> 
> It feels to me like the problem here is really what we want is something
> closer to a dump than the migration code; you don't need all that
> overhead of the code to deal with live migration bitmaps and dirty pages

Well, yes, you are right that we don't care about live migration bitmaps and dirty pages,
but we don't incur any of that anyway since, at least for what I have in mind (virsh save and restore),
the VM is stopped.

> that aren't going to happen.
> Something that just does a nice single write(2) (for each memory
> region);
> and then ties the device state on.

Ultimately yes, it's the same thing though; whether we trigger it via migrate fd: or via another non-migration-related mechanism,
any approach would work.

Ciao,

C

> 
> Dave
> 
>>>
>>> Dave
>>>
>>
>> Thanks for your comments,
>>
>> Claudio
>>>>>
>>>>>
>>>>> Regards,
>>>>> Nikolay
>>>>
>>




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 18:09         ` Claudio Fontana
@ 2022-08-18 18:45           ` Claudio Fontana
  2022-08-18 18:49           ` Dr. David Alan Gilbert
  1 sibling, 0 replies; 14+ messages in thread
From: Claudio Fontana @ 2022-08-18 18:45 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

On 8/18/22 20:09, Claudio Fontana wrote:
> On 8/18/22 18:31, Dr. David Alan Gilbert wrote:
>> * Claudio Fontana (cfontana@suse.de) wrote:
>>> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
>>>> * Nikolay Borisov (nborisov@suse.com) wrote:
>>>>> [adding Juan and David to cc as I had missed them. ]
>>>>
>>>> Hi Nikolay,
>>>>
>>>>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
>>>>>> Hello,
>>>>>>
>>>>>> I'm currently looking into implementing a 'file:' uri for migration save
>>>>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
>>>>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
>>>>>> process of brainstorming how a solution would like the a couple of
>>>>>> questions transpired that I think warrant wider discussion in the
>>>>>> community.
>>>>
>>>> OK, so this seems to be a continuation with Claudio and Daniel and co as
>>>> of a few months back.  I'd definitely be leaving libvirt sides of the
>>>> question here to Dan, and so that also means definitely looking at that
>>>> tree above.
>>>
>>> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
>>>
>>> We have something working with libvirt for our short term needs which offers good performance,
>>> but it is clear that that simple solution is barred for upstream libvirt merging.
>>>
>>>
>>>>
>>>>>> First, implementing a solution which is self-contained within qemu would
>>>>>> be easy enough( famous last words) but the gist is one  has to only care
>>>>>> about the format within qemu. However, I'm being told that what libvirt
>>>>>> does is prepend its own custom header to the resulting saved file, then
>>>>>> slipstreams the migration stream from qemu. Now with the solution that I
>>>>>> envision I intend to keep all write-related logic inside qemu, this
>>>>>> means there's no way to incorporate the logic of libvirt. The reason I'd
>>>>>> like to keep the write process within qemu is to avoid an extra copy of
>>>>>> data between the two processes (qemu outging migration and libvirt),
>>>>>> with the current fd approach qemu is passed an fd, data is copied
>>>>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
>>>>>> So the question which remains to be answered is how would libvirt make
>>>>>> use of this new functionality in qemu? I was thinking something along
>>>>>> the lines of :
>>>>>>
>>>>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
>>>>>> which supports reflink - xfs/btrfs
>>>>>>
>>>>>> 2. Libvirt writes it's header to a separate file
>>>>>> 2.1 Reflinks the qemu's stream right after its header
>>>>>> 2.2 Writes its trailer
>>>>>>
>>>>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
>>>>>>
>>>>>> I wouldn't call this solution hacky though it definitely leaves some
>>>>>> bitter aftertaste.
>>>>
>>>> Wouldn't it be simpler to tell libvirt to write it's header, then tell
>>>> qemu to append everything?
>>>
>>> I would think so as well. 
>>>
>>>>
>>>>>> Another solution would be to extend the 'fd:' protocol to allow multiple
>>>>>> descriptors (for multifd) support to be passed in. The reason dup()
>>>>>> can't be used is because in order for multifd to be supported it's
>>>>>> required to be able to write to multiple, non-overlapping regions of the
>>>>>> file. And duplicated fd's share their offsets etc. But that really seems
>>>>>> more or less hacky. Alternatively it's possible that pwrite() are used
>>>>>> to write to non-overlapping regions in the file. Any feedback is
>>>>>> welcomed.
>>>>
>>>> I do like the idea of letting fd: take multiple fd's.
>>>
>>> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
>>> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.
>>
>> Eww that's messy isn't it.
>> (You don't fancy a huge sparse file do you?)
> 
> Wait am I missing something obvious here?
> 
> Maybe we don't need any libvirt extra process.
> 
> why don't we open the _single_ file multiple times from libvirt,
> 
> Lets say the "main channel" fd is opened, we write the libvirt header,
> then reopen again the same file multiple times,
> and finally pass all fds to qemu, one fd for each parallel transfer channel we want to use
> (so we solve all the permissions, security labels issues etc).
> 
> And then from QEMU we can write to those fds at the right offsets for each separate channel,
> which is easier from QEMU because we can know exactly how much data we need to transfer before starting the migration,
> so we have even less need for "holes", possibly only minor ones for single byte adjustments
> for uneven division of the interleaved file.

Or even better: don't pass multiple fds, just pass _one_ fd,
and then from qemu write using multiple threads and pread/pwrite, so we don't have the additional complication of managing a bunch of fds.
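As a rough sketch of what I mean on the qemu side (the worker structure
and the region ownership are purely illustrative, not existing qemu
code; each worker would be started via pthread_create()):

#include <pthread.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Several worker threads share a single output fd and use pwrite(),
 * so there is no shared file offset to coordinate.  Each worker owns
 * a disjoint region of the file.
 */
struct save_worker {
    int fd;                  /* the one shared output fd */
    const uint8_t *data;     /* this worker's chunk of the stream */
    size_t len;
    off_t file_off;          /* start of this worker's region */
};

static void *save_worker_fn(void *opaque)
{
    struct save_worker *w = opaque;
    size_t done = 0;

    while (done < w->len) {
        ssize_t n = pwrite(w->fd, w->data + done, w->len - done,
                           w->file_off + done);
        if (n < 0) {
            return (void *)(intptr_t)-1;
        }
        done += n;
    }
    return NULL;
}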

Ciao,

Claudio

> 
> What is wrong with this one, or does anyone see some other better approach?
> 
> Thanks,
> 
> C
> 
>>
>>> Maybe this new way will be acceptable to libvirt,
>>> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?
>>
>> It feels to me like the problem here is really what we want is something
>> closer to a dump than the migration code; you don't need all that
>> overhead of the code to deal with live migration bitmaps and dirty pages
>> that aren't going to happen.
>> Something that just does a nice single write(2) (for each memory
>> region);
>> and then ties the device state on.
>>
>> Dave
>>
>>>>
>>>> Dave
>>>>
>>>
>>> Thanks for your comments,
>>>
>>> Claudio
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Nikolay
>>>>>
>>>
> 




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 18:09         ` Claudio Fontana
  2022-08-18 18:45           ` Claudio Fontana
@ 2022-08-18 18:49           ` Dr. David Alan Gilbert
  2022-08-18 19:14             ` Claudio Fontana
  1 sibling, 1 reply; 14+ messages in thread
From: Dr. David Alan Gilbert @ 2022-08-18 18:49 UTC (permalink / raw)
  To: Claudio Fontana
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

* Claudio Fontana (cfontana@suse.de) wrote:
> On 8/18/22 18:31, Dr. David Alan Gilbert wrote:
> > * Claudio Fontana (cfontana@suse.de) wrote:
> >> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
> >>> * Nikolay Borisov (nborisov@suse.com) wrote:
> >>>> [adding Juan and David to cc as I had missed them. ]
> >>>
> >>> Hi Nikolay,
> >>>
> >>>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
> >>>>> Hello,
> >>>>>
> >>>>> I'm currently looking into implementing a 'file:' uri for migration save
> >>>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
> >>>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
> >>>>> process of brainstorming how a solution would like the a couple of
> >>>>> questions transpired that I think warrant wider discussion in the
> >>>>> community.
> >>>
> >>> OK, so this seems to be a continuation with Claudio and Daniel and co as
> >>> of a few months back.  I'd definitely be leaving libvirt sides of the
> >>> question here to Dan, and so that also means definitely looking at that
> >>> tree above.
> >>
> >> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
> >>
> >> We have something working with libvirt for our short term needs which offers good performance,
> >> but it is clear that that simple solution is barred for upstream libvirt merging.
> >>
> >>
> >>>
> >>>>> First, implementing a solution which is self-contained within qemu would
> >>>>> be easy enough( famous last words) but the gist is one  has to only care
> >>>>> about the format within qemu. However, I'm being told that what libvirt
> >>>>> does is prepend its own custom header to the resulting saved file, then
> >>>>> slipstreams the migration stream from qemu. Now with the solution that I
> >>>>> envision I intend to keep all write-related logic inside qemu, this
> >>>>> means there's no way to incorporate the logic of libvirt. The reason I'd
> >>>>> like to keep the write process within qemu is to avoid an extra copy of
> >>>>> data between the two processes (qemu outging migration and libvirt),
> >>>>> with the current fd approach qemu is passed an fd, data is copied
> >>>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
> >>>>> So the question which remains to be answered is how would libvirt make
> >>>>> use of this new functionality in qemu? I was thinking something along
> >>>>> the lines of :
> >>>>>
> >>>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
> >>>>> which supports reflink - xfs/btrfs
> >>>>>
> >>>>> 2. Libvirt writes it's header to a separate file
> >>>>> 2.1 Reflinks the qemu's stream right after its header
> >>>>> 2.2 Writes its trailer
> >>>>>
> >>>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
> >>>>>
> >>>>> I wouldn't call this solution hacky though it definitely leaves some
> >>>>> bitter aftertaste.
> >>>
> >>> Wouldn't it be simpler to tell libvirt to write it's header, then tell
> >>> qemu to append everything?
> >>
> >> I would think so as well. 
> >>
> >>>
> >>>>> Another solution would be to extend the 'fd:' protocol to allow multiple
> >>>>> descriptors (for multifd) support to be passed in. The reason dup()
> >>>>> can't be used is because in order for multifd to be supported it's
> >>>>> required to be able to write to multiple, non-overlapping regions of the
> >>>>> file. And duplicated fd's share their offsets etc. But that really seems
> >>>>> more or less hacky. Alternatively it's possible that pwrite() are used
> >>>>> to write to non-overlapping regions in the file. Any feedback is
> >>>>> welcomed.
> >>>
> >>> I do like the idea of letting fd: take multiple fd's.
> >>
> >> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
> >> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.
> > 
> > Eww that's messy isn't it.
> > (You don't fancy a huge sparse file do you?)
> 
> Wait am I missing something obvious here?
> 
> Maybe we don't need any libvirt extra process.
> 
> why don't we open the _single_ file multiple times from libvirt,
> 
> Lets say the "main channel" fd is opened, we write the libvirt header,
> then reopen again the same file multiple times,
> and finally pass all fds to qemu, one fd for each parallel transfer channel we want to use
> (so we solve all the permissions, security labels issues etc).
> 
> And then from QEMU we can write to those fds at the right offsets for each separate channel,
> which is easier from QEMU because we can know exactly how much data we need to transfer before starting the migration,
> so we have even less need for "holes", possibly only minor ones for single byte adjustments
> for uneven division of the interleaved file.
> 
> What is wrong with this one, or does anyone see some other better approach?

You'd have to know exactly how to space the channels' positions in the
file, unless you somehow controlled it; the allocation across the
multifd threads is load/scheduler/random I think, so you'd have to
assume the worst case of everything going to one thread.
I.e. a big sparse area and then something to tell you where the data is.

Dave

> Thanks,
> 
> C
> 
> > 
> >> Maybe this new way will be acceptable to libvirt,
> >> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?
> > 
> > It feels to me like the problem here is really what we want is something
> > closer to a dump than the migration code; you don't need all that
> > overhead of the code to deal with live migration bitmaps and dirty pages
> > that aren't going to happen.
> > Something that just does a nice single write(2) (for each memory
> > region);
> > and then ties the device state on.
> > 
> > Dave
> > 
> >>>
> >>> Dave
> >>>
> >>
> >> Thanks for your comments,
> >>
> >> Claudio
> >>>>>
> >>>>>
> >>>>> Regards,
> >>>>> Nikolay
> >>>>
> >>
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK




* Re: towards a workable O_DIRECT outmigration to a file
  2022-08-18 18:49           ` Dr. David Alan Gilbert
@ 2022-08-18 19:14             ` Claudio Fontana
  0 siblings, 0 replies; 14+ messages in thread
From: Claudio Fontana @ 2022-08-18 19:14 UTC (permalink / raw)
  To: Dr. David Alan Gilbert
  Cc: Nikolay Borisov, berrange, qemu-devel, Claudio Fontana,
	Jim Fehlig, quintela

On 8/18/22 20:49, Dr. David Alan Gilbert wrote:
> * Claudio Fontana (cfontana@suse.de) wrote:
>> On 8/18/22 18:31, Dr. David Alan Gilbert wrote:
>>> * Claudio Fontana (cfontana@suse.de) wrote:
>>>> On 8/18/22 14:38, Dr. David Alan Gilbert wrote:
>>>>> * Nikolay Borisov (nborisov@suse.com) wrote:
>>>>>> [adding Juan and David to cc as I had missed them. ]
>>>>>
>>>>> Hi Nikolay,
>>>>>
>>>>>> On 11.08.22 г. 16:47 ч., Nikolay Borisov wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>> I'm currently looking into implementing a 'file:' uri for migration save
>>>>>>> in qemu. Ideally the solution will be O_DIRECT compatible. I'm aware of
>>>>>>> the branch https://gitlab.com/berrange/qemu/-/tree/mig-file. In the
>>>>>>> process of brainstorming how a solution would like the a couple of
>>>>>>> questions transpired that I think warrant wider discussion in the
>>>>>>> community.
>>>>>
>>>>> OK, so this seems to be a continuation with Claudio and Daniel and co as
>>>>> of a few months back.  I'd definitely be leaving libvirt sides of the
>>>>> question here to Dan, and so that also means definitely looking at that
>>>>> tree above.
>>>>
>>>> Hi Dave, yes, Nikolai is trying to continue on the qemu side.
>>>>
>>>> We have something working with libvirt for our short term needs which offers good performance,
>>>> but it is clear that that simple solution is barred for upstream libvirt merging.
>>>>
>>>>
>>>>>
>>>>>>> First, implementing a solution which is self-contained within qemu would
>>>>>>> be easy enough( famous last words) but the gist is one  has to only care
>>>>>>> about the format within qemu. However, I'm being told that what libvirt
>>>>>>> does is prepend its own custom header to the resulting saved file, then
>>>>>>> slipstreams the migration stream from qemu. Now with the solution that I
>>>>>>> envision I intend to keep all write-related logic inside qemu, this
>>>>>>> means there's no way to incorporate the logic of libvirt. The reason I'd
>>>>>>> like to keep the write process within qemu is to avoid an extra copy of
>>>>>>> data between the two processes (qemu outging migration and libvirt),
>>>>>>> with the current fd approach qemu is passed an fd, data is copied
>>>>>>> between qemu/libvirt and finally the libvirt_iohelper writes the data.
>>>>>>> So the question which remains to be answered is how would libvirt make
>>>>>>> use of this new functionality in qemu? I was thinking something along
>>>>>>> the lines of :
>>>>>>>
>>>>>>> 1. Qemu writes its migration stream to a file, ideally on a filesystem
>>>>>>> which supports reflink - xfs/btrfs
>>>>>>>
>>>>>>> 2. Libvirt writes it's header to a separate file
>>>>>>> 2.1 Reflinks the qemu's stream right after its header
>>>>>>> 2.2 Writes its trailer
>>>>>>>
>>>>>>> 3. Unlink() qemu's file, now only libvirt's file remains on-disk.
>>>>>>>
>>>>>>> I wouldn't call this solution hacky though it definitely leaves some
>>>>>>> bitter aftertaste.
>>>>>
>>>>> Wouldn't it be simpler to tell libvirt to write it's header, then tell
>>>>> qemu to append everything?
>>>>
>>>> I would think so as well. 
>>>>
>>>>>
>>>>>>> Another solution would be to extend the 'fd:' protocol to allow multiple
>>>>>>> descriptors (for multifd) support to be passed in. The reason dup()
>>>>>>> can't be used is because in order for multifd to be supported it's
>>>>>>> required to be able to write to multiple, non-overlapping regions of the
>>>>>>> file. And duplicated fd's share their offsets etc. But that really seems
>>>>>>> more or less hacky. Alternatively it's possible that pwrite() are used
>>>>>>> to write to non-overlapping regions in the file. Any feedback is
>>>>>>> welcomed.
>>>>>
>>>>> I do like the idea of letting fd: take multiple fd's.
>>>>
>>>> Fine in my view, I think we will still need then a helper process in libvirt to merge the data into a single file, no?
>>>> In case the libvirt multifd to single file multithreaded helper I proposed before is helpful as a reference you could reuse/modify those patches.
>>>
>>> Eww that's messy isn't it.
>>> (You don't fancy a huge sparse file do you?)
>>
>> Wait am I missing something obvious here?
>>
>> Maybe we don't need any libvirt extra process.
>>
>> why don't we open the _single_ file multiple times from libvirt,
>>
>> Lets say the "main channel" fd is opened, we write the libvirt header,
>> then reopen again the same file multiple times,
>> and finally pass all fds to qemu, one fd for each parallel transfer channel we want to use
>> (so we solve all the permissions, security labels issues etc).
>>
>> And then from QEMU we can write to those fds at the right offsets for each separate channel,
>> which is easier from QEMU because we can know exactly how much data we need to transfer before starting the migration,
>> so we have even less need for "holes", possibly only minor ones for single byte adjustments
>> for uneven division of the interleaved file.
>>
>> What is wrong with this one, or does anyone see some other better approach?
> 
> You'd have to know exactly how to space the channels position in the
> file, unless you somehow controlled it; the allocation across the
> multifd threads is load/scheduler/random I think, so you'd have to
> assume the worst case of everything going to one thread.
> I.e. a big sparse area and then something to tell you where they are.

No, why? Just split the work evenly between threads, with each one writing to interleaved areas depending on the thread index,
and pass the relevant parallel=n parameter from libvirt at save/restore time so qemu knows how to split the data.
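For example, with a fixed block size the layout can be fully
deterministic; a tiny sketch (block_size, header_len and the parameter
names are assumptions of mine, not anything qemu or libvirt exposes
today):

#include <stdint.h>
#include <sys/types.h>

/*
 * Thread 'thread' writes its k-th block at a deterministic interleaved
 * offset, so no worst-case sparse region is needed and the reader can
 * reassemble the stream knowing only n_threads and block_size.
 */
static inline off_t block_offset(off_t header_len, size_t block_size,
                                 unsigned n_threads, unsigned thread,
                                 uint64_t k)
{
    return header_len + (off_t)((k * n_threads + thread) * block_size);
}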

Ciao,

C

> 
> Dave
> 
>> Thanks,
>>
>> C
>>
>>>
>>>> Maybe this new way will be acceptable to libvirt,
>>>> ie avoiding the multifd code -> socket, but still merging the data from the multiple fds into a single file?
>>>
>>> It feels to me like the problem here is really what we want is something
>>> closer to a dump than the migration code; you don't need all that
>>> overhead of the code to deal with live migration bitmaps and dirty pages
>>> that aren't going to happen.
>>> Something that just does a nice single write(2) (for each memory
>>> region);
>>> and then ties the device state on.
>>>
>>> Dave
>>>
>>>>>
>>>>> Dave
>>>>>
>>>>
>>>> Thanks for your comments,
>>>>
>>>> Claudio
>>>>>>>
>>>>>>>
>>>>>>> Regards,
>>>>>>> Nikolay
>>>>>>
>>>>
>>




* [PATCH] migration: support file: uri for source migration
  2022-08-11 13:47 towards a workable O_DIRECT outmigration to a file Nikolay Borisov
  2022-08-11 14:10 ` Nikolay Borisov
@ 2022-09-08 10:26 ` Nikolay Borisov
  2022-09-12 15:41   ` Daniel P. Berrangé
  1 sibling, 1 reply; 14+ messages in thread
From: Nikolay Borisov @ 2022-09-08 10:26 UTC (permalink / raw)
  To: qemu-devel
  Cc: berrange, dgilbert, jfehlig, Claudio.Fontana, quintela, Nikolay Borisov

This is a prototype of supporting a 'file:' based uri protocol for
writing out the migration stream of qemu. Currently the code always
opens the file in DIO mode and adheres to an alignment of 64k to be
generic enough. However, this comes with a problem: it requires copying
all data that we are writing (qemu metadata + guest ram pages) into a
bounce buffer so that we adhere to this alignment. With this code I get
the following performance results:

           DIO    exec: cat > file    virsh --bypass-cache
           82            77                    81
           82            78                    80
           80            80                    82
           82            82                    77
           77            79                    77

AVG:       80.6          79.2                  79.4
stddev:     1.959         1.720                  2.05

All numbers are in seconds.

Those results are somewhat surprising to me as I'd expected doing the
writeout directly within qemu and avoiding copying between qemu and
virsh's iohelper process would result in a speed up. Clearly that's not
the case; I attribute this to the fact that all memory pages have to be
copied into the bounce buffer. There is more measurement/profiling work
that I'd have to do in order to (dis)prove this hypothesis, and I will
report back when I have the data.

However, I'm sending the code now as I'd like to facilitate a discussion
as to whether this approach would be acceptable for upstream merging.
Any ideas/comments would be much appreciated.
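For reference, with this patch applied the save can be started directly
from the monitor (the path below is just an example):

  (qemu) migrate file:/var/lib/libvirt/qemu/save/vm.sav

or the equivalent via QMP:

  { "execute": "migrate", "arguments": { "uri": "file:/var/lib/libvirt/qemu/save/vm.sav" } }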

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 include/io/channel-file.h |   1 +
 include/io/channel.h      |   1 +
 io/channel-file.c         |  17 +++++++
 migration/meson.build     |   1 +
 migration/migration.c     |   4 ++
 migration/migration.h     |   2 +
 migration/qemu-file.c     | 104 +++++++++++++++++++++++++++++++++++++-
 7 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/include/io/channel-file.h b/include/io/channel-file.h
index 50e8eb113868..6cb0b698c62c 100644
--- a/include/io/channel-file.h
+++ b/include/io/channel-file.h
@@ -89,4 +89,5 @@ qio_channel_file_new_path(const char *path,
                           mode_t mode,
                           Error **errp);

+void qio_channel_file_disable_dio(QIOChannelFile *ioc);
 #endif /* QIO_CHANNEL_FILE_H */
diff --git a/include/io/channel.h b/include/io/channel.h
index c680ee748021..6127ff6c0626 100644
--- a/include/io/channel.h
+++ b/include/io/channel.h
@@ -41,6 +41,7 @@ enum QIOChannelFeature {
     QIO_CHANNEL_FEATURE_SHUTDOWN,
     QIO_CHANNEL_FEATURE_LISTEN,
     QIO_CHANNEL_FEATURE_WRITE_ZERO_COPY,
+    QIO_CHANNEL_FEATURE_DIO,
 };


diff --git a/io/channel-file.c b/io/channel-file.c
index b67687c2aa64..5c7211b128f1 100644
--- a/io/channel-file.c
+++ b/io/channel-file.c
@@ -59,6 +59,10 @@ qio_channel_file_new_path(const char *path,
         return NULL;
     }

+    if (flags & O_DIRECT) {
+	    qio_channel_set_feature(QIO_CHANNEL(ioc), QIO_CHANNEL_FEATURE_DIO);
+    }
+
     trace_qio_channel_file_new_path(ioc, path, flags, mode, ioc->fd);

     return ioc;
@@ -109,6 +113,20 @@ static ssize_t qio_channel_file_readv(QIOChannel *ioc,
     return ret;
 }

+
+void qio_channel_file_disable_dio(QIOChannelFile *ioc)
+{
+	int flags = fcntl(ioc->fd, F_GETFL);
+	if (flags == -1) {
+		error_report("Can't disable O_DIRECT: fcntl(F_GETFL) failed");
+		return;
+	}
+
+	if (fcntl(ioc->fd, F_SETFL, (flags & ~O_DIRECT)) == -1) {
+		error_report("Can't disable O_DIRECT");
+	}
+}
+
 static ssize_t qio_channel_file_writev(QIOChannel *ioc,
                                        const struct iovec *iov,
                                        size_t niov,
diff --git a/migration/meson.build b/migration/meson.build
index 690487cf1a81..30a8392701c3 100644
--- a/migration/meson.build
+++ b/migration/meson.build
@@ -17,6 +17,7 @@ softmmu_ss.add(files(
   'colo.c',
   'exec.c',
   'fd.c',
+  'file.c',
   'global_state.c',
   'migration.c',
   'multifd.c',
diff --git a/migration/migration.c b/migration/migration.c
index bb8bbddfe467..e7e84ae12066 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -20,6 +20,7 @@
 #include "migration/blocker.h"
 #include "exec.h"
 #include "fd.h"
+#include "file.h"
 #include "socket.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
@@ -2414,6 +2415,8 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         exec_start_outgoing_migration(s, p, &local_err);
     } else if (strstart(uri, "fd:", &p)) {
         fd_start_outgoing_migration(s, p, &local_err);
+    } else if (strstart(uri, "file:", &p)) {
+	file_start_outgoing_migration(s, p, &local_err);
     } else {
         if (!(has_resume && resume)) {
             yank_unregister_instance(MIGRATION_YANK_INSTANCE);
@@ -4307,6 +4310,7 @@ void migration_global_dump(Monitor *mon)
 static Property migration_properties[] = {
     DEFINE_PROP_BOOL("store-global-state", MigrationState,
                      store_global_state, true),
+    DEFINE_PROP_BOOL("use-direct", MigrationState, use_dio, false),
     DEFINE_PROP_BOOL("send-configuration", MigrationState,
                      send_configuration, true),
     DEFINE_PROP_BOOL("send-section-footer", MigrationState,
diff --git a/migration/migration.h b/migration/migration.h
index cdad8aceaaab..fa1a996bdd4e 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -336,6 +336,8 @@ struct MigrationState {
      */
     bool store_global_state;

+    bool use_dio;
+
     /* Whether we send QEMU_VM_CONFIGURATION during migration */
     bool send_configuration;
     /* Whether we send section footer during migration */
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 4f400c2e5265..18a2fefccd00 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -30,9 +30,14 @@
 #include "qemu-file.h"
 #include "trace.h"
 #include "qapi/error.h"
+#include "qemu/memalign.h"
+#include "qemu/error-report.h"
+#include "migration.h"
+#include "io/channel-file.h"

 #define IO_BUF_SIZE 32768
 #define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)
+#define DIO_BUF_SIZE (8*IO_BUF_SIZE)

 struct QEMUFile {
     const QEMUFileHooks *hooks;
@@ -56,6 +61,8 @@ struct QEMUFile {
     int buf_index;
     int buf_size; /* 0 when writing */
     uint8_t buf[IO_BUF_SIZE];
+    uint8_t *dio_buf;
+    int dio_buf_index;

     DECLARE_BITMAP(may_free, MAX_IOV_SIZE);
     struct iovec iov[MAX_IOV_SIZE];
@@ -65,6 +72,7 @@ struct QEMUFile {
     Error *last_error_obj;
     /* has the file has been shutdown */
     bool shutdown;
+    bool closing;
 };

 /*
@@ -115,6 +123,7 @@ static QEMUFile *qemu_file_new_impl(QIOChannel *ioc, bool is_writable)
     object_ref(ioc);
     f->ioc = ioc;
     f->is_writable = is_writable;
+    f->dio_buf = qemu_memalign(64*1024, DIO_BUF_SIZE);

     return f;
 }
@@ -260,6 +269,76 @@ static void qemu_iovec_release_ram(QEMUFile *f)
     memset(f->may_free, 0, sizeof(f->may_free));
 }

+#define in_range(b, first, len) ((uintptr_t)(b) >= (uintptr_t)(first) && (uintptr_t)(b) < (uintptr_t)(first) + (len))
+
+static void qemu_fflush_dio(QEMUFile *f)
+{
+	do  {
+		int i;
+		int new_ioveccnt = 0;
+		for (i = 0; i < f->iovcnt && f->dio_buf_index < DIO_BUF_SIZE; i++) {
+			struct iovec *vec = &f->iov[i];
+			size_t copy_len = MIN(vec->iov_len, DIO_BUF_SIZE - f->dio_buf_index);
+
+			/* if the iovec contains inline buf, adjust buf_index
+			 * accordingly
+			 */
+			if (in_range(vec->iov_base, f->buf, IO_BUF_SIZE)) {
+				f->buf_index -= copy_len;
+			}
+
+			memcpy(f->dio_buf+f->dio_buf_index, vec->iov_base, copy_len);
+			f->dio_buf_index += copy_len;
+			/* In case we couldn't fit the full iovec */
+			if (copy_len < vec->iov_len) {
+				// partial write or no write at all;
+				vec->iov_base += copy_len;
+				vec->iov_len -= copy_len;
+				break;
+			}
+		}
+
+		new_ioveccnt = f->iovcnt - i;
+		/*
+		 * DIO buf has been filled but we still have outstanding iovecs
+		 * so shift them to the beginning of iov array for subsequent
+		 * flushing
+		 */
+		for (int j = 0; i < f->iovcnt; j++, i++) {
+			f->iov[j] = f->iov[i];
+		}
+		f->iovcnt = new_ioveccnt;
+
+
+		/*
+		 * DIO BUFF is either full or this is the final flush, in the
+		 * latter case it's guaranteed that the fd is now in buffered
+		 * mode so we simply write anything which is outstanding
+		 */
+		if (f->dio_buf_index == DIO_BUF_SIZE || f->closing) {
+			Error *local_error = NULL;
+			struct iovec dio_iovec = {.iov_base = f->dio_buf,
+				.iov_len = f->dio_buf_index };
+
+			/*
+			 * This is the last flush so revert back to buffered
+			 * write
+			 */
+			if (f->closing) {
+				qio_channel_file_disable_dio(QIO_CHANNEL_FILE(f->ioc));
+			}
+
+			if (qio_channel_writev_all(f->ioc, &dio_iovec, 1, &local_error) < 0) {
+				qemu_file_set_error_obj(f, -EIO, local_error);
+			} else {
+				f->total_transferred += dio_iovec.iov_len;
+			}
+
+			f->dio_buf_index = 0;
+		}
+	} while (f->iovcnt);
+
+}

 /**
  * Flushes QEMUFile buffer
@@ -276,6 +355,12 @@ void qemu_fflush(QEMUFile *f)
     if (f->shutdown) {
         return;
     }
+
+    if (qio_channel_has_feature(f->ioc, QIO_CHANNEL_FEATURE_DIO)) {
+	    qemu_fflush_dio(f);
+	    return;
+    }
+
     if (f->iovcnt > 0) {
         Error *local_error = NULL;
         if (qio_channel_writev_all(f->ioc,
@@ -434,6 +519,8 @@ void qemu_file_credit_transfer(QEMUFile *f, size_t size)
 int qemu_fclose(QEMUFile *f)
 {
     int ret, ret2;
+
+    f->closing = true;
     qemu_fflush(f);
     ret = qemu_file_get_error(f);

@@ -450,6 +537,7 @@ int qemu_fclose(QEMUFile *f)
         ret = f->last_error;
     }
     error_free(f->last_error_obj);
+    qemu_vfree(f->dio_buf);
     g_free(f);
     trace_qemu_file_fclose();
     return ret;
@@ -706,6 +794,10 @@ int64_t qemu_file_total_transferred_fast(QEMUFile *f)
     int64_t ret = f->total_transferred;
     int i;

+    if (qio_channel_has_feature(f->ioc, QIO_CHANNEL_FEATURE_DIO)) {
+	    ret += f->dio_buf_index;
+    }
+
     for (i = 0; i < f->iovcnt; i++) {
         ret += f->iov[i].iov_len;
     }
@@ -715,8 +807,18 @@ int64_t qemu_file_total_transferred_fast(QEMUFile *f)

 int64_t qemu_file_total_transferred(QEMUFile *f)
 {
+    int64_t total_transferred = 0;
     qemu_fflush(f);
-    return f->total_transferred;
+    total_transferred += f->total_transferred;
+    /*
+     * If we are a DIO channel then adjust total transferred with possible bytes
+     * which might not have been totally written but are in the staging dio
+     * buffer
+     */
+    if (qio_channel_has_feature(f->ioc, QIO_CHANNEL_FEATURE_DIO)) {
+	    total_transferred += f->dio_buf_index;
+    }
+    return total_transferred;
 }

 int qemu_file_rate_limit(QEMUFile *f)
--
2.25.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH] migration: support file: uri for source migration
  2022-09-08 10:26 ` [PATCH] migration: support file: uri for source migration Nikolay Borisov
@ 2022-09-12 15:41   ` Daniel P. Berrangé
  2022-09-12 16:30     ` Nikolay Borisov
  0 siblings, 1 reply; 14+ messages in thread
From: Daniel P. Berrangé @ 2022-09-12 15:41 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: qemu-devel, dgilbert, jfehlig, Claudio.Fontana, quintela

On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
> This is a prototype of supporting a 'file:' based uri protocol for
> writing out the migration stream of qemu. Currently the code always
> opens the file in DIO mode and adheres to an alignment of 64k to be
> generic enough. However this comes with a problem - it requires copying
> all data that we are writing (qemu metadata + guest ram pages) to a
> bounce buffer so that we adhere to this alignment.

The ad hoc device metadata clearly needs bounce buffers since it
is splattered all over RAM with no concern for alignment. The use
of bounce buffers for this shouldn't be a performance issue though,
as metadata is small relative to the size of the snapshot as a whole.

The guest RAM pages should not need bounce buffers at all when using
huge pages, as alignment will already be way larger than we required.
Guests with huge pages are the ones which are likely to have huge
RAM sizes and thus need the DIO mode, so we should be sorted for that.

When using small pages for guest RAM, if it is not already allocated
with suitable alignment, I feel like we should be able to make it
so that we allocate the RAM block with good alignment to avoid the
need for bounce buffers. This would address the less common case of
a guest with huge RAM size but not huge pages.

Thus if we assume guest RAM is suitably aligned, then we can avoid
bounce buffers for RAM pages, while still using bounce buffers for
the metadata.
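
A minimal sketch of that split, assuming the prototype's 64k alignment
requirement and an invented is_dio_aligned() helper (not an existing QEMU
function): aligned guest-RAM iovecs can go straight to the O_DIRECT fd,
while everything else goes through the bounce buffer.

#include <stdbool.h>
#include <stdint.h>
#include <sys/uio.h>

#define DIO_ALIGN 65536   /* the 64k alignment the prototype assumes */

/* An iovec can go straight to the O_DIRECT fd only if both its address
 * and its length meet the alignment requirement; everything else (the
 * device metadata) is copied through an aligned bounce buffer first. */
static bool is_dio_aligned(const struct iovec *vec)
{
    return ((uintptr_t)vec->iov_base % DIO_ALIGN == 0) &&
           (vec->iov_len % DIO_ALIGN == 0);
}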

>                                                    With this code I get
> the following performance results:
> 
>            DIO    exec: cat > file    virsh --bypass-cache
>            82            77                   81
>            82            78                   80
>            80            80                   82
>            82            82                   77
>            77            79                   77
>
> AVG:       80.6          79.2                 79.4
> stddev:    1.959         1.720                2.05
> 
> All numbers are in seconds.
> 
> Those results are somewhat surprising to me as I'd expected doing the
> writeout directly within qemu and avoiding copying between qemu and
> virsh's iohelper process would result in a speed up. Clearly that's not
> the case, I attribute this to the fact that all memory pages have to be
> copied into the bounce buffer. There is more measurement/profiling
> work that I'd have to do in order to (dis)prove this hypothesis and will
> report back when I have the data.

When using the libvirt iohelper we have multiple CPUs involved. IOW the
bounce buffer copy is taking place on a separate CPU from the QEMU
migration loop. This ability to use multiple CPUs may well have balanced
out any benefit from doing DIO on the QEMU side.

If you eliminate bounce buffers for guest RAM and write it directly to
the fixed location on disk, then we should see the benefit - and if not
then something is really wrong in our thoughts.

> However I'm sending the code now as I'd like to facilitate a discussion
> as to whether this is an approach that would be acceptable to upstream
> merging. Any ideas/comments would be much appreciated.

AFAICT this impl is still using the existing on-disk format, where RAM
pages are just written inline to the stream. For DIO benefit to be
maximised we need the on-disk format to be changed, so that the guest
RAM regions can be directly associated with fixed locations on disk.
This also means that if the guest dirties RAM while it's saving, then we
overwrite the existing content on disk, such that restore only ever
needs to restore each RAM page once, instead of restoring every
dirtied version.
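
For illustration only, one way such a fixed-location layout could look
(the struct and field names below are invented, not an existing QEMU
on-disk format): each RAM block records where its pages start in the
file, and a dirtied page is always rewritten at the same offset, so
restore reads each page exactly once.

#include <stdint.h>

/* Hypothetical per-RAMBlock record in the save file (names invented). */
struct file_ramblock_hdr {
    char     idstr[256];    /* RAM block name, e.g. "pc.ram"      */
    uint64_t block_len;     /* length of the block in bytes       */
    uint64_t pages_offset;  /* file offset where page 0 is stored */
};

/* A page at offset 'page_off' within the block always maps to the same
 * file offset, so rewriting a dirtied page overwrites the stale copy
 * instead of appending another version to the stream. */
static inline uint64_t page_file_offset(const struct file_ramblock_hdr *h,
                                        uint64_t page_off)
{
    return h->pages_offset + page_off;
}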


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH] migration: support file: uri for source migration
  2022-09-12 15:41   ` Daniel P. Berrangé
@ 2022-09-12 16:30     ` Nikolay Borisov
  2022-09-12 16:43       ` Daniel P. Berrangé
  0 siblings, 1 reply; 14+ messages in thread
From: Nikolay Borisov @ 2022-09-12 16:30 UTC (permalink / raw)
  To: Daniel P. Berrangé
  Cc: qemu-devel, dgilbert, jfehlig, Claudio.Fontana, quintela



On 12.09.22 г. 18:41 ч., Daniel P. Berrangé wrote:
> On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
>> This is a prototype of supporting a 'file:' based uri protocol for
>> writing out the migration stream of qemu. Currently the code always
>> opens the file in DIO mode and adheres to an alignment of 64k to be
>> generic enough. However this comes with a problem - it requires copying
>> all data that we are writing (qemu metadata + guest ram pages) to a
>> bounce buffer so that we adhere to this alignment.
> 
> The ad hoc device metadata clearly needs bounce buffers since it
> is splattered all over RAM with no concern for alignment. The use
> of bounce buffers for this shouldn't be a performance issue though,
> as metadata is small relative to the size of the snapshot as a whole.

Bounce buffers can be eliminated altogether so long as we simply switch 
between buffered/DIO mode via fcntl.
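
A minimal sketch of that switch, assuming the fd was opened without
O_DIRECT and that the filesystem allows toggling the flag via fcntl()
(not all do, so the return value must be checked):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdbool.h>

/* Toggle O_DIRECT on an already-open fd: enable it for the large,
 * aligned RAM writes, drop it again before writing unaligned metadata. */
static int set_direct_io(int fd, bool enable)
{
    int flags = fcntl(fd, F_GETFL);

    if (flags < 0) {
        return -1;
    }
    if (enable) {
        flags |= O_DIRECT;
    } else {
        flags &= ~O_DIRECT;
    }
    return fcntl(fd, F_SETFL, flags);
}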

> 
> The guest RAM pages should not need bounce buffers at all when using
> huge pages, as alignment will already be way larger than we required.
> Guests with huge pages are the ones which are likely to have huge
> RAM sizes and thus need the DIO mode, so we should be sorted for that.
> 
> When using small pages for guest RAM, if it is not already allocated
> with suitable alignment, I feel like we should be able to make it
> so that we allocate the RAM block with good alignment to avoid the
> need for bounce buffers. This would address the less common case of
> a guest with huge RAM size but not huge pages.

RAM blocks are generally allocated with good alignment due to being 
mmap()ed. However, as I was toying with eliminating bounce buffers for 
RAM, I hit an issue: the page headers being written (8 bytes each) 
aren't aligned, naturally. IMO the on-disk format can be changed in 
the following way:


<ramblock header, containing the base address of the ramblock>; each 
subsequent page is then written at an offset from the base address of 
the ramblock, i.e. its position in the file would be:

page_offset = page_addr - ramblock_base, and the page is then written 
at ramblock_base (in the file) + page_offset. This would eliminate the 
page headers altogether, leaving only the initial ramblock header to be 
aligned. However, it could mean having to issue one lseek per page 
written, to adjust the file position, which might not be a problem in 
itself but who knows. How does that sound to you?
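
A rough sketch of that write path, with illustrative names
(ramblock_file_base and friends are not existing QEMU symbols);
note the per-page lseek() mentioned above:

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Write one guest page at its fixed slot in the file:
 *   file offset = block's base offset in the file
 *               + (page address - block's host base address)
 * Using lseek()+write() means one seek per page, as noted above. */
static ssize_t save_page_seek(int fd, off_t ramblock_file_base,
                              const uint8_t *ramblock_host_base,
                              const uint8_t *page, size_t page_size)
{
    off_t page_offset = page - ramblock_host_base;

    if (lseek(fd, ramblock_file_base + page_offset, SEEK_SET) == (off_t)-1) {
        return -1;
    }
    return write(fd, page, page_size);
}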

> 
> Thus if we assume guest RAM is suitably aligned, then we can avoid
> bounce buffers for RAM pages, while still using bounce buffers for
> the metadata.
> 
>>                                                     With this code I get
>> the following performance results:
>>
>>           DIO    exec: cat > file    virsh --bypass-cache
>>           82            77                   81
>>           82            78                   80
>>           80            80                   82
>>           82            82                   77
>>           77            79                   77
>>
>> AVG:      80.6          79.2                 79.4
>> stddev:   1.959         1.720                2.05
>>
>> All numbers are in seconds.
>>
>> Those results are somewhat surprising to me as I'd expected doing the
>> writeout directly within qemu and avoiding copying between qemu and
>> virsh's iohelper process would result in a speed up. Clearly that's not
>> the case, I attribute this to the fact that all memory pages have to be
>> copied into the bounce buffer. There is more measurement/profiling
>> work that I'd have to do in order to (dis)prove this hypothesis and will
>> report back when I have the data.
> 
> When using the libvirt iohelper we have multiple CPUs involved. IOW the
> bounce buffer copy is taking place on a separate CPU from the QEMU
> migration loop. This ability to use multiple CPUs may well have balanced
> out any benefit from doing DIO on the QEMU side.
> 
> If you eliminate bounce buffers for guest RAM and write it directly to
> the fixed location on disk, then we should see the benefit - and if not
> then something is really wrong in our thoughts.
> 
>> However I'm sending the code now as I'd like to facilitate a discussion
>> as to whether this is an approach that would be acceptable to upstream
>> merging. Any ideas/comments would be much appreciated.
> 
> AFAICT this impl is still using the existing on-disk format, where RAM
> pages are just written inline to the stream. For DIO benefit to be
> maximised we need the on-disk format to be changed, so that the guest
> RAM regions can be directly associated with fixed locations on disk.
> This also means that if the guest dirties RAM while it's saving, then we
> overwrite the existing content on disk, such that restore only ever
> needs to restore each RAM page once, instead of restoring every
> dirtied version.
> 
> 
> With regards,
> Daniel


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH] migration: support file: uri for source migration
  2022-09-12 16:30     ` Nikolay Borisov
@ 2022-09-12 16:43       ` Daniel P. Berrangé
  0 siblings, 0 replies; 14+ messages in thread
From: Daniel P. Berrangé @ 2022-09-12 16:43 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: qemu-devel, dgilbert, jfehlig, Claudio.Fontana, quintela

On Mon, Sep 12, 2022 at 07:30:50PM +0300, Nikolay Borisov wrote:
> 
> 
> On 12.09.22 г. 18:41 ч., Daniel P. Berrangé wrote:
> > On Thu, Sep 08, 2022 at 01:26:32PM +0300, Nikolay Borisov wrote:
> > > This is a prototype of supporting a 'file:' based uri protocol for
> > > writing out the migration stream of qemu. Currently the code always
> > > opens the file in DIO mode and adheres to an alignment of 64k to be
> > > generic enough. However this comes with a problem - it requires copying
> > > all data that we are writing (qemu metadata + guest ram pages) to a
> > > bounce buffer so that we adhere to this alignment.
> > 
> > The ad hoc device metadata clearly needs bounce buffers since it
> > is splattered all over RAM with no concern for alignment. The use
> > of bounce buffers for this shouldn't be a performance issue though,
> > as metadata is small relative to the size of the snapshot as a whole.
> 
> Bounce buffers can be eliminated altogether so long as we simply switch
> between buffered/DIO mode via fcntl.
> 
> > 
> > The guest RAM pages should not need bounce buffers at all when using
> > huge pages, as alignment will already be way larger than we required.
> > Guests with huge pages are the ones which are likely to have huge
> > RAM sizes and thus need the DIO mode, so we should be sorted for that.
> > 
> > When using small pages for guest RAM, if it is not already allocated
> > with suitable alignment, I feel like we should be able to make it
> > so that we allocate the RAM block with good alignment to avoid the
> > need for bounce buffers. This would address the less common case of
> > a guest with huge RAM size but not huge pages.
> 
> Ram blocks are generally allocated with good alignment due to them being
> mmaped(), however as I was toying with eliminating bounce buffers for ram I
> hit an issue where the page headers being written (8 bytes each) aren't
> aligned (naturally). Imo I think the on-disk format can be changed the
> following way:
> 
> 
> <ramblock header, containing base address of ramblock>, each subsequent page
> is then written at an offset from the base address of the ramblock, that is
> > its index would be:
> 
> page_offset = page_addr - ramblock_base, Then the page is written at
> ramblock_base (in the file) + page_offset. This would eliminate the page
> headers altogether. This leaves aligning the initial ramblock header
> initially. However, this would lead to us potentially having to issue 1
> > lseek per page to write - to adjust the file position, which might not
> > be a problem in itself but who knows. How does that sound to you?

Yes, definitely. We don't want the headers in front of each page,
just one single large block. Looking forward to multi-fd, we don't
want to be using lseek at all, because that changes the file offset
for all threads using the FD. Instead we need to be able to use
pread/pwrite for writing the RAM blocks.
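
The same write expressed with pwrite(), sketched with the illustrative
names from above: the offset is passed explicitly and the fd's shared
file position is never touched, so multiple multifd threads can write
disjoint regions of one fd concurrently.

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Each multifd thread can issue this against the same fd for its own
 * pages: pwrite() takes the offset explicitly and never moves the
 * shared file position, so no lseek() (and no locking around it)
 * is needed. */
static ssize_t save_page_pwrite(int fd, off_t ramblock_file_base,
                                const uint8_t *ramblock_host_base,
                                const uint8_t *page, size_t page_size)
{
    off_t page_offset = page - ramblock_host_base;

    return pwrite(fd, page, page_size, ramblock_file_base + page_offset);
}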

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2022-09-12 16:44 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-11 13:47 towards a workable O_DIRECT outmigration to a file Nikolay Borisov
2022-08-11 14:10 ` Nikolay Borisov
2022-08-18 12:38   ` Dr. David Alan Gilbert
2022-08-18 12:52     ` Claudio Fontana
2022-08-18 16:31       ` Dr. David Alan Gilbert
2022-08-18 18:09         ` Claudio Fontana
2022-08-18 18:45           ` Claudio Fontana
2022-08-18 18:49           ` Dr. David Alan Gilbert
2022-08-18 19:14             ` Claudio Fontana
2022-08-18 18:13         ` Claudio Fontana
2022-09-08 10:26 ` [PATCH] migration: support file: uri for source migration Nikolay Borisov
2022-09-12 15:41   ` Daniel P. Berrangé
2022-09-12 16:30     ` Nikolay Borisov
2022-09-12 16:43       ` Daniel P. Berrangé
