* Is fallback vhost_net to qemu for live migrate available?
@ 2013-08-27 3:32 Qin Chuanyu
2013-08-27 4:19 ` Michael S. Tsirkin
` (2 more replies)
0 siblings, 3 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-08-27 3:32 UTC (permalink / raw)
To: Michael S. Tsirkin, jasowang; +Cc: kvm, netdev, qianhuibin
Hi all
I am participating in a project which tries to port vhost_net to Xen.
By changing the memory copy and notify mechanisms, virtio-net with
vhost_net can currently run on Xen with good performance. TCP receive
throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps. On the VM
receive side, I replaced grant_copy with grant_map plus memcpy, which
efficiently reduces the cost of the grant_table spinlock in dom0, so
whole-server TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
Now I am considering live migration of vhost_net on Xen. vhost_net uses
vhost_log for live migration on KVM, but QEMU on Xen does not manage the
whole memory of the VM, so I am trying to fall the datapath back from
vhost_net to QEMU while doing live migration, and switch it back from
QEMU to vhost_net after the VM has migrated to the new server.
My questions are:
Why doesn't vhost_net do the same fallback operation for live migration
on KVM, instead of using vhost_log to mark dirty pages?
Is there any mechanism flaw in the idea of falling the datapath back
from vhost_net to QEMU for live migration?
Any questions about the details of vhost_net on Xen are welcome.
Thanks
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 3:32 Is fallback vhost_net to qemu for live migrate available? Qin Chuanyu
@ 2013-08-27 4:19 ` Michael S. Tsirkin
2013-08-27 7:04 ` Qin Chuanyu
2013-08-29 16:08 ` Anthony Liguori
2013-08-29 16:08 ` Anthony Liguori
2 siblings, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2013-08-27 4:19 UTC (permalink / raw)
To: Qin Chuanyu; +Cc: jasowang, kvm, netdev, qianhuibin
On Tue, Aug 27, 2013 at 11:32:31AM +0800, Qin Chuanyu wrote:
> Hi all
>
> I am participating in a project which tries to port vhost_net to Xen.
>
> By changing the memory copy and notify mechanisms, virtio-net with
> vhost_net can currently run on Xen with good performance. TCP receive
> throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps. On the
> VM receive side, I replaced grant_copy with grant_map plus memcpy,
> which efficiently reduces the cost of the grant_table spinlock in
> dom0, so whole-server TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
>
> Now I am considering live migration of vhost_net on Xen. vhost_net
> uses vhost_log for live migration on KVM, but QEMU on Xen does not
> manage the whole memory of the VM, so I am trying to fall the datapath
> back from vhost_net to QEMU while doing live migration, and switch it
> back from QEMU to vhost_net after the VM has migrated to the new server.
>
> My questions are:
> Why doesn't vhost_net do the same fallback operation for live
> migration on KVM, instead of using vhost_log to mark dirty pages?
> Is there any mechanism flaw in the idea of falling the datapath back
> from vhost_net to QEMU for live migration?
>
> Any questions about the details of vhost_net on Xen are welcome.
>
> Thanks
>
It should work, in practice.
However, one issue with this approach that I see is that you
are running two instances of virtio-net on the host:
qemu and vhost-net, doubling your security surface
for guest-to-host attacks.
I don't exactly see why it matters that qemu doesn't manage
the whole memory of a VM - vhost only needs to log
memory writes that it performs.
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 4:19 ` Michael S. Tsirkin
@ 2013-08-27 7:04 ` Qin Chuanyu
2013-08-27 7:16 ` Michael S. Tsirkin
2013-08-27 9:41 ` Wei Liu
0 siblings, 2 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-08-27 7:04 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: jasowang, kvm, netdev, qianhuibin, wangfuhai
On 2013/8/27 12:19, Michael S. Tsirkin wrote:
> On Tue, Aug 27, 2013 at 11:32:31AM +0800, Qin Chuanyu wrote:
>> Hi all
>>
>> I am participating in a project which tries to port vhost_net to Xen.
>>
>> By changing the memory copy and notify mechanisms, virtio-net with
>> vhost_net can currently run on Xen with good performance. TCP receive
>> throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps. On the
>> VM receive side, I replaced grant_copy with grant_map plus memcpy,
>> which efficiently reduces the cost of the grant_table spinlock in
>> dom0, so whole-server TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
>>
>> Now I am considering live migration of vhost_net on Xen. vhost_net
>> uses vhost_log for live migration on KVM, but QEMU on Xen does not
>> manage the whole memory of the VM, so I am trying to fall the datapath
>> back from vhost_net to QEMU while doing live migration, and switch it
>> back from QEMU to vhost_net after the VM has migrated to the new server.
>>
>> My questions are:
>> Why doesn't vhost_net do the same fallback operation for live
>> migration on KVM, instead of using vhost_log to mark dirty pages?
>> Is there any mechanism flaw in the idea of falling the datapath back
>> from vhost_net to QEMU for live migration?
>>
>> Any questions about the details of vhost_net on Xen are welcome.
>>
>> Thanks
>>
>
> It should work, in practice.
>
> However, one issue with this approach that I see is that you
> are running two instances of virtio-net on the host:
> qemu and vhost-net, doubling your security surface
> for guest to host attack.
>
> I don't exactly see why it matters that qemu doesn't manage
> the whole memory of a VM - vhost only needs to log
> memory writes that it performs.
>
>
> .
>
Thanks for your reply.
In fact, I am not sure whether vhost_log can work for Xen live
migration or not. Yes, vhost_sync_dirty_bitmap works well on KVM, but
vhost_net has never run on Xen before, although the call sequence
vhost_dev_sync_region -> memory_region_set_dirty -> xen_modified_memory
exists.
Have you considered the case of the vhost_migration_log code running on
Xen? If so, it seems much easier than falling the datapath back from
vhost_net to qemu for live migration.
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 7:04 ` Qin Chuanyu
@ 2013-08-27 7:16 ` Michael S. Tsirkin
2013-08-27 7:22 ` Qin Chuanyu
2013-08-27 9:41 ` Wei Liu
1 sibling, 1 reply; 28+ messages in thread
From: Michael S. Tsirkin @ 2013-08-27 7:16 UTC (permalink / raw)
To: Qin Chuanyu; +Cc: jasowang, kvm, netdev, qianhuibin, wangfuhai
On Tue, Aug 27, 2013 at 03:04:54PM +0800, Qin Chuanyu wrote:
> On 2013/8/27 12:19, Michael S. Tsirkin wrote:
> >On Tue, Aug 27, 2013 at 11:32:31AM +0800, Qin Chuanyu wrote:
> >>Hi all
> >>
> >>I am participating in a project which tries to port vhost_net to Xen.
> >>
> >>By changing the memory copy and notify mechanisms, virtio-net with
> >>vhost_net can currently run on Xen with good performance. TCP receive
> >>throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps. On the
> >>VM receive side, I replaced grant_copy with grant_map plus memcpy,
> >>which efficiently reduces the cost of the grant_table spinlock in
> >>dom0, so whole-server TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
> >>
> >>Now I am considering live migration of vhost_net on Xen. vhost_net
> >>uses vhost_log for live migration on KVM, but QEMU on Xen does not
> >>manage the whole memory of the VM, so I am trying to fall the datapath
> >>back from vhost_net to QEMU while doing live migration, and switch it
> >>back from QEMU to vhost_net after the VM has migrated to the new server.
> >>
> >>My questions are:
> >>Why doesn't vhost_net do the same fallback operation for live
> >>migration on KVM, instead of using vhost_log to mark dirty pages?
> >>Is there any mechanism flaw in the idea of falling the datapath back
> >>from vhost_net to QEMU for live migration?
> >>
> >>Any questions about the details of vhost_net on Xen are welcome.
> >>
> >>Thanks
> >>
> >
> >It should work, in practice.
> >
> >However, one issue with this approach that I see is that you
> >are running two instances of virtio-net on the host:
> >qemu and vhost-net, doubling your security surface
> >for guest to host attack.
> >
> >I don't exactly see why it matters that qemu doesn't manage
> >the whole memory of a VM - vhost only needs to log
> >memory writes that it performs.
> >
> >
> >.
> >
> Thanks for your reply.
>
> In fact, I am not sure whether vhost_log can work for Xen live
> migration or not. Yes, vhost_sync_dirty_bitmap works well on KVM, but
> vhost_net has never run on Xen before, although the call sequence
> vhost_dev_sync_region -> memory_region_set_dirty ->
> xen_modified_memory exists.
>
> Have you considered the case of the vhost_migration_log code running
> on Xen? If so, it seems much easier than falling the datapath back
> from vhost_net to qemu for live migration.
>
I never looked at how Xen live migration works with QEMU so I don't
know.
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 7:16 ` Michael S. Tsirkin
@ 2013-08-27 7:22 ` Qin Chuanyu
0 siblings, 0 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-08-27 7:22 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: jasowang, kvm, netdev, qianhuibin, wangfuhai
On 2013/8/27 15:16, Michael S. Tsirkin wrote:
> On Tue, Aug 27, 2013 at 03:04:54PM +0800, Qin Chuanyu wrote:
>> On 2013/8/27 12:19, Michael S. Tsirkin wrote:
>>> On Tue, Aug 27, 2013 at 11:32:31AM +0800, Qin Chuanyu wrote:
>>>> Hi all
>>>>
>>>> I am participating in a project which tries to port vhost_net to Xen.
>>>>
>>>> By changing the memory copy and notify mechanisms, virtio-net with
>>>> vhost_net can currently run on Xen with good performance. TCP receive
>>>> throughput of a single vNIC went from 2.77 Gbps up to 6 Gbps. On the
>>>> VM receive side, I replaced grant_copy with grant_map plus memcpy,
>>>> which efficiently reduces the cost of the grant_table spinlock in
>>>> dom0, so whole-server TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
>>>>
>>>> Now I am considering live migration of vhost_net on Xen. vhost_net
>>>> uses vhost_log for live migration on KVM, but QEMU on Xen does not
>>>> manage the whole memory of the VM, so I am trying to fall the datapath
>>>> back from vhost_net to QEMU while doing live migration, and switch it
>>>> back from QEMU to vhost_net after the VM has migrated to the new server.
>>>>
>>>> My questions are:
>>>> Why doesn't vhost_net do the same fallback operation for live
>>>> migration on KVM, instead of using vhost_log to mark dirty pages?
>>>> Is there any mechanism flaw in the idea of falling the datapath back
>>>> from vhost_net to QEMU for live migration?
>>>>
>>>> Any questions about the details of vhost_net on Xen are welcome.
>>>>
>>>> Thanks
>>>>
>>>
>>> It should work, in practice.
>>>
>>> However, one issue with this approach that I see is that you
>>> are running two instances of virtio-net on the host:
>>> qemu and vhost-net, doubling your security surface
>>> for guest to host attack.
>>>
>>> I don't exactly see why it matters that qemu doesn't manage
>>> the whole memory of a VM - vhost only needs to log
>>> memory writes that it performs.
>>>
>>>
>>> .
>>>
>> Thanks for your reply.
>>
>> In fact, I am not sure whether vhost_log can work for Xen live
>> migration or not. Yes, vhost_sync_dirty_bitmap works well on KVM, but
>> vhost_net has never run on Xen before, although the call sequence
>> vhost_dev_sync_region -> memory_region_set_dirty ->
>> xen_modified_memory exists.
>>
>> Have you considered the case of the vhost_migration_log code running
>> on Xen? If so, it seems much easier than falling the datapath back
>> from vhost_net to qemu for live migration.
>>
>
> I never looked at how Xen live migration works with QEMU so I don't
> know.
>
>
> .
>
Thanks.
I will try vhost_migration_log for live migration first; if there are
any further results, I will let you know.
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 7:04 ` Qin Chuanyu
2013-08-27 7:16 ` Michael S. Tsirkin
@ 2013-08-27 9:41 ` Wei Liu
1 sibling, 0 replies; 28+ messages in thread
From: Wei Liu @ 2013-08-27 9:41 UTC (permalink / raw)
To: Qin Chuanyu
Cc: Michael S. Tsirkin, jasowang, kvm, netdev, qianhuibin, wangfuhai,
wei.liu2
First of all, good job, Chuanyu. :-)
On Tue, Aug 27, 2013 at 03:04:54PM +0800, Qin Chuanyu wrote:
[...]
> >.
> >
> Thanks for your reply.
>
> In fact, I am not sure whether vhost_log can work for Xen live
> migration or not. Yes, vhost_sync_dirty_bitmap works well on KVM, but
> vhost_net has never run on Xen before, although the call sequence
> vhost_dev_sync_region -> memory_region_set_dirty ->
> xen_modified_memory exists.
>
For Xen-specific questions you might have better luck posting to
xen-devel@lists.xenproject.org.
And I think this project is interesting enough, and beneficial to the
Xen community as well. Could you CC xen-devel@ on future updates?
Thanks
Wei.
> Have you considered the case of the vhost_migration_log code running
> on Xen? If so, it seems much easier than falling the datapath back
> from vhost_net to qemu for live migration.
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-27 3:32 Is fallback vhost_net to qemu for live migrate available? Qin Chuanyu
2013-08-27 4:19 ` Michael S. Tsirkin
2013-08-29 16:08 ` Anthony Liguori
@ 2013-08-29 16:08 ` Anthony Liguori
2013-08-31 4:45 ` Qin Chuanyu
` (3 more replies)
2 siblings, 4 replies; 28+ messages in thread
From: Anthony Liguori @ 2013-08-29 16:08 UTC (permalink / raw)
To: Qin Chuanyu
Cc: Michael S. Tsirkin, jasowang, KVM list, netdev, qianhuibin, xen-devel
Hi Qin,
On Mon, Aug 26, 2013 at 10:32 PM, Qin Chuanyu <qinchuanyu@huawei.com> wrote:
> Hi all
>
> I am participating in a project which tries to port vhost_net to Xen.
Neat!
> By changing the memory copy and notify mechanisms, currently virtio-net
> with vhost_net can run on Xen with good performance.
I think the key in doing this would be to implement a proper
ioeventfd and irqfd interface in the driver domain kernel. Just
hacking vhost_net with Xen-specific knowledge would be pretty nasty
IMHO.
Did you modify the front end driver to do grant table mapping or is
this all being done by mapping the domain's memory?
> TCP receive throughput of
> a single vNIC went from 2.77 Gbps up to 6 Gbps. On the VM receive side,
> I replaced grant_copy with grant_map plus memcpy, which efficiently
> reduces the cost of the grant_table spinlock in dom0, so whole-server
> TCP throughput went from 5.33 Gbps up to 9.5 Gbps.
>
> Now I am considering live migration of vhost_net on Xen. vhost_net uses
> vhost_log for live migration on KVM, but QEMU on Xen does not manage the
> whole memory of the VM, so I am trying to fall the datapath back from
> vhost_net to QEMU while doing live migration, and switch it back from
> QEMU to vhost_net after the VM has migrated to the new server.
KVM and Xen represent memory in a very different way. KVM can only
track when guest mode code dirties memory. It relies on QEMU to track
when guest memory is dirtied by QEMU. Since vhost is running outside
of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
I don't think this is a problem with Xen though. I believe (although
could be wrong) that Xen is able to track when either the domain or
dom0 dirties memory.
So I think you can simply ignore the dirty logging with vhost and it
should Just Work.
>
> My question is:
> Why doesn't vhost_net do the same fallback operation for live
> migration on KVM, instead of using vhost_log to mark dirty pages?
> Is there any mechanism flaw in the idea of falling the datapath back
> from vhost_net to QEMU for live migration?
No, we don't have a mechanism to fallback to QEMU for the datapath.
It would be possible but I think it's a bad idea to mix and match the
two.
Regards,
Anthony Liguori
> Any questions about the details of vhost_net on Xen are welcome.
>
> Thanks
>
>
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-29 16:08 ` Anthony Liguori
@ 2013-08-31 4:45 ` Qin Chuanyu
2013-09-02 3:19 ` Jason Wang
` (3 more replies)
2013-08-31 4:45 ` Qin Chuanyu
` (2 subsequent siblings)
3 siblings, 4 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-08-31 4:45 UTC (permalink / raw)
To: Anthony Liguori
Cc: Michael S. Tsirkin, jasowang, KVM list, netdev, qianhuibin,
xen-devel, wangfuhai, likunyun, liuyongan, liuyingdong
On 2013/8/30 0:08, Anthony Liguori wrote:
> Hi Qin,
>> By changing the memory copy and notify mechanisms, currently virtio-net
>> with vhost_net can run on Xen with good performance.
>
> I think the key in doing this would be to implement a proper
> ioeventfd and irqfd interface in the driver domain kernel. Just
> hacking vhost_net with Xen-specific knowledge would be pretty nasty
> IMHO.
>
Yes, I added a kernel module which records the virtio-net PIO address
and MSI-X address, as the kvm module does. The guest wakes up the vhost
thread via a hook function I added in evtchn_interrupt.
> Did you modify the front end driver to do grant table mapping or is
> this all being done by mapping the domain's memory?
>
Nothing is changed in the front-end driver. Currently I use
alloc_vm_area to get address space and map the domain's memory, as
qemu does.
> KVM and Xen represent memory in a very different way. KVM can only
> track when guest mode code dirties memory. It relies on QEMU to track
> when guest memory is dirtied by QEMU. Since vhost is running outside
> of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
>
> I don't think this is a problem with Xen though. I believe (although
> could be wrong) that Xen is able to track when either the domain or
> dom0 dirties memory.
>
> So I think you can simply ignore the dirty logging with vhost and it
> should Just Work.
>
Thanks for your advice. I have tried it: without ping traffic the VM
migrates successfully, but if an skb is received during migration, domU
crashes. I guess that is because, although Xen tracks domU memory, it
can only track memory changed by domU itself; memory changed by dom0 is
out of its control.
>
> No, we don't have a mechanism to fallback to QEMU for the datapath.
> It would be possible but I think it's a bad idea to mix and match the
> two.
>
Next I will try falling the datapath back to qemu, for three reasons:
1: The memory-translation mechanism has been changed for vhost_net on
Xen, so some changes would be needed for vhost_log in the kernel.
2: I also mapped the IOREQ_PFN page (which is used for communication
between qemu and Xen) in the kernel notify module, so it would also
need to be marked dirty when tx/rx happen during the migration period.
3: Most important of all, Michael S. Tsirkin said that he hadn't
considered vhost_net migration on Xen, so some changes would be needed
in vhost_log for qemu as well.
Falling back to qemu seems much easier, doesn't it?
Regards
Qin chuanyu
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-31 4:45 ` Qin Chuanyu
@ 2013-09-02 3:19 ` Jason Wang
2013-09-02 3:19 ` Jason Wang
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Jason Wang @ 2013-09-02 3:19 UTC (permalink / raw)
To: Qin Chuanyu
Cc: Anthony Liguori, Michael S. Tsirkin, KVM list, netdev,
qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On 08/31/2013 12:45 PM, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
>> Hi Qin,
>
>>> By changing the memory copy and notify mechanisms, currently
>>> virtio-net with vhost_net can run on Xen with good performance.
>>
>> I think the key in doing this would be to implement a proper
>> ioeventfd and irqfd interface in the driver domain kernel. Just
>> hacking vhost_net with Xen-specific knowledge would be pretty nasty
>> IMHO.
>>
> Yes, I added a kernel module which records the virtio-net PIO address
> and MSI-X address, as the kvm module does. The guest wakes up the vhost
> thread via a hook function I added in evtchn_interrupt.
>
>> Did you modify the front end driver to do grant table mapping or is
>> this all being done by mapping the domain's memory?
>>
> Nothing is changed in the front-end driver. Currently I use
> alloc_vm_area to get address space and map the domain's memory, as
> qemu does.
>
>> KVM and Xen represent memory in a very different way. KVM can only
>> track when guest mode code dirties memory. It relies on QEMU to track
>> when guest memory is dirtied by QEMU. Since vhost is running outside
>> of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
>>
>> I don't think this is a problem with Xen though. I believe (although
>> could be wrong) that Xen is able to track when either the domain or
>> dom0 dirties memory.
>>
>> So I think you can simply ignore the dirty logging with vhost and it
>> should Just Work.
>>
> Thanks for your advice. I have tried it: without ping traffic the VM
> migrates successfully, but if an skb is received during migration,
> domU crashes. I guess that is because, although Xen tracks domU
> memory, it can only track memory changed by domU itself; memory
> changed by dom0 is out of its control.
>
>>
>> No, we don't have a mechanism to fallback to QEMU for the datapath.
>> It would be possible but I think it's a bad idea to mix and match the
>> two.
>>
> Next I will try falling the datapath back to qemu, for three reasons:
> 1: The memory-translation mechanism has been changed for vhost_net on
> Xen, so some changes would be needed for vhost_log in the kernel.
>
> 2: I also mapped the IOREQ_PFN page (which is used for communication
> between qemu and Xen) in the kernel notify module, so it would also
> need to be marked dirty when tx/rx happen during the migration period.
>
> 3: Most important of all, Michael S. Tsirkin said that he hadn't
> considered vhost_net migration on Xen, so some changes would be needed
> in vhost_log for qemu as well.
>
> Falling back to qemu seems much easier, doesn't it?
Maybe we can just stop vhost_net in pre_save() and enable it in
post_load()? Then there is no need to enable dirty logging in vhost_net.
>
>
> Regards
> Qin chuanyu
>
>
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-31 4:45 ` Qin Chuanyu
` (2 preceding siblings ...)
2013-09-02 7:57 ` Wei Liu
@ 2013-09-02 7:57 ` Wei Liu
2013-09-02 8:11 ` Michael S. Tsirkin
` (3 more replies)
3 siblings, 4 replies; 28+ messages in thread
From: Wei Liu @ 2013-09-02 7:57 UTC (permalink / raw)
To: Qin Chuanyu
Cc: Anthony Liguori, Michael S. Tsirkin, jasowang, KVM list, netdev,
qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong, wei.liu2
On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> On 2013/8/30 0:08, Anthony Liguori wrote:
> >Hi Qin,
>
> >>By changing the memory copy and notify mechanism, currently virtio-net with
> >>vhost_net could run on Xen with good performance.
> >
> >I think the key in doing this would be to implement a proper
> >ioeventfd and irqfd interface in the driver domain kernel. Just
> >hacking vhost_net with Xen specific knowledge would be pretty nasty
> >IMHO.
> >
> Yes, I added a kernel module which persists the virtio-net pio_addr and
> msix address as the kvm module did. The guest wakes up the vhost thread
> by a hook func added in evtchn_interrupt.
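The kick path described here is what ioeventfd/irqfd provide on KVM: the interrupt handler signals an eventfd that the vhost worker waits on. Below is a sketch of just the eventfd half, using the standard Linux eventfd(2) API; the evtchn_interrupt hook itself is Xen-specific and not shown.

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Guest/evtchn side: the hook in evtchn_interrupt would perform the
 * equivalent of this signal when the event channel fires. */
static void guest_kick(int kickfd)
{
    uint64_t one = 1;
    ssize_t r = write(kickfd, &one, sizeof(one));
    (void)r; /* a write of 1 only fails if the 64-bit counter overflows */
}

/* vhost worker side: a read returns the accumulated kick count and
 * resets the counter (non-semaphore mode). */
static uint64_t vhost_collect_kicks(int kickfd)
{
    uint64_t cnt = 0;
    if (read(kickfd, &cnt, sizeof(cnt)) != (ssize_t)sizeof(cnt))
        return 0;   /* EAGAIN with EFD_NONBLOCK: no kicks pending */
    return cnt;
}
```

In the real setup the worker would sleep on the fd (via poll or a wait queue) instead of reading it opportunistically.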
>
> >Did you modify the front end driver to do grant table mapping or is
> >this all being done by mapping the domain's memory?
> >
> There is nothing changed in the front end driver. Currently I use
> alloc_vm_area to get address space, and map the domain's memory as
> what qemu did.
>
You mean you're using xc_map_foreign_range and friends in the backend to
map guest memory? That's not very desirable as it violates Xen's
security model. It would not be too hard to pass grant references
instead of guest physical memory address IMHO.
Wei.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-02 7:57 ` Wei Liu
@ 2013-09-02 8:11 ` Michael S. Tsirkin
2013-09-02 8:11 ` Michael S. Tsirkin
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2013-09-02 8:11 UTC (permalink / raw)
To: Wei Liu
Cc: Qin Chuanyu, Anthony Liguori, jasowang, KVM list, netdev,
qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On Mon, Sep 02, 2013 at 08:57:22AM +0100, Wei Liu wrote:
> On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> > On 2013/8/30 0:08, Anthony Liguori wrote:
> > >Hi Qin,
> >
> > >>By changing the memory copy and notify mechanism, currently virtio-net with
> > >>vhost_net could run on Xen with good performance.
> > >
> > >I think the key in doing this would be to implement a proper
> > >ioeventfd and irqfd interface in the driver domain kernel. Just
> > >hacking vhost_net with Xen specific knowledge would be pretty nasty
> > >IMHO.
> > >
> > Yes, I added a kernel module which persists the virtio-net pio_addr and
> > msix address as the kvm module did. The guest wakes up the vhost thread
> > by a hook func added in evtchn_interrupt.
> >
> > >Did you modify the front end driver to do grant table mapping or is
> > >this all being done by mapping the domain's memory?
> > >
> > There is nothing changed in the front end driver. Currently I use
> > alloc_vm_area to get address space, and map the domain's memory as
> > what qemu did.
> >
>
> You mean you're using xc_map_foreign_range and friends in the backend to
> map guest memory? That's not very desirable as it violates Xen's
> security model. It would not be too hard to pass grant references
> instead of guest physical memory address IMHO.
>
> Wei.
It's a start and it should make it fast and work with existing
infrastructure in the host, though.
--
MST
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-02 7:57 ` Wei Liu
2013-09-02 8:11 ` Michael S. Tsirkin
2013-09-02 8:11 ` Michael S. Tsirkin
@ 2013-09-03 1:28 ` Qin Chuanyu
2013-09-03 8:40 ` Wei Liu
2013-09-03 8:40 ` Wei Liu
2013-09-03 1:28 ` Qin Chuanyu
3 siblings, 2 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-09-03 1:28 UTC (permalink / raw)
To: Wei Liu
Cc: Anthony Liguori, Michael S. Tsirkin, jasowang, KVM list, netdev,
qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On 2013/9/2 15:57, Wei Liu wrote:
> On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
>> On 2013/8/30 0:08, Anthony Liguori wrote:
>>> Hi Qin,
>>
>>>> By changing the memory copy and notify mechanism, currently virtio-net with
>>>> vhost_net could run on Xen with good performance.
>>>
>>> I think the key in doing this would be to implement a proper
>>> ioeventfd and irqfd interface in the driver domain kernel. Just
>>> hacking vhost_net with Xen specific knowledge would be pretty nasty
>>> IMHO.
>>>
>> Yes, I added a kernel module which persists the virtio-net pio_addr and
>> msix address as the kvm module did. The guest wakes up the vhost thread
>> by a hook func added in evtchn_interrupt.
>>
>>> Did you modify the front end driver to do grant table mapping or is
>>> this all being done by mapping the domain's memory?
>>>
>> There is nothing changed in the front end driver. Currently I use
>> alloc_vm_area to get address space, and map the domain's memory as
>> what qemu did.
>>
>
> You mean you're using xc_map_foreign_range and friends in the backend to
> map guest memory? That's not very desirable as it violates Xen's
> security model. It would not be too hard to pass grant references
> instead of guest physical memory address IMHO.
>
In fact, I did what virtio-net has done in Qemu. I think security
is a non-issue because Dom0 is under control.
The host can access guest memory much more easily in KVM than in Xen,
but I haven't heard anyone say that KVM is insecure.
Regards
Qin chuanyu
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-03 1:28 ` Qin Chuanyu
2013-09-03 8:40 ` Wei Liu
@ 2013-09-03 8:40 ` Wei Liu
2013-09-03 8:55 ` Michael S. Tsirkin
2013-09-03 8:55 ` Michael S. Tsirkin
1 sibling, 2 replies; 28+ messages in thread
From: Wei Liu @ 2013-09-03 8:40 UTC (permalink / raw)
To: Qin Chuanyu
Cc: Wei Liu, Anthony Liguori, Michael S. Tsirkin, jasowang, KVM list,
netdev, qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On Tue, Sep 03, 2013 at 09:28:11AM +0800, Qin Chuanyu wrote:
> On 2013/9/2 15:57, Wei Liu wrote:
> >On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> >>On 2013/8/30 0:08, Anthony Liguori wrote:
> >>>Hi Qin,
> >>
> >>>>By changing the memory copy and notify mechanism, currently virtio-net with
> >>>>vhost_net could run on Xen with good performance.
> >>>
> >>>I think the key in doing this would be to implement a proper
> >>>ioeventfd and irqfd interface in the driver domain kernel. Just
> >>>hacking vhost_net with Xen specific knowledge would be pretty nasty
> >>>IMHO.
> >>>
> >>Yes, I added a kernel module which persists the virtio-net pio_addr and
> >>msix address as the kvm module did. The guest wakes up the vhost thread
> >>by a hook func added in evtchn_interrupt.
> >>
> >>>Did you modify the front end driver to do grant table mapping or is
> >>>this all being done by mapping the domain's memory?
> >>>
> >>There is nothing changed in the front end driver. Currently I use
> >>alloc_vm_area to get address space, and map the domain's memory as
> >>what qemu did.
> >>
> >
> >You mean you're using xc_map_foreign_range and friends in the backend to
> >map guest memory? That's not very desirable as it violates Xen's
> >security model. It would not be too hard to pass grant references
> >instead of guest physical memory address IMHO.
> >
> In fact, I did what virtio-net has done in Qemu. I think security
> is a non-issue because Dom0 is under control.
>
Consider that you might have driver domains. Not every domain is under
control or trusted. Also consider that security models like XSM can be
used to audit operations to enhance security, so your foreign mapping
approach might not always work.
In the short term foreign mapping can save you some time implementing the
prototype. In the long term using the grant table is the proper way to go,
and IMHO the benefit outweighs the cost.
Wei.
> The host can access guest memory much more easily in KVM than in Xen,
> but I haven't heard anyone say that KVM is insecure.
>
> Regards
> Qin chuanyu
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-03 8:40 ` Wei Liu
@ 2013-09-03 8:55 ` Michael S. Tsirkin
2013-09-03 9:15 ` Wei Liu
` (3 more replies)
2013-09-03 8:55 ` Michael S. Tsirkin
1 sibling, 4 replies; 28+ messages in thread
From: Michael S. Tsirkin @ 2013-09-03 8:55 UTC (permalink / raw)
To: Wei Liu
Cc: Qin Chuanyu, Anthony Liguori, jasowang, KVM list, netdev,
qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On Tue, Sep 03, 2013 at 09:40:48AM +0100, Wei Liu wrote:
> On Tue, Sep 03, 2013 at 09:28:11AM +0800, Qin Chuanyu wrote:
> > On 2013/9/2 15:57, Wei Liu wrote:
> > >On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> > >>On 2013/8/30 0:08, Anthony Liguori wrote:
> > >>>Hi Qin,
> > >>
> > >>>>By changing the memory copy and notify mechanism, currently virtio-net with
> > >>>>vhost_net could run on Xen with good performance.
> > >>>
> > >>>I think the key in doing this would be to implement a proper
> > >>>ioeventfd and irqfd interface in the driver domain kernel. Just
> > >>>hacking vhost_net with Xen specific knowledge would be pretty nasty
> > >>>IMHO.
> > >>>
> > >>Yes, I added a kernel module which persists the virtio-net pio_addr and
> > >>msix address as the kvm module did. The guest wakes up the vhost thread
> > >>by a hook func added in evtchn_interrupt.
> > >>
> > >>>Did you modify the front end driver to do grant table mapping or is
> > >>>this all being done by mapping the domain's memory?
> > >>>
> > >>There is nothing changed in the front end driver. Currently I use
> > >>alloc_vm_area to get address space, and map the domain's memory as
> > >>what qemu did.
> > >>
> > >
> > >You mean you're using xc_map_foreign_range and friends in the backend to
> > >map guest memory? That's not very desirable as it violates Xen's
> > >security model. It would not be too hard to pass grant references
> > >instead of guest physical memory address IMHO.
> > >
> > In fact, I did what virtio-net has done in Qemu. I think security
> > is a non-issue because Dom0 is under control.
> >
>
> Consider that you might have driver domains. Not every domain is under
> control or trusted.
I don't see anything that will prevent using driver domains here.
> Also consider that security models like XSM can be
> used to audit operations to enhance security, so your foreign mapping
> approach might not always work.
It could be nice to have as an option, sure.
XSM is disabled by default though so I don't think lack of support for
that makes it a prototype.
> In the short term foreign mapping can save you some time implementing the
> prototype.
> In the long term using the grant table is the proper way to go. And
> IMHO the benefit outweighs the cost.
>
> Wei.
I'm guessing direct access could be quite a bit faster.
But someone would have to implement your idea in order to
do a cost/benefit analysis.
> > The host can access guest memory much more easily in KVM than in Xen,
> > but I haven't heard anyone say that KVM is insecure.
> >
> > Regards
> > Qin chuanyu
> >
> >
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-03 8:55 ` Michael S. Tsirkin
2013-09-03 9:15 ` Wei Liu
@ 2013-09-03 9:15 ` Wei Liu
2013-09-05 13:33 ` Stefano Stabellini
2013-09-05 13:33 ` [Xen-devel] " Stefano Stabellini
3 siblings, 0 replies; 28+ messages in thread
From: Wei Liu @ 2013-09-03 9:15 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Wei Liu, Qin Chuanyu, Anthony Liguori, jasowang, KVM list,
netdev, qianhuibin, xen-devel, wangfuhai, likunyun, liuyongan,
liuyingdong
On Tue, Sep 03, 2013 at 11:55:56AM +0300, Michael S. Tsirkin wrote:
> On Tue, Sep 03, 2013 at 09:40:48AM +0100, Wei Liu wrote:
> > On Tue, Sep 03, 2013 at 09:28:11AM +0800, Qin Chuanyu wrote:
> > > On 2013/9/2 15:57, Wei Liu wrote:
> > > >On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> > > >>On 2013/8/30 0:08, Anthony Liguori wrote:
> > > >>>Hi Qin,
> > > >>
> > > >>>>By changing the memory copy and notify mechanism, currently virtio-net with
> > > >>>>vhost_net could run on Xen with good performance.
> > > >>>
> > > >>>I think the key in doing this would be to implement a proper
> > > >>>ioeventfd and irqfd interface in the driver domain kernel. Just
> > > >>>hacking vhost_net with Xen specific knowledge would be pretty nasty
> > > >>>IMHO.
> > > >>>
> > > >>Yes, I added a kernel module which persists the virtio-net pio_addr and
> > > >>msix address as the kvm module did. The guest wakes up the vhost thread
> > > >>by a hook func added in evtchn_interrupt.
> > > >>
> > > >>>Did you modify the front end driver to do grant table mapping or is
> > > >>>this all being done by mapping the domain's memory?
> > > >>>
> > > >>There is nothing changed in the front end driver. Currently I use
> > > >>alloc_vm_area to get address space, and map the domain's memory as
> > > >>what qemu did.
> > > >>
> > > >
> > > >You mean you're using xc_map_foreign_range and friends in the backend to
> > > >map guest memory? That's not very desirable as it violates Xen's
> > > >security model. It would not be too hard to pass grant references
> > > >instead of guest physical memory address IMHO.
> > > >
> > > In fact, I did what virtio-net has done in Qemu. I think security
> > > is a non-issue because Dom0 is under control.
> > >
> >
> > Consider that you might have driver domains. Not every domain is under
> > control or trusted.
>
> I don't see anything that will prevent using driver domains here.
>
There is nothing technically stopping driver domains to work. It's about
the boundary of trust.
> > Also consider that security models like XSM can be
> > used to audit operations to enhance security, so your foreign mapping
> > approach might not always work.
>
> It could be nice to have as an option, sure.
> XSM is disabled by default though so I don't think lack of support for
> that makes it a prototype.
>
XSM is there already; once in a while someone might just want to give it
a shot despite it not being well supported. Then all of a sudden they
find out that something that worked before (the foreign mapping backend)
no longer works. That would just lead to confusion. It's a cycle of bad
things: users try something lacking support -> it doesn't work -> users
won't use it anymore -> less support because nobody seems to be
interested.
> > In the short term foreign mapping can save you some time implementing the
> > prototype.
> > In the long term using the grant table is the proper way to go. And
> > IMHO the benefit outweighs the cost.
> >
> > Wei.
>
> I'm guessing direct access could be quite a bit faster.
> But someone would have to implement your idea in order to
> do a cost/benefit analysis.
>
I'm not sure direct access will be faster. Either way the hypervisor is
involved to update the guest's page table.
Anyway, my point is, switching to the grant table is not as hard as you
think - pass along grant refs instead of guest pseudo addresses in the
ring and use the grant table API instead of the foreign mapping API in
the backend.
Wei.
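A toy model of the switch Wei suggests, purely for illustration: `gnttab_grant_access` and `backend_map_gref` are invented names standing in for the real grant-table interfaces (the actual ABI lives in Xen's public grant_table.h). The point is that the backend can only map pages the frontend explicitly granted, which is the security property foreign mapping lacks.

```c
#include <stddef.h>

#define NR_GRANTS 8

/* Invented stand-in for a grant-table entry, not the Xen ABI. */
struct grant_entry {
    int   in_use;   /* frontend granted this page to the backend */
    void *page;     /* the frontend page backing the grant */
};

static struct grant_entry grant_table[NR_GRANTS];

/* Frontend: instead of putting a guest physical address in the ring,
 * grant the page and put the grant reference in the descriptor. */
static int gnttab_grant_access(int gref, void *page)
{
    if (gref < 0 || gref >= NR_GRANTS || grant_table[gref].in_use)
        return -1;
    grant_table[gref].in_use = 1;
    grant_table[gref].page = page;
    return 0;
}

/* Backend: only explicitly granted pages can be mapped; anything else
 * is refused, unlike an xc_map_foreign_range-style mapping. */
static void *backend_map_gref(int gref)
{
    if (gref < 0 || gref >= NR_GRANTS || !grant_table[gref].in_use)
        return NULL;
    return grant_table[gref].page;
}
```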
> > > The host can access guest memory much more easily in KVM than in Xen,
> > > but I haven't heard anyone say that KVM is insecure.
> > >
> > > Regards
> > > Qin chuanyu
> > >
> > >
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-09-03 8:55 ` Michael S. Tsirkin
@ 2013-09-03 9:15 ` Wei Liu
2013-09-03 9:15 ` Wei Liu
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Wei Liu @ 2013-09-03 9:15 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Wei Liu, qianhuibin, KVM list, likunyun, netdev, jasowang,
xen-devel, liuyongan, liuyingdong, wangfuhai, Anthony Liguori,
Qin Chuanyu
On Tue, Sep 03, 2013 at 11:55:56AM +0300, Michael S. Tsirkin wrote:
> On Tue, Sep 03, 2013 at 09:40:48AM +0100, Wei Liu wrote:
> > On Tue, Sep 03, 2013 at 09:28:11AM +0800, Qin Chuanyu wrote:
> > > On 2013/9/2 15:57, Wei Liu wrote:
> > > >On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> > > >>On 2013/8/30 0:08, Anthony Liguori wrote:
> > > >>>Hi Qin,
> > > >>
> > > >>>>By change the memory copy and notify mechanism ,currently virtio-net with
> > > >>>>vhost_net could run on Xen with good performance。
> > > >>>
> > > >>>I think the key in doing this would be to implement a property
> > > >>>ioeventfd and irqfd interface in the driver domain kernel. Just
> > > >>>hacking vhost_net with Xen specific knowledge would be pretty nasty
> > > >>>IMHO.
> > > >>>
> > > >>Yes, I add a kernel module which persist virtio-net pio_addr and
> > > >>msix address as what kvm module did. Guest wake up vhost thread by
> > > >>adding a hook func in evtchn_interrupt.
> > > >>
> > > >>>Did you modify the front end driver to do grant table mapping or is
> > > >>>this all being done by mapping the domain's memory?
> > > >>>
> > > >>There is nothing changed in front end driver. Currently I use
> > > >>alloc_vm_area to get address space, and map the domain's memory as
> > > >>what what qemu did.
> > > >>
> > > >
> > > >You mean you're using xc_map_foreign_range and friends in the backend to
> > > >map guest memory? That's not very desirable as it violates Xen's
> > > >security model. It would not be too hard to pass grant references
> > > >instead of guest physical memory address IMHO.
> > > >
> > > In fact, I did what virtio-net have done in Qemu. I think security
> > > is a pseudo question because Dom0 is under control.
> > >
> >
> > Consider that you might have driver domains. Not every domain is under
> > control or trusted.
>
> I don't see anything that will prevent using driver domains here.
>
There is nothing technically stopping driver domains to work. It's about
the boundary of trust.
> > Also consider that security model like XSM can be
> > used to audit operations to enhance security so your foreign mapping
> > approach might not always work.
>
> It could be nice to have as an option, sure.
> XSM is disabled by default though so I don't think lack of support for
> that makes it a prototype.
>
XSM is there already, someone in a while might just want to give it a
shot despite it's not well supported. Then all of a sudden they find out
something works before (the foreign mapping backend) would not work
anymore. That would just lead to confusion. It's like a cycle of bad
things. Users try something lack of support -> it doesn't work -> users
won't use it anymore -> less support because nobody seems to be
interested.
> > In short term foreign mapping can save you some time implementing the
> > prototype.
> > In long term using grant table is the proper way to go. And
> > IMHO the benifit outweights the cost.
> >
> > Wei.
>
> I'm guessing direct access could be quite a bit faster.
> But someone would have to implement your idea in order to
> do a cost/benefit analysis.
>
I'm not sure direct access will be faster. Either way the hypervisor is
involved to update the guest's page tables.
Anyway, my point is, switching to the grant table is not as hard as you
think: pass along grant refs instead of guest pseudo-physical addresses
in the ring, and use the grant table API instead of the foreign-mapping
API in the backend.
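The distinction can be sketched with a toy model (illustrative names
only, not the real Xen API or data structures): with foreign mapping the
backend can dereference any guest pseudo-physical address, while with
grant refs it can only touch pages the guest explicitly granted.

```python
# Toy model of foreign mapping vs. grant refs (names are illustrative,
# not the Xen API).  The backend consumes entries from a shared ring;
# the question is what kind of handle those entries carry.

class Guest:
    def __init__(self):
        # pseudo-physical page number -> page contents
        self.pages = {0: b"secret", 1: b"packet-1", 2: b"packet-2"}
        self.grants = {}          # grant ref -> page number
        self.next_ref = 100

    def grant(self, pfn):
        """Guest explicitly grants one page and gets a ref for the ring."""
        ref = self.next_ref
        self.next_ref += 1
        self.grants[ref] = pfn
        return ref

def backend_foreign_map(guest, pfn):
    # xc_map_foreign_range-style: privileged, any pfn resolves.
    return guest.pages[pfn]

def backend_grant_map(guest, ref):
    # grant-table-style: only refs the guest handed out resolve.
    if ref not in guest.grants:
        raise PermissionError("page was not granted")
    return guest.pages[guest.grants[ref]]

g = Guest()
ref = g.grant(1)                                  # ref goes into the ring
assert backend_grant_map(g, ref) == b"packet-1"   # granted page: OK
assert backend_foreign_map(g, 0) == b"secret"     # foreign map sees everything
try:
    backend_grant_map(g, 999)                     # backend cannot invent refs
except PermissionError:
    pass
```

The ring-format change Wei describes is essentially the difference
between the two backend functions above: same copy path, different
handle and different authority required to resolve it.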
Wei.
> > > The host can access guest memory much more easily in KVM than in
> > > Xen, but I haven't heard anyone say KVM is insecure.
> > >
> > > Regards
> > > Qin chuanyu
> > >
> > >
> > > --
> > > To unsubscribe from this list: send the line "unsubscribe netdev" in
> > > the body of a message to majordomo@vger.kernel.org
> > > More majordomo info at http://vger.kernel.org/majordomo-info.html
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [Xen-devel] Is fallback vhost_net to qemu for live migrate available?
2013-09-03 8:55 ` Michael S. Tsirkin
` (2 preceding siblings ...)
2013-09-05 13:33 ` Stefano Stabellini
@ 2013-09-05 13:33 ` Stefano Stabellini
3 siblings, 0 replies; 28+ messages in thread
From: Stefano Stabellini @ 2013-09-05 13:33 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Wei Liu, qianhuibin, KVM list, likunyun, netdev, jasowang,
xen-devel, liuyongan, liuyingdong, wangfuhai, Anthony Liguori,
Qin Chuanyu
[-- Attachment #1: Type: text/plain, Size: 2813 bytes --]
On Tue, 3 Sep 2013, Michael S. Tsirkin wrote:
> On Tue, Sep 03, 2013 at 09:40:48AM +0100, Wei Liu wrote:
> > On Tue, Sep 03, 2013 at 09:28:11AM +0800, Qin Chuanyu wrote:
> > > On 2013/9/2 15:57, Wei Liu wrote:
> > > >On Sat, Aug 31, 2013 at 12:45:11PM +0800, Qin Chuanyu wrote:
> > > >>On 2013/8/30 0:08, Anthony Liguori wrote:
> > > >>>Hi Qin,
> > > >>
> > > >>>>By changing the memory copy and notify mechanisms, virtio-net with
> > > >>>>vhost_net can currently run on Xen with good performance.
> > > >>>
> > > >>>I think the key in doing this would be to implement a proper
> > > >>>ioeventfd and irqfd interface in the driver domain kernel. Just
> > > >>>hacking vhost_net with Xen-specific knowledge would be pretty nasty
> > > >>>IMHO.
> > > >>>
> > > >>Yes, I added a kernel module which persists the virtio-net PIO
> > > >>address and MSI-X address, as the kvm module does. The guest wakes
> > > >>up the vhost thread via a hook function added in evtchn_interrupt.
> > > >>
> > > >>>Did you modify the front end driver to do grant table mapping or is
> > > >>>this all being done by mapping the domain's memory?
> > > >>>
> > > >>There is nothing changed in the front-end driver. Currently I use
> > > >>alloc_vm_area to get address space, and map the domain's memory as
> > > >>qemu does.
> > > >>
> > > >
> > > >You mean you're using xc_map_foreign_range and friends in the backend to
> > > >map guest memory? That's not very desirable as it violates Xen's
> > > >security model. It would not be too hard to pass grant references
> > > >instead of guest physical memory address IMHO.
> > > >
> > > In fact, I did what virtio-net has done in QEMU. I think security
> > > is a non-issue because Dom0 is under control.
Right, but we are trying to move the backends out of Dom0, for
scalability and security.
Setting up a network driver domain is pretty easy and should work out of
the box with Xen 4.3.
That said, I agree that using xc_map_foreign_range is a good way to start.
> > Consider that you might have driver domains. Not every domain is under
> > control or trusted.
>
> I don't see anything that will prevent using driver domains here.
Driver domains are not privileged, therefore cannot map random guest
pages (unless they have been granted by the guest via the grant table).
xc_map_foreign_range can't work from a driver domain.
> > Also consider that security model like XSM can be
> > used to audit operations to enhance security so your foreign mapping
> > approach might not always work.
>
> It could be nice to have as an option, sure.
> XSM is disabled by default though so I don't think lack of support for
> that makes it a prototype.
There are some security-aware Xen-based products on the market today
that use XSM.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: Is fallback vhost_net to qemu for live migrate available?
2013-08-29 16:08 ` Anthony Liguori
2013-08-31 4:45 ` Qin Chuanyu
2013-08-31 4:45 ` Qin Chuanyu
@ 2013-10-14 8:19 ` Qin Chuanyu
2013-10-14 8:19 ` Qin Chuanyu
3 siblings, 0 replies; 28+ messages in thread
From: Qin Chuanyu @ 2013-10-14 8:19 UTC (permalink / raw)
To: Anthony Liguori, Michael S. Tsirkin, jasowang, Wei Liu
Cc: KVM list, netdev, qianhuibin, xen-devel, wangfuhai, liuyongan
On 2013/8/30 0:08, Anthony Liguori wrote:
> Hi Qin,
>
> KVM and Xen represent memory in a very different way. KVM can only
> track when guest mode code dirties memory. It relies on QEMU to track
> when guest memory is dirtied by QEMU. Since vhost is running outside
> of QEMU, vhost also needs to tell QEMU when it has dirtied memory.
>
> I don't think this is a problem with Xen though. I believe (although
> could be wrong) that Xen is able to track when either the domain or
> dom0 dirties memory.
>
> So I think you can simply ignore the dirty logging with vhost and it
> should Just Work.
>
Xen tracks the guest's memory during live migration as KVM does (I guess
it relies on EPT), but it can't mark Dom0's dirty memory automatically.
I did the same dirty logging in vhost_net, but replaced KVM's API with
Xen's dirty-memory interface; then live migration works.
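A minimal sketch of the dirty-logging idea (conceptual, not the kernel
code; all names are illustrative): whenever the backend writes into
guest memory, it records the touched page in a log, and the migration
loop drains that log to know what to re-copy. Swapping KVM's log for
Xen's dirty-memory interface only changes where the mark lands.

```python
# Conceptual model of vhost-style dirty logging (illustrative names,
# not the vhost or Xen implementation).

PAGE = 4096

def gpa_to_page(gpa):
    return gpa // PAGE

class DirtyLog:
    """One dirty bit per guest page; the migration loop drains it."""
    def __init__(self, npages):
        self.bits = [False] * npages

    def mark(self, page):
        self.bits[page] = True

    def drain(self):
        # Return the dirty set and clear it, as a pre-copy round would.
        dirty = [i for i, b in enumerate(self.bits) if b]
        self.bits = [False] * len(self.bits)
        return dirty

def vhost_write(memory, log, gpa, data):
    # Sketch assumes the write stays within one page.
    memory[gpa:gpa + len(data)] = data
    log.mark(gpa_to_page(gpa))        # log every page the backend dirties

mem = bytearray(4 * PAGE)
log = DirtyLog(4)
vhost_write(mem, log, 2 * PAGE + 10, b"rx")
assert log.drain() == [2]             # migration re-copies page 2
assert log.drain() == []              # log is cleared after each round
```

In the real code the log write is a bitmap update (KVM) or a hypercall
into Xen's dirty-memory tracking, but the data flow is the same.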
--------------------------------------------------------------------
There is a bug in Xen live migration when using a qemu-emulated NIC
(such as virtio_net).
Current flow:
xc_save -> dirty memory copy -> suspend -> stop_vcpu -> last memory copy
stop_qemu -> stop_virtio_net
save_qemu -> save_virtio_net
This means virtio_net can dirty memory after the last memory copy.
I have tested both vhost-on-qemu and virtio_net in qemu; both show the
same problem: the vring index update can be lost, leaving the network
unreachable. My solution is:
xc_save -> dirty memory copy -> suspend -> stop_vcpu -> stop_qemu
-> stop_virtio_net -> last memory copy
save_qemu -> save_virtio_net
Xen's netfront and netback disconnect and flush the IO ring on live
migration, so they are fine.
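The ordering bug can be demonstrated with a tiny simulation (purely
illustrative; `vring_index` stands in for any state the device writes
into guest memory): if the device is still running after the last copy,
its final update never reaches the destination.

```python
# Simulation of the migration ordering bug (conceptual, not Xen code).
# stop_device_first=False models the buggy flow where the device is
# stopped only after the last memory copy.

def migrate(stop_device_first):
    src = {"vring_index": 0}              # state the device keeps dirtying
    device_stopped = False

    # (iterative pre-copy rounds elided)
    if stop_device_first:
        device_stopped = True             # fixed flow: quiesce device first
    dst = dict(src)                       # last memory copy
    if not device_stopped:
        src["vring_index"] += 1           # device updates the ring *after*
        device_stopped = True             # the last copy -- update is lost
    return src["vring_index"] == dst["vring_index"]

assert migrate(stop_device_first=True) is True    # fixed flow: consistent
assert migrate(stop_device_first=False) is False  # buggy flow: dst is stale
```

This is why moving stop_qemu/stop_virtio_net ahead of the last memory
copy closes the window.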
^ permalink raw reply [flat|nested] 28+ messages in thread
end of thread, other threads:[~2013-10-14 8:21 UTC | newest]
Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-27 3:32 Is fallback vhost_net to qemu for live migrate available? Qin Chuanyu
2013-08-27 4:19 ` Michael S. Tsirkin
2013-08-27 7:04 ` Qin Chuanyu
2013-08-27 7:16 ` Michael S. Tsirkin
2013-08-27 7:22 ` Qin Chuanyu
2013-08-27 9:41 ` Wei Liu
2013-08-29 16:08 ` Anthony Liguori
2013-08-29 16:08 ` Anthony Liguori
2013-08-31 4:45 ` Qin Chuanyu
2013-09-02 3:19 ` Jason Wang
2013-09-02 3:19 ` Jason Wang
2013-09-02 7:57 ` Wei Liu
2013-09-02 7:57 ` Wei Liu
2013-09-02 8:11 ` Michael S. Tsirkin
2013-09-02 8:11 ` Michael S. Tsirkin
2013-09-03 1:28 ` Qin Chuanyu
2013-09-03 8:40 ` Wei Liu
2013-09-03 8:40 ` Wei Liu
2013-09-03 8:55 ` Michael S. Tsirkin
2013-09-03 9:15 ` Wei Liu
2013-09-03 9:15 ` Wei Liu
2013-09-05 13:33 ` Stefano Stabellini
2013-09-05 13:33 ` [Xen-devel] " Stefano Stabellini
2013-09-03 8:55 ` Michael S. Tsirkin
2013-09-03 1:28 ` Qin Chuanyu
2013-08-31 4:45 ` Qin Chuanyu
2013-10-14 8:19 ` Qin Chuanyu
2013-10-14 8:19 ` Qin Chuanyu