* Using virtio for inter-VM communication
From: Henning Schild @ 2014-06-10 16:48 UTC
  To: qemu-devel, virtualization, kvm; +Cc: Henning Schild

Hi,

I am working on the jailhouse[1] project and am currently looking at
inter-VM communication. We want to connect guests directly with virtual
consoles based on shared memory. The code complexity in the hypervisor
should be minimal; it should just make the shared memory discoverable
and provide a signaling mechanism.

We would like to reuse virtio so that Linux guests will eventually just
work without having to patch them. Having looked at virtio, it seems to
be focused on host<->guest communication and does not consider direct
guest<->guest communication. That is, the queues use guest-physical
addressing, which is only meaningful for the guest and the host.
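
For illustration, this is the legacy ring descriptor (simplified from
Linux's include/uapi/linux/virtio_ring.h); its addr field is what
carries the guest-physical address:

    /* Simplified from include/uapi/linux/virtio_ring.h. Every buffer
     * a guest posts is described by one of these; 'addr' is a
     * guest-physical address, which a second guest has no way to
     * translate or map on its own. */
    struct vring_desc {
            __u64 addr;   /* buffer address (guest-physical) */
            __u32 len;    /* buffer length in bytes */
            __u16 flags;  /* VRING_DESC_F_NEXT / _WRITE / _INDIRECT */
            __u16 next;   /* index of next descriptor in a chain */
    };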

In a first prototype I implemented an ivshmem[2] device for the
hypervisor. That way we can share memory between virtual machines.
Ivshmem is nice and simple but does not seem to be used anymore. And it
does not define higher-level devices, like a console.
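
For reference, the ivshmem device interface is tiny (register names
paraphrased from QEMU's docs/specs/ivshmem_device_spec.txt): BAR0 holds
four 32-bit registers and BAR2 maps the shared memory itself.

    /* BAR0 register layout, paraphrased from QEMU's
     * docs/specs/ivshmem_device_spec.txt; BAR2 maps the shared
     * memory region. Peers signal each other by writing a peer ID
     * and MSI vector number into the doorbell. */
    enum ivshmem_bar0_regs {
            IVSHMEM_INTR_MASK   = 0x00, /* interrupt mask */
            IVSHMEM_INTR_STATUS = 0x04, /* interrupt status */
            IVSHMEM_IV_POSITION = 0x08, /* our own peer ID (read-only) */
            IVSHMEM_DOORBELL    = 0x0c, /* write (peer_id << 16) | vector */
    };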

At this point I could:
- define a console on top of ivshmem
- see how I can get a virtio console to work between guests on shared
memory

Is anyone already using something like that? I guess zero-copy virtio
devices in Xen would be a similar case. I read a suggestion from May
2010 to introduce a virtio feature bit for shared memory
(VIRTIO_F_RING_SHMEM_ADDR), but that did not make it into the virtio
spec.

regards,
Henning

[1] jailhouse
https://github.com/siemens/jailhouse

[2] ivshmem
https://gitorious.org/nahanni

* Re: Using virtio for inter-VM communication
From: Vincent JARDIN @ 2014-06-10 22:15 UTC
  To: Henning Schild; +Cc: qemu-devel, virtualization, kvm

On 10/06/2014 18:48, Henning Schild wrote:
 > Hi,
 > In a first prototype I implemented an ivshmem[2] device for the
 > hypervisor. That way we can share memory between virtual machines.
 > Ivshmem is nice and simple but does not seem to be used anymore.
 > And it
 > does not define higher-level devices, like a console.

FYI, ivshmem is used here:
   http://dpdk.org/browse/memnic/tree/

http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449

There are a few other references too, if needed.

Best regards,
   Vincent


* Re: Using virtio for inter-VM communication
From: Rusty Russell @ 2014-06-12  2:27 UTC
  To: Henning Schild, qemu-devel, virtualization, kvm; +Cc: Henning Schild

Henning Schild <henning.schild@siemens.com> writes:
> Hi,
>
> I am working on the jailhouse[1] project and am currently looking at
> inter-VM communication. We want to connect guests directly with virtual
> consoles based on shared memory. The code complexity in the hypervisor
> should be minimal; it should just make the shared memory discoverable
> and provide a signaling mechanism.

Hi Henning,

        The virtio assumption was that the host can see all of guest
memory.  This simplifies things significantly, and makes it efficient.

If you don't have this, *someone* needs to do a copy.  Usually the guest
OS does a bounce buffer into your shared region.  Goodbye performance.
Or you can play remapping tricks.  Goodbye performance again.
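
Concretely, the bounce path looks something like this (a sketch only;
shm_pool and shm_alloc are hypothetical names for an allocator over the
shared window):

    /* Sketch: to make a buffer visible to the peer VM, the guest
     * must first copy it into the shared window -- one extra memcpy
     * per buffer, which is the performance cost described above. */
    void *bounce_tx(struct shm_pool *pool, const void *buf, size_t len)
    {
            void *slot = shm_alloc(pool, len);  /* hypothetical */
            if (slot)
                    memcpy(slot, buf, len);     /* the extra copy */
            return slot;
    }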

My preferred model is to have a trusted helper (i.e. the host) which
understands how to copy between virtio rings.  The backend guest (to
steal Xen vocab) R/O-maps the descriptor table, avail ring and used ring
in the guest.  It then asks the trusted helper to do various operations
(copy into a writable descriptor, copy out of a readable descriptor,
mark used).  The virtio ring itself acts as a grant table.
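
A sketch of what that helper interface could look like (hypothetical
names throughout; as noted below, this was never actually implemented):

    /* Hypothetical hypercall API for the trusted helper. The backend
     * guest holds only a read-only view of the ring, so all data
     * movement and the used-ring update go through the helper, which
     * validates each descriptor against the frontend's memory. */
    int helper_copy_to_desc(u32 ring_id, u16 desc_idx,
                            const void *src, u32 len);
    int helper_copy_from_desc(u32 ring_id, u16 desc_idx,
                              void *dst, u32 len);
    int helper_mark_used(u32 ring_id, u16 desc_idx, u32 written_len);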

Note: that helper mechanism is completely protocol agnostic.  It was
also explicitly designed into the virtio mechanism (with its 4k
boundaries for data structures and its 'len' field to indicate how much
was written into the descriptor). 

It was also never implemented, and remains a thought experiment.
However, implementing it in lguest should be fairly easy.

Cheers,
Rusty.


* Re: Using virtio for inter-VM communication
From: Jan Kiszka @ 2014-06-12  5:32 UTC
  To: Rusty Russell, Henning Schild, qemu-devel, virtualization, kvm

On 2014-06-12 04:27, Rusty Russell wrote:
> Henning Schild <henning.schild@siemens.com> writes:
>> Hi,
>>
>> I am working on the jailhouse[1] project and am currently looking at
>> inter-VM communication. We want to connect guests directly with virtual
>> consoles based on shared memory. The code complexity in the hypervisor
>> should be minimal; it should just make the shared memory discoverable
>> and provide a signaling mechanism.
> 
> Hi Henning,
> 
>         The virtio assumption was that the host can see all of guest
> memory.  This simplifies things significantly, and makes it efficient.
> 
> If you don't have this, *someone* needs to do a copy.  Usually the guest
> OS does a bounce buffer into your shared region.  Goodbye performance.
> Or you can play remapping tricks.  Goodbye performance again.
> 
> My preferred model is to have a trusted helper (i.e. the host) which
> understands how to copy between virtio rings.  The backend guest (to
> steal Xen vocab) R/O-maps the descriptor table, avail ring and used ring
> in the guest.  It then asks the trusted helper to do various operations
> (copy into a writable descriptor, copy out of a readable descriptor,
> mark used).  The virtio ring itself acts as a grant table.
> 
> Note: that helper mechanism is completely protocol agnostic.  It was
> also explicitly designed into the virtio mechanism (with its 4k
> boundaries for data structures and its 'len' field to indicate how much
> was written into the descriptor). 
> 
> It was also never implemented, and remains a thought experiment.
> However, implementing it in lguest should be fairly easy.

The reason why a trusted helper, i.e. additional logic in the
hypervisor, is not our favorite solution is that we'd like to keep the
hypervisor as small as possible. I wouldn't exclude such an approach
categorically, but we have to weigh the costs (lines of code, additional
hypervisor interface) carefully against the gain (existing
specifications and guest driver infrastructure).

Back to VIRTIO_F_RING_SHMEM_ADDR (which you once brought up in an MCA
working group discussion): What speaks against introducing an
alternative encoding of addresses inside virtio data structures? The
idea of this flag was to replace guest-physical addresses with offsets
into a shared memory region associated with or part of a virtio device.
That would preserve zero-copy capabilities (as long as you can work
against the shared mem directly, e.g. doing DMA from a physical NIC or
storage device into it) and keep the hypervisor out of the loop. Is it
too invasive to existing infrastructure or does it have some other pitfalls?
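
As a sketch of what that would mean on the device side (hedged: the
feature bit was only ever proposed, never standardized), address
resolution would shrink to a bounds-checked offset lookup:

    /* Sketch of VIRTIO_F_RING_SHMEM_ADDR semantics. With the bit
     * negotiated, desc->addr holds an offset into the device's
     * shared-memory window rather than a guest-physical address,
     * so either peer can resolve it safely. */
    static void *desc_to_ptr(const struct vring_desc *d,
                             void *shmem_base, u64 shmem_len)
    {
            if (d->addr >= shmem_len || d->len > shmem_len - d->addr)
                    return NULL;  /* reject buffers outside the window */
            return (char *)shmem_base + d->addr;
    }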

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux


* Re: Using virtio for inter-VM communication
From: Markus Armbruster @ 2014-06-12  6:48 UTC
  To: Vincent JARDIN; +Cc: Henning Schild, qemu-devel, kvm, virtualization

Vincent JARDIN <vincent.jardin@6wind.com> writes:

> On 10/06/2014 18:48, Henning Schild wrote:
>> Hi,
>> In a first prototype I implemented an ivshmem[2] device for the
>> hypervisor. That way we can share memory between virtual machines.
>> Ivshmem is nice and simple but does not seem to be used anymore.
>> And it
>> does not define higher-level devices, like a console.
>
> FYI, ivshmem is used here:
>   http://dpdk.org/browse/memnic/tree/
>
> http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449
>
> There are a few other references too, if needed.

It may be used, but that doesn't mean it's maintained, or robust against
abuse.  My advice is to steer clear of it.


* Re: Using virtio for inter-VM communication
From: Henning Schild @ 2014-06-12  7:44 UTC
  To: Markus Armbruster; +Cc: Vincent JARDIN, qemu-devel, kvm, virtualization

On Thu, 12 Jun 2014 08:48:04 +0200
Markus Armbruster <armbru@redhat.com> wrote:

> Vincent JARDIN <vincent.jardin@6wind.com> writes:
> 
> > On 10/06/2014 18:48, Henning Schild wrote:
> >> Hi,
> >> In a first prototype I implemented an ivshmem[2] device for the
> >> hypervisor. That way we can share memory between virtual machines.
> >> Ivshmem is nice and simple but does not seem to be used anymore.
> >> And it
> >> does not define higher-level devices, like a console.
> >
> > FYI, ivshmem is used here:
> >   http://dpdk.org/browse/memnic/tree/
> >
> > http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449
> >
> > There are a few other references too, if needed.
> 
> It may be used, but that doesn't mean it's maintained, or robust
> against abuse.  My advice is to steer clear of it.

Could you elaborate on why you advise against it?

Henning


* Re: Using virtio for inter-VM communication
From: Vincent JARDIN @ 2014-06-12  9:31 UTC
  To: Henning Schild; +Cc: Markus Armbruster, qemu-devel, virtualization, kvm

On 12/06/2014 09:44, Henning Schild wrote:
>> It may be used, but that doesn't mean it's maintained, or robust
>> against abuse.  My advice is to steer clear of it.
> Could you elaborate on why you advise against it?

+1, please elaborate.

Besides the DPDK source code, some other common use cases:
  - HPC: using inter-VM shared memory for ultra-low-latency inter-VM
computation

Best regards,
   Vincent


* Why I advise against using ivshmem (was: [Qemu-devel] Using virtio for inter-VM communication)
From: Markus Armbruster @ 2014-06-12 14:40 UTC
  To: Henning Schild; +Cc: Vincent JARDIN, qemu-devel, kvm, virtualization

Henning Schild <henning.schild@siemens.com> writes:

> On Thu, 12 Jun 2014 08:48:04 +0200
> Markus Armbruster <armbru@redhat.com> wrote:
>
>> Vincent JARDIN <vincent.jardin@6wind.com> writes:
>> 
>> > On 10/06/2014 18:48, Henning Schild wrote:
>> >> Hi,
>> >> In a first prototype I implemented an ivshmem[2] device for the
>> >> hypervisor. That way we can share memory between virtual machines.
>> >> Ivshmem is nice and simple but does not seem to be used anymore.
>> >> And it
>> >> does not define higher-level devices, like a console.
>> >
>> > FYI, ivshmem is used here:
>> >   http://dpdk.org/browse/memnic/tree/
>> >
>> > http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c#n449
>> >
>> > There are a few other references too, if needed.
>> 
>> It may be used, but that doesn't mean it's maintained, or robust
>> against abuse.  My advice is to steer clear of it.
>
> Could you elaborate on why you advise against it?

Sure!  The reasons for my dislike range from practical to philosophical.

My practical concerns include:

1. ivshmem code needs work, but has no maintainer

   - Error handling is generally poor.  For instance, "device_add
     ivshmem" kills your guest instantly.

   - More subjectively, I don't trust the code to be robust against
     abuse by our own guest, or the other guests sharing the memory.
     Convincing me would take a code audit.

   - MAINTAINERS doesn't cover ivshmem.c.

   - The last non-trivial commit that isn't obviously part of some
     tree-wide infrastructure or cleanup work is from September 2012
     (commit c08ba66).

2. There is no libvirt support

3. Out-of-tree server program required for full functionality

   Interrupts require a "shared memory server" running in the host (see
   docs/specs/ivshmem_device_spec.txt).  It doesn't tell where to find
   one.  The initial commit 6cbf4c8 points to
   <www.gitorious.org/nahanni>.  That repository's last commit is from
   September 2012.  He's dead, Jim.

   ivshmem_device_spec.txt is silent on what the server is supposed to
   do.

   If this server requires privileges: I don't trust it without an
   audit.

4. Out-of-tree kernel uio driver required

   The device is "intended to be used with the provided UIO driver"
   (ivshmem_device_spec.txt again).  As far as I can tell, the "provided
   UIO driver" is the one in the dead Nahanni repo.

   By now, you should be expecting this: I don't trust that one either.

These concerns are all fixable, but it'll take serious work, and time.
Something like:

* Find a maintainer for the device model

* Review and fix its code

* Get the required kernel module upstream

* Get all the required parts outside QEMU packaged in major distros, or
  absorbed into QEMU

In short, create a viable community around ivshmem, either within the
QEMU community, or separately but cooperating.

On to the more philosophical ones.

5. Out-of-tree interface required

   Paraphrasing an old quip: Some people, when confronted with a
   problem, think "I know, I'll use shared memory."  Now they have two
   problems.

   Shared memory is not an interface.  It's at best something you can
   use to build an interface.

   I'd rather have us offer something with a little bit more structure.
   Very fast guest-to-guest networking perhaps.

6. Device models belong in QEMU

   Say you build an actual interface on top of ivshmem.  Then ivshmem in
   QEMU together with the supporting host code outside QEMU (see 3.) and
   the lower layer of the code using it in guests (kernel + user space)
   provide something that to me very much looks like a device model.

   Device models belong in QEMU.  It's what QEMU does.

   To all currently using ivshmem or contemplating its use: I'd like to
   invite you to work with the QEMU community to get your use case
   served better.  You could do worse than to start by explaining it to
   us.

   In case you'd rather not work with the QEMU community: I'm not
   passing judgement on that (heck, I have had days when I'd rather not,
   too).  But if somebody's reasons not to work with us include GPL
   circumvention, then that somebody is a scoundrel.


* Re: Why I advise against using ivshmem
From: Vincent JARDIN @ 2014-06-12 16:02 UTC
  To: Markus Armbruster
  Cc: Henning Schild, qemu-devel, kvm, virtualization, David Marchand

Markus,

see inline (I am not on all the mailing lists, so please keep the Cc list).

> Sure!  The reasons for my dislike range from practical to
> philosophical.
>
> My practical concerns include:
>
> 1. ivshmem code needs work, but has no maintainer
See David's contributions:
   http://patchwork.ozlabs.org/patch/358750/

> 2. There is no libvirt support

One can use QEMU without libvirt.

> 3. Out-of-tree server program required for full functionality

We have the source code; it provides the documentation to write our own,
better server program.

> 4. Out-of-tree kernel uio driver required

No, it is optional.

> These concerns are all fixable, but it'll take serious work, and time.
> Something like:
>
> * Find a maintainer for the device model
I guess we can find one in the DPDK.org community.

> * Review and fix its code
>
> * Get the required kernel module upstream

Which module? uio? It is not required.

> * Get all the required parts outside QEMU packaged in major distros, or
>    absorbed into QEMU

Red Hat did disable it. Why? It is there in QEMU.

> In short, create a viable community around ivshmem, either within the
> QEMU community, or separately but cooperating.

At least the DPDK.org community is using it.

> On to the more philosophical ones.
>
> 5. Out-of-tree interface required
>
>     Paraphrasing an old quip: Some people, when confronted with a
>     problem, think "I know, I'll use shared memory."  Now they have two
>     problems.
>
>     Shared memory is not an interface.  It's at best something you can
>     use to build an interface.
>
>     I'd rather have us offer something with a little bit more structure.
>     Very fast guest-to-guest networking perhaps.

It is not just networking; there are other use cases like HPC and
sharing in-memory databases.

>
> 6. Device models belong in QEMU
>
>     Say you build an actual interface on top of ivshmem.  Then ivshmem in
>     QEMU together with the supporting host code outside QEMU (see 3.) and
>     the lower layer of the code using it in guests (kernel + user space)
>     provide something that to me very much looks like a device model.
>
>     Device models belong in QEMU.  It's what QEMU does.
See my previous statement; it is not just a device model.


Best regards,
   Vincent



* Re: Why I advise against using ivshmem
From: Paolo Bonzini @ 2014-06-12 16:54 UTC
  To: Vincent JARDIN, Markus Armbruster
  Cc: Henning Schild, David Marchand, qemu-devel, kvm, virtualization

On 12/06/2014 18:02, Vincent JARDIN wrote:
>
>> * Get all the required parts outside QEMU packaged in major distros, or
>>    absorbed into QEMU
>
> Red Hat did disable it. Why? It is there in QEMU.

We don't ship everything that is part of QEMU, just like we selectively 
disable many drivers in Linux.

Markus especially referred to parts *outside* QEMU: the server, the uio 
driver, etc.  These out-of-tree, non-packaged parts of ivshmem are one 
of the reasons why Red Hat has disabled ivshmem in RHEL7.

He also listed many others.  Basically for parts of QEMU that are not of 
high quality, we either fix them (this is for example what we did for 
qcow2) or disable them.  Not just ivshmem suffered this fate; so did,
for example, many network cards, sound cards, and SCSI storage adapters.

Now, vhost-user is in the process of being merged for 2.1.  Compared to 
the DPDK solution:

* it doesn't require hugetlbfs (which only enabled shared memory by
chance in older QEMU releases; that was never documented)

* it doesn't require ivshmem (it does require shared memory, which will 
also be added to 2.1)

* it doesn't require the kernel driver from the DPDK sample

* it is not just shared memory, but also defines an interface to use it 
(another of Markus's points)

vhost-user is superior, and it is superior because it has been designed 
from the get-go through cooperation of all interested parties (namely 
QEMU and snabbswitch).

Paolo


* Re: Using virtio for inter-VM communication
From: Rusty Russell @ 2014-06-13  0:47 UTC
  To: Jan Kiszka, Henning Schild, qemu-devel, virtualization, kvm

Jan Kiszka <jan.kiszka@siemens.com> writes:
> On 2014-06-12 04:27, Rusty Russell wrote:
>> Henning Schild <henning.schild@siemens.com> writes:
>> It was also never implemented, and remains a thought experiment.
>> However, implementing it in lguest should be fairly easy.
>
> The reason why a trusted helper, i.e. additional logic in the
> hypervisor, is not our favorite solution is that we'd like to keep the
> hypervisor as small as possible. I wouldn't exclude such an approach
> categorically, but we have to weigh the costs (lines of code, additional
> hypervisor interface) carefully against the gain (existing
> specifications and guest driver infrastructure).

Reasonable, but I think you'll find it is about the minimal
implementation in practice.  Unfortunately, I don't have time during the
next 6 months to implement it myself :(

> Back to VIRTIO_F_RING_SHMEM_ADDR (which you once brought up in an MCA
> working group discussion): What speaks against introducing an
> alternative encoding of addresses inside virtio data structures? The
> idea of this flag was to replace guest-physical addresses with offsets
> into a shared memory region associated with or part of a virtio
> device.

We would also need a way of defining the shared memory region.  But
that's not the problem.  What if such a feature is not accepted by the
guest?  How do you fall back?

We don't add features which unmake the standard.

> That would preserve zero-copy capabilities (as long as you can work
> against the shared mem directly, e.g. doing DMA from a physical NIC or
> storage device into it) and keep the hypervisor out of the loop.

This seems ill thought out.  How will you program a NIC via the virtio
protocol without a hypervisor?  And how will you make it safe?  You'll
need an IOMMU.  But if you have an IOMMU you don't need shared memory.

> Is it
> too invasive to existing infrastructure or does it have some other pitfalls?

You'll have to convince every vendor to implement your addition to the
standard.  Which is easier than inventing a completely new system, but
it's not quite virtio.

Cheers,
Rusty.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: Using virtio for inter-VM communication
  2014-06-13  0:47       ` [Qemu-devel] " Rusty Russell
@ 2014-06-13  6:23         ` Jan Kiszka
  -1 siblings, 0 replies; 91+ messages in thread
From: Jan Kiszka @ 2014-06-13  6:23 UTC (permalink / raw)
  To: Rusty Russell, Henning Schild, qemu-devel, virtualization, kvm

On 2014-06-13 02:47, Rusty Russell wrote:
> Jan Kiszka <jan.kiszka@siemens.com> writes:
>> On 2014-06-12 04:27, Rusty Russell wrote:
>>> Henning Schild <henning.schild@siemens.com> writes:
>>> It was also never implemented, and remains a thought experiment.
>>> However, implementing it in lguest should be fairly easy.
>>
>> The reason why a trusted helper, i.e. additional logic in the
>> hypervisor, is not our favorite solution is that we'd like to keep the
>> hypervisor as small as possible. I wouldn't exclude such an approach
>> categorically, but we have to weigh the costs (lines of code, additional
>> hypervisor interface) carefully against the gain (existing
>> specifications and guest driver infrastructure).
> 
> Reasonable, but I think you'll find it is about the minimal
> implementation in practice.  Unfortunately, I don't have time during the
> next 6 months to implement it myself :(
> 
>> Back to VIRTIO_F_RING_SHMEM_ADDR (which you once brought up in an MCA
>> working group discussion): What speaks against introducing an
>> alternative encoding of addresses inside virtio data structures? The
>> idea of this flag was to replace guest-physical addresses with offsets
>> into a shared memory region associated with or part of a virtio
>> device.
> 
> We would also need a way of defining the shared memory region.  But
> that's not the problem.  What if such a feature is not accepted by the
> guest?  How do you fall back?

Depends on the hypervisor and its scope, but it should be quite
straightforward: full-featured ones like KVM could fall back to slow
copying, specialized ones like Jailhouse would clear FEATURES_OK if the
guest driver does not accept it (because there would be no ring walking
or copying code in Jailhouse), thus refusing to activate the device.
That would be absolutely fine for the application domains of specialized
hypervisors (often embedded, customized guests etc.).
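
As a rough sketch of the status handshake this relies on (the register
accessors below are placeholders for whatever transport is in use; only
the status bit value comes from the virtio 1.0 draft):

#include <stdint.h>
#include <stdbool.h>

#define VIRTIO_STATUS_FEATURES_OK  8   /* virtio 1.0 device status bit */

/* Placeholder accessors for some virtio transport. */
uint8_t vdev_read_status(void);
void    vdev_write_status(uint8_t s);
void    vdev_write_features(uint64_t f);

/* Returns true iff the device accepted the proposed feature set. */
static bool negotiate_features(uint64_t features)
{
    vdev_write_features(features);
    vdev_write_status(vdev_read_status() | VIRTIO_STATUS_FEATURES_OK);
    /* A device that cannot honor the features clears the bit again; a
     * Jailhouse-style device would do so whenever the shared-memory
     * addressing feature was not acknowledged by the driver. */
    return vdev_read_status() & VIRTIO_STATUS_FEATURES_OK;
}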

The shared memory regions could be exposed as BARs (PCI) or additional
address ranges (device tree) and addressed in the redefined guest
address fields via some region index and offset.
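
Concretely, one possible encoding (the field widths are invented for
the sake of the example):

#include <stdint.h>
#include <stddef.h>

#define SHMEM_REGION_SHIFT 56                /* top 8 bits: region index */
#define SHMEM_OFFSET_MASK  ((1ULL << SHMEM_REGION_SHIFT) - 1)

struct shmem_region {
    void   *base;     /* mapping of the BAR / device-tree range */
    size_t  len;
};

/* Translate a redefined descriptor address; NULL if out of bounds. */
static void *shmem_translate(const struct shmem_region *regions,
                             unsigned nregions, uint64_t addr)
{
    unsigned idx = addr >> SHMEM_REGION_SHIFT;
    uint64_t off = addr & SHMEM_OFFSET_MASK;

    if (idx >= nregions || off >= regions[idx].len)
        return NULL;
    return (char *)regions[idx].base + off;
}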

> 
> We don't add features which unmake the standard.
> 
>> That would preserve zero-copy capabilities (as long as you can work
>> against the shared mem directly, e.g. doing DMA from a physical NIC or
>> storage device into it) and keep the hypervisor out of the loop.
> 
> This seems ill thought out.  How will you program a NIC via the virtio
> protocol without a hypervisor?  And how will you make it safe?  You'll
> need an IOMMU.  But if you have an IOMMU you don't need shared memory.

Scenarios behind this are things like driver VMs: You pass through the
physical hardware to a driver guest that talks to the hardware and
relays data via one or more virtual channels to other VMs. This confines
a certain set of security and stability risks to the driver VM.

> 
>> Is it
>> too invasive to existing infrastructure or does it have some other pitfalls?
> 
> You'll have to convince every vendor to implement your addition to the
> standard.  Which is easier than inventing a completely new system, but
> it's not quite virtio.

It would be an optional addition, a feature all three sides (host and
the communicating guests) would have to agree on. I think we would only
have to agree on extending the spec to enable this - after demonstrating
it via an implementation, of course.

Thanks,
Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: Using virtio for inter-VM communication
  2014-06-13  6:23         ` [Qemu-devel] " Jan Kiszka
@ 2014-06-13  8:45           ` Paolo Bonzini
  -1 siblings, 0 replies; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-13  8:45 UTC (permalink / raw)
  To: Jan Kiszka, Rusty Russell, Henning Schild, qemu-devel,
	virtualization, kvm

On 13/06/2014 08:23, Jan Kiszka wrote:
>>> That would preserve zero-copy capabilities (as long as you can work
>>> against the shared mem directly, e.g. doing DMA from a physical NIC or
>>> storage device into it) and keep the hypervisor out of the loop.
> >
> > This seems ill thought out.  How will you program a NIC via the virtio
> > protocol without a hypervisor?  And how will you make it safe?  You'll
> > need an IOMMU.  But if you have an IOMMU you don't need shared memory.
>
> Scenarios behind this are things like driver VMs: You pass through the
> physical hardware to a driver guest that talks to the hardware and
> relays data via one or more virtual channels to other VMs. This confines
> a certain set of security and stability risks to the driver VM.

I think implementing Xen hypercalls in jailhouse for grant table and 
event channels would actually make a lot of sense.  The Xen 
implementation is 2.5kLOC and I think it should be possible to compact 
it noticeably, especially if you limit yourself to 64-bit guests.
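
For a sense of scale, the central data structure is tiny; a v1 grant
table entry, roughly as declared in Xen's public grant_table.h
(reproduced here from memory, for flavor), is just:

#include <stdint.h>

typedef uint16_t domid_t;

struct grant_entry_v1 {
    uint16_t flags;   /* GTF_permit_access, GTF_readonly, ... */
    domid_t  domid;   /* domain allowed to map/copy the frame */
    uint32_t frame;   /* machine frame number being granted */
};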

It should also be almost enough to run Xen PVH guests as jailhouse 
partitions.

If later Xen starts to support virtio, you will get that for free.

Paolo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-12 16:02           ` [Qemu-devel] " Vincent JARDIN
  (?)
  (?)
@ 2014-06-13  8:46           ` Markus Armbruster
  2014-06-13  9:26             ` Vincent JARDIN
                               ` (2 more replies)
  -1 siblings, 3 replies; 91+ messages in thread
From: Markus Armbruster @ 2014-06-13  8:46 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Henning Schild, virtualization, David Marchand, kvm, qemu-devel

Some dropped quoted text restored.

Vincent JARDIN <vincent.jardin@6wind.com> writes:

> Markus,
>
> see inline (I am not on all mailing list, please, keep the cc list).
>
>> Sure!  The reasons for my dislike range from practical to
>> philosophical.
>>
>> My practical concerns include:
>>
>> 1. ivshmem code needs work, but has no maintainer
> See David's contributions:
>   http://patchwork.ozlabs.org/patch/358750/

We're grateful for David's patch for qemu-char.c, but this isn't ivshmem
maintenance, yet.

>>   - Error handling is generally poor.  For instance, "device_add
>>     ivshmem" kills your guest instantly.
>>
>>   - More subjectively, I don't trust the code to be robust against
>>     abuse by our own guest, or the other guests sharing the memory.
>>     Convincing me would take a code audit.
>>
>>   - MAINTAINERS doesn't cover ivshmem.c.
>>
>>   - The last non-trivial commit that isn't obviously part of some
>>     tree-wide infrastructure or cleanup work is from September 2012
>>     (commit c08ba66).
>>
>> 2. There is no libvirt support
>
> One can use qemu without libvirt.

You asked me for my reasons for disliking ivshmem.  This is one.

Sure, I can drink my water through a straw while standing on one foot,
but that doesn't mean I have to like it.  And me not liking it doesn't
mean the next guy shouldn't like it.  To each their own.

>> 3. Out-of-tree server program required for full functionality
>>
>>   Interrupts require a "shared memory server" running in the host (see
>>   docs/specs/ivshmem_device_spec.txt).  It doesn't tell where to find
>>   one.  The initial commit 6cbf4c8 points to
>>   <www.gitorious.org/nahanni>.  That repository's last commit is from
>>   September 2012.  He's dead, Jim.
>>
>>   ivshmem_device_spec.txt is silent on what the server is supposed to
>>   do.
>
> We have the source code, it provides the documentation to write our
> own better server program.

Good for you.  Not good enough for the QEMU community.

QEMU features requiring out-of-tree software to be useful are fine,
as long as said out-of-tree software is readily available to QEMU
developers and users.

Free software with a community around it and packaged in major distros
qualifies.  If you haven't got that, talk to us to find out whether what
you've got qualifies, and if not, what you'd have to do to make it
qualify.

Back when we accepted ivshmem, the out-of-tree parts it needs were well
below the "community & packaged" bar.  But folks interested in it talked
to us, and the fact that it's in shows that QEMU maintainers decided
what they had then was enough.

Unfortunately, we now have considerably less: Nahanni appears to be
dead.

An apparently dead git repository you can study is not enough.  The fact
that you hold an improved reimplementation privately is immaterial.  So
is the (plausible) claim that others could also create a
reimplementation.

>>   If this server requires privileges: I don't trust it without an
>>   audit.
>>
>> 4. Out-of-tree kernel uio driver required
>
> No, it is optional.

Good to know.  Would you be willing to send a patch to
ivshmem_device_spec.txt clarifying that?

>>   The device is "intended to be used with the provided UIO driver"
>>   (ivshmem_device_spec.txt again).  As far as I can tell, the "provided
>>   UIO driver" is the one in the dead Nahanni repo.
>>
>>   By now, you should be expecting this: I don't trust that one either.
>>
>> These concerns are all fixable, but it'll take serious work, and time.
>> Something like:
>>
>> * Find a maintainer for the device model
> I guess, we can find it into the DPDK.org community.
>> * Review and fix its code
>>
>> * Get the required kernel module upstream
>
> which module? uio, it is not required.
>
>> * Get all the required parts outside QEMU packaged in major distros, or
>>    absorbed into QEMU
>
> Redhat did disable it. why? it is there in QEMU.

Up to now, I've been wearing my QEMU hat.  Let me exchange it for my Red
one for a bit.

We (Red Hat) don't just package & ship metric tons of random free
software.  We package & ship useful free software we can support for
many, many years.

Sometimes, we find that we have to focus serious development resources
on making something useful supportable (Paolo mentioned qcow2).  We
obviously can't focus on everything, though.

Anyway, ivshmem didn't make the cut for RHEL-7.0.  Sorry if that
inconveniences you.  To get it into RHEL, you need to show it's both
useful and supportable.  Building a community around it would go a long
way towards that.

If you want to discuss this in more detail with us, you may want to try
communication channels provided by your RHEL subscription in addition to
the QEMU development mailing list.  Don't be shy, you're paying for it!

As always, I'm speaking for myself, not my employer.

Okay, wearing my QEMU hat again.

>> In short, create a viable community around ivshmem, either within the
>> QEMU community, or separately but cooperating.
>
> At least, DPDK.org community is a community using it.

Using something isn't the same as maintaining something.  But it's a
necessary first step.

[...]

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13  8:46           ` Markus Armbruster
@ 2014-06-13  9:26             ` Vincent JARDIN
  2014-06-13  9:31                 ` Jobin Raju George
                                 ` (4 more replies)
  2014-06-13  9:29               ` [Qemu-devel] " Jobin Raju George
  2014-06-13  9:29             ` Jobin Raju George
  2 siblings, 5 replies; 91+ messages in thread
From: Vincent JARDIN @ 2014-06-13  9:26 UTC (permalink / raw)
  To: Markus Armbruster, Paolo Bonzini
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel, David Marchand,
	virtualization, thomas.monjalon

(+merging with Paolo's email because of overlaps)

>> see inline (I am not on all mailing list, please, keep the cc list).
>>

>>> 1. ivshmem code needs work, but has no maintainer
>> See David's contributions:
>>    http://patchwork.ozlabs.org/patch/358750/
>
> We're grateful for David's patch for qemu-char.c, but this isn't ivshmem
> maintenance, yet.

others can come (doc), see below.

>>> 2. There is no libvirt support
>>
>> One can use qemu without libvivrt.
>
> You asked me for my reasons for disliking ivshmem.  This is one.
>
> Sure, I can drink my water through a straw while standing on one foot,
> but that doesn't mean I have to like it.  And me not liking it doesn't
> mean the next guy shouldn't like it.  To each their own.

I like using qemu without libvirt; libvirt is not part of qemu.
Let's avoid trolling about it ;)

> Back when we accepted ivshmem, the out-of-tree parts it needs were well
> below the "community & packaged" bar.  But folks interested in it talked
> to us, and the fact that it's in shows that QEMU maintainers decided
> what they had then was enough.
>
> Unfortunately, we now have considerably less: Nahanni appears to be
> dead.

Agreed, and too bad it is dead. We should let Nahanni rest since ivshmem
is a QEMU topic now, see below. Does that make sense?

>
> An apparently dead git repository you can study is not enough.  The fact
> that you hold an improved reimplementation privately is immaterial.  So
> is the (plausible) claim that others could also create a
> reimplementation.

Got the point. What about a patch to
docs/specs/ivshmem_device_spec.txt that improves it?

I can make qemu's ivshmem better:
   - keep explaining memnic, for instance,
   - explain how to write other ivshmem applications.

does it help?
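
For example, a minimal consumer of the device needs nothing beyond
sysfs and mmap(2); the PCI address and the region size below are
examples and must match the actual guest configuration:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* BAR2 of the ivshmem device is the shared memory itself. */
    const char *res = "/sys/bus/pci/devices/0000:00:04.0/resource2";
    int fd = open(res, O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1 << 20;                  /* must match the BAR size */
    void *shm = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(shm, "hello from this guest");  /* visible to peer VMs */
    munmap(shm, len);
    close(fd);
    return 0;
}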

>>> 4. Out-of-tree kernel uio driver required
>>
>> No, it is optional.
>
> Good to know.  Would you be willing to send a patch to
> ivshmem_device_spec.txt clarifying that?

got the point, yes,

>>> * Get all the required parts outside QEMU packaged in major distros, or
>>>     absorbed into QEMU
>>
>> Redhat did disable it. why? it is there in QEMU.
>
> Up to now, I've been wearing my QEMU hat.  Let me exchange it for my Red
> one for a bit.
>
> We (Red Hat) don't just package & ship metric tons of random free
> software.  We package & ship useful free software we can support for
> many, many years.
>
> Sometimes, we find that we have to focus serious development resources
> on making something useful supportable (Paolo mentioned qcow2).  We
> obviously can't focus on everything, though.

Good open technology should rule. ivshmem has use cases. And I do agree 
with you: like the phoenix, it has to be re-explained and re-documented 
to come back to life. I was not aware that the QEMU community was missing 
ivshmem contributors (my bad, I did not check MAINTAINERS).

> Anyway, ivshmem didn't make the cut for RHEL-7.0.  Sorry if that
> inconveniences you.  To get it into RHEL, you need to show it's both
> useful and supportable.  Building a community around it would go a long
> way towards that.

understood.

> If you want to discuss this in more detail with us, you may want to try
> communication channels provided by your RHEL subscription in addition to
> the QEMU development mailing list.  Don't be shy, you're paying for it!

Done. I was focusing on DPDK.org and ignorant of QEMU's status, thinking 
Red Hat was covering it. How is one to know which parts of an open-source 
package are and are not included in Red Hat's builds? Sales are ignorant 
about it ;). Red Hat disables some files at compilation, seemingly at 
random (for some good reasons, I guess, but the rationale is not public, 
or I am missing something).

Feel free to open this PR to anyone:
   https://bugzilla.redhat.com/show_bug.cgi?id=1088332

>>> In short, create a viable community around ivshmem, either within the
>>> QEMU community, or separately but cooperating.
>>
>> At least, DPDK.org community is a community using it.
>
> Using something isn't the same as maintaining something.  But it's a
> necessary first step.

understood, after David's patch, documentation will come.

(now Paolo's email since there were some overlaps)

 > Markus especially referred to parts *outside* QEMU: the server, the
 > uio driver, etc.  These out-of-tree, non-packaged parts of ivshmem
 > are one of the reasons why Red Hat has disabled ivshmem in RHEL7.

You made the right choices; these out-of-tree packages are not required. 
You can use QEMU's ivshmem without any of the out-of-tree packages. The 
out-of-tree packages are just some examples of using ivshmem.

 > He also listed many others.  Basically for parts of QEMU that are not
 > of high quality, we either fix them (this is for example what we did
 > for qcow2) or disable them.  Not just ivshmem suffered this fate, for
 > example many network cards, sound cards, SCSI storage adapters.

I and David (cc) are working on making it better based on the issues 
that are found.

 > Now, vhost-user is in the process of being merged for 2.1.  Compared
 > to the DPDK solution:

Now, you cannot compare vhost-user to DPDK/ivshmem; both should exist 
because they have different scopes and use cases. It is like comparing 
two different (A) models of IPC:
   - vhost-user -> specific to the networking use case
   - ivshmem -> a generic framework providing shared memory for many 
use cases (HPC, in-memory databases, and networking too, like memnic).

Later on, some new services will be needed for shared memory. virtio 
will come into the picture (see the VIRTIO_F_RING_SHMEM_ADDR threads). 
Currently, ivshmem is the only "stable" option, since many issues 
around virtio and shared memory remain unsolved.

 > * it doesn't require hugetlbfs (which only enabled shared memory by
 > chance in older QEMU releases, that was never documented)

ivshmem does not require hugetlbfs. It is optional.

 > * it doesn't require ivshmem (it does require shared memory, which
 > will also be added to 2.1)

Somehow I agree: we need both models, vhost-user and ivshmem, because 
of the previous (A) comments.

 > * it doesn't require the kernel driver from the DPDK sample

ivshmem does not require the DPDK kernel driver. See memnic's PMD:
   http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c
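
To give an idea of the model (the layout below is illustrative, not
memnic's actual ABI): both sides map the same ivshmem region, agree on
a packet ring inside it, and poll it, with no hypervisor on the data
path.

#include <stdint.h>
#include <string.h>

#define RING_SLOTS 256
#define SLOT_BYTES 2048

struct shm_ring {
    volatile uint32_t head;   /* producer index, advanced by the sender */
    volatile uint32_t tail;   /* consumer index, advanced by the receiver */
    struct {
        uint32_t len;
        uint8_t  data[SLOT_BYTES];
    } slot[RING_SLOTS];
};

/* Poll for one packet; returns its length, or 0 if the ring is empty. */
static uint32_t ring_rx(struct shm_ring *r, uint8_t *buf)
{
    if (r->tail == r->head)
        return 0;
    uint32_t i = r->tail % RING_SLOTS;
    uint32_t len = r->slot[i].len;
    if (len > SLOT_BYTES)
        len = SLOT_BYTES;     /* distrust the peer's length field */
    memcpy(buf, r->slot[i].data, len);
    __sync_synchronize();     /* finish reading the slot before releasing it */
    r->tail++;
    return len;
}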

 > * it is not just shared memory, but also defines an interface to use
 > it (another of Markus's points)

Agreed, Paolo, but you narrow it down to networking use cases only. 
Shared memory à la ivshmem provides other features (see (A) again).

 >
 > vhost-user is superior, and it is superior because it has been
 > designed
 > from the get-go through cooperation of all interested parties (namely
 > QEMU and snabbswitch).

It is not an argument. vhost-user is a specific case.

Best regards,
   Vincent

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: Why I advise against using ivshmem
  2014-06-13  8:46           ` Markus Armbruster
@ 2014-06-13  9:29               ` Jobin Raju George
  2014-06-13  9:29               ` [Qemu-devel] " Jobin Raju George
  2014-06-13  9:29             ` Jobin Raju George
  2 siblings, 0 replies; 91+ messages in thread
From: Jobin Raju George @ 2014-06-13  9:29 UTC (permalink / raw)
  To: Markus Armbruster
  Cc: Henning Schild, kvm, QEMU Developers, David Marchand,
	sagar patni, virtualization, Vincent JARDIN

Nahanni's stalled development, coupled with virtio's promising
expansion, was what encouraged us to explore virtio-serial [1] for
inter-virtual-machine communication. virtio-serial as it is doesn't
support inter-VM communication; some work is needed for this purpose,
and this is exactly the work we (I and two of my fellow classmates)
accomplished.

We haven't published it yet, since it still needs polish before
upstreaming, but we plan to do that in the near future.


[1]: http://fedoraproject.org/wiki/Features/VirtioSerial
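
As a taste of the guest-side interface this builds on: a virtio-serial
port configured for the VM shows up as a character device and can be
used directly (the port name below is an example):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Matches the name given on the QEMU command line, e.g.
     * -device virtserialport,chardev=...,name=org.example.ivc0 */
    int fd = open("/dev/virtio-ports/org.example.ivc0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "ping";
    if (write(fd, msg, sizeof msg) < 0)
        perror("write");

    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf); /* blocks until the peer writes */
    if (n > 0)
        printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}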


On Fri, Jun 13, 2014 at 2:16 PM, Markus Armbruster <armbru@redhat.com>
wrote:

> Some dropped quoted text restored.
>
> Vincent JARDIN <vincent.jardin@6wind.com> writes:
>
> > Markus,
> >
> > see inline (I am not on all mailing list, please, keep the cc list).
> >
> >> Sure!  The reasons for my dislike range from practical to
> >> philosophical.
> >>
> >> My practical concerns include:
> >>
> >> 1. ivshmem code needs work, but has no maintainer
> > See David's contributions:
> >   http://patchwork.ozlabs.org/patch/358750/
>
> We're grateful for David's patch for qemu-char.c, but this isn't ivshmem
> maintenance, yet.
>
> >>   - Error handling is generally poor.  For instance, "device_add
> >>     ivshmem" kills your guest instantly.
> >>
> >>   - More subjectively, I don't trust the code to be robust against
> >>     abuse by our own guest, or the other guests sharing the memory.
> >>     Convincing me would take a code audit.
> >>
> >>   - MAINTAINERS doesn't cover ivshmem.c.
> >>
> >>   - The last non-trivial commit that isn't obviously part of some
> >>     tree-wide infrastructure or cleanup work is from September 2012
> >>     (commit c08ba66).
> >>
> >> 2. There is no libvirt support
> >
> > One can use qemu without libvirt.
>
> You asked me for my reasons for disliking ivshmem.  This is one.
>
> Sure, I can drink my water through a straw while standing on one foot,
> but that doesn't mean I have to like it.  And me not liking it doesn't
> mean the next guy shouldn't like it.  To each their own.
>
> >> 3. Out-of-tree server program required for full functionality
> >>
> >>   Interrupts require a "shared memory server" running in the host (see
> >>   docs/specs/ivshmem_device_spec.txt).  It doesn't tell where to find
> >>   one.  The initial commit 6cbf4c8 points to
> >>   <www.gitorious.org/nahanni>.  That repository's last commit is from
> >>   September 2012.  He's dead, Jim.
> >>
> >>   ivshmem_device_spec.txt is silent on what the server is supposed to
> >>   do.
> >
> > We have the source code, it provides the documentation to write our
> > own better server program.
>
> Good for you.  Not good enough for the QEMU community.
>
> QEMU features requiring out-of-tree software to be useful are fine,
> as long as said out-of-tree software is readily available to QEMU
> developers and users.
>
> Free software with a community around it and packaged in major distros
> qualifies.  If you haven't got that, talk to us to find out whether what
> you've got qualifies, and if not, what you'd have to do to make it
> qualify.
>
> Back when we accepted ivshmem, the out-of-tree parts it needs were well
> below the "community & packaged" bar.  But folks interested in it talked
> to us, and the fact that it's in shows that QEMU maintainers decided
> what they had then was enough.
>
> Unfortunately, we now have considerably less: Nahanni appears to be
> dead.
>
> An apparently dead git repository you can study is not enough.  The fact
> that you hold an improved reimplementation privately is immaterial.  So
> is the (plausible) claim that others could also create a
> reimplementation.
>
> >>   If this server requires privileges: I don't trust it without an
> >>   audit.
> >>
> >> 4. Out-of-tree kernel uio driver required
> >
> > No, it is optional.
>
> Good to know.  Would you be willing to send a patch to
> ivshmem_device_spec.txt clarifying that?
>
> >>   The device is "intended to be used with the provided UIO driver"
> >>   (ivshmem_device_spec.txt again).  As far as I can tell, the "provided
> >>   UIO driver" is the one in the dead Nahanni repo.
> >>
> >>   By now, you should be expecting this: I don't trust that one either.
> >>
> >> These concerns are all fixable, but it'll take serious work, and time.
> >> Something like:
> >>
> >> * Find a maintainer for the device model
> > I guess, we can find it into the DPDK.org community.
> >> * Review and fix its code
> >>
> >> * Get the required kernel module upstream
> >
> > which module? uio, it is not required.
> >
> >> * Get all the required parts outside QEMU packaged in major distros, or
> >>    absorbed into QEMU
> >
> > Redhat did disable it. why? it is there in QEMU.
>
> Up to now, I've been wearing my QEMU hat.  Let me exchange it for my Red
> one for a bit.
>
> We (Red Hat) don't just package & ship metric tons of random free
> software.  We package & ship useful free software we can support for
> many, many years.
>
> Sometimes, we find that we have to focus serious development resources
> on making something useful supportable (Paolo mentioned qcow2).  We
> obviously can't focus on everything, though.
>
> Anyway, ivshmem didn't make the cut for RHEL-7.0.  Sorry if that
> inconveniences you.  To get it into RHEL, you need to show it's both
> useful and supportable.  Building a community around it would go a long
> way towards that.
>
> If you want to discuss this in more detail with us, you may want to try
> communication channels provided by your RHEL subscription in addition to
> the QEMU development mailing list.  Don't be shy, you're paying for it!
>
> As always, I'm speaking for myself, not my employer.
>
> Okay, wearing my QEMU hat again.
>
> >> In short, create a viable community around ivshmem, either within the
> >> QEMU community, or separately but cooperating.
> >
> > At least, DPDK.org community is a community using it.
>
> Using something isn't the same as maintaining something.  But it's a
> necessary first step.
>
> [...]
>
>


-- 

Thanks and regards,

Jobin Raju George

Final Year, Information Technology

College of Engineering Pune

Alternate e-mail: georgejr10.it@coep.ac.in

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13  9:26             ` Vincent JARDIN
@ 2014-06-13  9:31                 ` Jobin Raju George
  2014-06-13  9:31               ` Jobin Raju George
                                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 91+ messages in thread
From: Jobin Raju George @ 2014-06-13  9:31 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Markus Armbruster, Paolo Bonzini, Henning Schild, Olivier MATZ,
	kvm, QEMU Developers, David Marchand, virtualization,
	thomas.monjalon

Nahanni's stalled development, coupled with virtio's promising
expansion, was what encouraged us to explore virtio-serial [1] for
inter-virtual-machine communication. virtio-serial as it is doesn't
support inter-VM communication; some work is needed for this purpose,
and this is exactly the work we (I and two of my fellow classmates)
accomplished.

We haven't published it yet, since it still needs polish before
upstreaming, but we plan to do that in the near future.

[1]: http://fedoraproject.org/wiki/Features/VirtioSerial


On Fri, Jun 13, 2014 at 2:56 PM, Vincent JARDIN
<vincent.jardin@6wind.com> wrote:
>
> (+merging with Paolo's email because of overlaps)
>
>
>>> see inline (I am not on all mailing list, please, keep the cc list).
>>>
>
>>>> 1. ivshmem code needs work, but has no maintainer
>>>
>>> See David's contributions:
>>>    http://patchwork.ozlabs.org/patch/358750/
>>
>>
>> We're grateful for David's patch for qemu-char.c, but this isn't ivshmem
>> maintenance, yet.
>
>
> others can come (doc), see below.
>
>
>>>> 2. There is no libvirt support
>>>
>>>
>>> One can use qemu without libvirt.
>>
>>
>> You asked me for my reasons for disliking ivshmem.  This is one.
>>
>> Sure, I can drink my water through a straw while standing on one foot,
>> but that doesn't mean I have to like it.  And me not liking it doesn't
>> mean the next guy shouldn't like it.  To each their own.
>
>
> I like using qemu without libvirt; libvirt is not part of qemu.
> Let's avoid trolling about it ;)
>
>
>> Back when we accepted ivshmem, the out-of-tree parts it needs were well
>> below the "community & packaged" bar.  But folks interested in it talked
>> to us, and the fact that it's in shows that QEMU maintainers decided
>> what they had then was enough.
>>
>> Unfortunately, we now have considerably less: Nahanni appears to be
>> dead.
>
>
> Agreed, and too bad it is dead. We should let Nahanni rest since ivshmem is a QEMU topic now, see below. Does that make sense?
>
>
>>
>> An apparently dead git repository you can study is not enough.  The fact
>> that you hold an improved reimplementation privately is immaterial.  So
>> is the (plausible) claim that others could also create a
>> reimplementation.
>
>
> Got the point. What about a patch to docs/specs/ivshmem_device_spec.txt that improves it?
>
> I can make qemu's ivshmem better:
>   - keep explaining memnic, for instance,
>   - explain how to write other ivshmem applications.
>
> does it help?
>
>
>>>> 4. Out-of-tree kernel uio driver required
>>>
>>>
>>> No, it is optional.
>>
>>
>> Good to know.  Would you be willing to send a patch to
>> ivshmem_device_spec.txt clarifying that?
>
>
> got the point, yes,
>
>
>>>> * Get all the required parts outside QEMU packaged in major distros, or
>>>>     absorbed into QEMU
>>>
>>>
>>> Redhat did disable it. why? it is there in QEMU.
>>
>>
>> Up to now, I've been wearing my QEMU hat.  Let me exchange it for my Red
>> one for a bit.
>>
>> We (Red Hat) don't just package & ship metric tons of random free
>> software.  We package & ship useful free software we can support for
>> many, many years.
>>
>> Sometimes, we find that we have to focus serious development resources
>> on making something useful supportable (Paolo mentioned qcow2).  We
>> obviously can't focus on everything, though.
>
>
> Good open technology should rule. ivshmem has use cases. And I do agree with you: like the phoenix, it has to be re-explained and re-documented to come back to life. I was not aware that the QEMU community was missing ivshmem contributors (my bad, I did not check MAINTAINERS).
>
>
>> Anyway, ivshmem didn't make the cut for RHEL-7.0.  Sorry if that
>> inconveniences you.  To get it into RHEL, you need to show it's both
>> useful and supportable.  Building a community around it would go a long
>> way towards that.
>
>
> understood.
>
>
>> If you want to discuss this in more detail with us, you may want to try
>> communication channels provided by your RHEL subscription in addition to
>> the QEMU development mailing list.  Don't be shy, you're paying for it!
>
>
> Done. I was focusing on DPDK.org and ignorant of QEMU's status, thinking Red Hat was covering it. How is one to know which parts of an open-source package are and are not included in Red Hat's builds? Sales are ignorant about it ;). Red Hat disables some files at compilation, seemingly at random (for some good reasons, I guess, but the rationale is not public, or I am missing something).
>
> Feel free to open this PR to anyone:
>   https://bugzilla.redhat.com/show_bug.cgi?id=1088332
>
>
>>>> In short, create a viable community around ivshmem, either within the
>>>> QEMU community, or separately but cooperating.
>>>
>>>
>>> At least, DPDK.org community is a community using it.
>>
>>
>> Using something isn't the same as maintaining something.  But it's a
>> necessary first step.
>
>
> understood, after David's patch, documentation will come.
>
> (now Paolo's email since there were some overlaps)
>
> > Markus especially referred to parts *outside* QEMU: the server, the
> > uio driver, etc.  These out-of-tree, non-packaged parts of ivshmem
> > are one of the reasons why Red Hat has disabled ivshmem in RHEL7.
>
> You made the right choices; these out-of-tree packages are not required. You can use QEMU's ivshmem without any of the out-of-tree packages. The out-of-tree packages are just some examples of using ivshmem.
>
> > He also listed many others.  Basically for parts of QEMU that are not
> > of high quality, we either fix them (this is for example what we did
> > for qcow2) or disable them.  Not just ivshmem suffered this fate, for
> > example many network cards, sound cards, SCSI storage adapters.
>
> I and David (cc) are working on making it better based on the issues that are found.
>
> > Now, vhost-user is in the process of being merged for 2.1.  Compared to the DPDK solution:
>
> Now, you cannot compare vhost-user to DPDK/ivshmem; both should exist because they have different scopes and use cases. It is like comparing two different (A) models of IPC:
>   - vhost-user -> specific to the networking use case
>   - ivshmem -> a generic framework providing shared memory for many use cases (HPC, in-memory databases, and networking too, like memnic).
>
> Later on, some new services will be needed for shared memory. virtio will come into the picture (see the VIRTIO_F_RING_SHMEM_ADDR threads). Currently, ivshmem is the only "stable" option, since many issues around virtio and shared memory remain unsolved.
>
> > * it doesn't require hugetlbfs (which only enabled shared memory by
> > chance in older QEMU releases, that was never documented)
>
> ivshmem does not require hugetlbfs. It is optional.
>
> > * it doesn't require ivshmem (it does require shared memory, which
> > will also be added to 2.1)
>
> Somehow I agree: we need both models, vhost-user and ivshmem, because of the previous (A) comments.
>
> > * it doesn't require the kernel driver from the DPDK sample
>
> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
>   http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c
>
> > * it is not just shared memory, but also defines an interface to use
> > it (another of Markus's points)
>
> Agreed, Paolo, but you narrow it down to networking use cases only. Shared memory à la ivshmem provides other features (see (A) again).
>
> >
> > vhost-user is superior, and it is superior because it has been
> > designed
> > from the get-go through cooperation of all interested parties (namely
> > QEMU and snabbswitch).
>
> That is not an argument; vhost-user is a specific case.
>
> Best regards,
>   Vincent
>

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
@ 2014-06-13  9:31                 ` Jobin Raju George
  0 siblings, 0 replies; 91+ messages in thread
From: Jobin Raju George @ 2014-06-13  9:31 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Henning Schild, Olivier MATZ, kvm, QEMU Developers,
	Markus Armbruster, thomas.monjalon, Paolo Bonzini,
	virtualization, David Marchand

Nahanni's stagnant development, coupled with virtio's promising
expansion, was what encouraged us to explore virtio-serial [1] for
inter-VM communication. virtio-serial as it stands isn't sufficient
for that purpose; some work is needed, and that is exactly what we
(two fellow classmates and I) accomplished.

We haven't published it yet, since it still needs polishing for
upstreaming; we plan to do that in the near future.

[1]: http://fedoraproject.org/wiki/Features/VirtioSerial


On Fri, Jun 13, 2014 at 2:56 PM, Vincent JARDIN
<vincent.jardin@6wind.com> wrote:
>
> (+merging with Paolo's email because of overlaps)
>
>
>>> see inline (I am not on all the mailing lists; please keep the cc list).
>>>
>
>>>> 1. ivshmem code needs work, but has no maintainer
>>>
>>> See David's contributions:
>>>    http://patchwork.ozlabs.org/patch/358750/
>>
>>
>> We're grateful for David's patch for qemu-char.c, but this isn't ivshmem
>> maintenance, yet.
>
>
> Other contributions (documentation) can come; see below.
>
>
>>>> 2. There is no libvirt support
>>>
>>>
>>> One can use qemu without libvirt.
>>
>>
>> You asked me for my reasons for disliking ivshmem.  This is one.
>>
>> Sure, I can drink my water through a straw while standing on one foot,
>> but that doesn't mean I have to like it.  And me not liking it doesn't
>> mean the next guy shouldn't like it.  To each their own.
>
>
> I like using qemu without libvirt; libvirt is not part of qemu.
> Let's avoid trolling about it ;)
>
>
>> Back when we accepted ivshmem, the out-of-tree parts it needs were well
>> below the "community & packaged" bar.  But folks interested in it talked
>> to us, and the fact that it's in shows that QEMU maintainers decided
>> what they had then was enough.
>>
>> Unfortunately, we now have considerably less: Nahanni appears to be
>> dead.
>
>
> Agreed, and too bad it is dead. We should let Nahanni rest, since ivshmem is a QEMU topic now; see below. Does that make sense?
>
>
>>
>> An apparently dead git repository you can study is not enough.  The fact
>> that you hold an improved reimplementation privately is immaterial.  So
>> is the (plausible) claim that others could also create a
>> reimplementation.
>
>
> Got the point. What about a patch to docs/specs/ivshmem_device_spec.txt that improves it?
>
> I can make qemu's ivshmem better:
>   - keep explaining memnic, for instance,
>   - explain how to write other ivshmem-based applications.
>
> does it help?
>
>
>>>> 4. Out-of-tree kernel uio driver required
>>>
>>>
>>> No, it is optional.
>>
>>
>> Good to know.  Would you be willing to send a patch to
>> ivshmem_device_spec.txt clarifying that?
>
>
> Got the point, yes.
>
>
>>>> * Get all the required parts outside QEMU packaged in major distros, or
>>>>     absorbed into QEMU
>>>
>>>
>>> Red Hat did disable it. Why? It is there in QEMU.
>>
>>
>> Up to now, I've been wearing my QEMU hat.  Let me exchange it for my Red
>> one for a bit.
>>
>> We (Red Hat) don't just package & ship metric tons of random free
>> software.  We package & ship useful free software we can support for
>> many, many years.
>>
>> Sometimes, we find that we have to focus serious development resources
>> on making something useful supportable (Paolo mentioned qcow2).  We
>> obviously can't focus on everything, though.
>
>
> Good open technology should rule. ivshmem has use cases. And I do agree with you: like the phoenix, it has to be re-explained and re-documented to come back to life. I was not aware that the QEMU community was missing ivshmem contributors (my bad, I did not check MAINTAINERS).
>
>
>> Anyway, ivshmem didn't make the cut for RHEL-7.0.  Sorry if that
>> inconveniences you.  To get it into RHEL, you need to show it's both
>> useful and supportable.  Building a community around it would go a long
>> way towards that.
>
>
> understood.
>
>
>> If you want to discuss this in more detail with us, you may want to try
>> communication channels provided by your RHEL subscription in addition to
>> the QEMU development mailing list.  Don't be shy, you're paying for it!
>
>
> done. I was focusing on DPDK.org and ignorant of QEMU's status, thinking Red Hat was covering it. How can one know which parts of an open-source project are and are not included in Red Hat? Sales are ignorant about it ;). Red Hat disables some files at compilation, seemingly at random (for some good reasons, I guess, but the rationale is not public, or I am missing something).
>
> Feel free to open this bug report to anyone:
>   https://bugzilla.redhat.com/show_bug.cgi?id=1088332
>
>
>>>> In short, create a viable community around ivshmem, either within the
>>>> QEMU community, or separately but cooperating.
>>>
>>>
>>> At least, DPDK.org community is a community using it.
>>
>>
>> Using something isn't the same as maintaining something.  But it's a
>> necessary first step.
>
>
> Understood; after David's patch, documentation will come.
>
> (now Paolo's email since there were some overlaps)
>
> > Markus especially referred to parts *outside* QEMU: the server, the
> > uio driver, etc.  These out-of-tree, non-packaged parts of ivshmem
> > are one of the reasons why Red Hat has disabled ivshmem in RHEL7.
>
> You made the right choices; these out-of-tree packages are not required. You can use QEMU's ivshmem without any of them; they are just examples of how to use ivshmem.
>
> > He also listed many others.  Basically for parts of QEMU that are not
> > of high quality, we either fix them (this is for example what we did
> > for qcow2) or disable them.  Not just ivshmem suffered this fate, for
> > example many network cards, sound cards, SCSI storage adapters.
>
> David (cc) and I are working on improving it based on the issues that are found.
>
> > Now, vhost-user is in the process of being merged for 2.1.  Compared to the DPDK solution:
>
> now, you cannot compare vhost-user to DPDK/ivshmem; both should exist because they have different scopes and use cases. It is like comparing two different (A) models of IPC:
>   - vhost-user -> networking use case specific
>   - ivshmem -> framework to be generic to have shared memory for many use cases (HPC, in-memory-database, a network too like memnic).
>
> Later on, some new services will be needed for shared memory. virtio will come into the picture (see the VIRTIO_F_RING_SHMEM_ADDR threads). Currently, ivshmem is the only "stable" option, since many issues with virtio and shared memory remain unsolved.
>
> > * it doesn't require hugetlbfs (which only enabled shared memory by
> > chance in older QEMU releases, that was never documented)
>
> ivshmem does not require hugetlbfs. It is optional.
>
> > * it doesn't require ivshmem (it does require shared memory, which
> > will also be added to 2.1)
>
> somehow I agree: we need both models: vhost-user and ivshmem because of the previous (A) comments.
>
> > * it doesn't require the kernel driver from the DPDK sample
>
> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
>   http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c
>
> > * it is not just shared memory, but also defines an interface to use
> > it (another of Markus's points)
>
> Agreed, Paolo, but you narrow it down to networking use cases only. Shared memory à la ivshmem provides other features (see (A) again).
>
> >
> > vhost-user is superior, and it is superior because it has been
> > designed
> > from the get-go through cooperation of all interested parties (namely
> > QEMU and snabbswitch).
>
> That is not an argument; vhost-user is a specific case.
>
> Best regards,
>   Vincent
>

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13  9:26             ` Vincent JARDIN
@ 2014-06-13  9:48                 ` Olivier MATZ
  2014-06-13  9:31               ` Jobin Raju George
                                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 91+ messages in thread
From: Olivier MATZ @ 2014-06-13  9:48 UTC (permalink / raw)
  To: Vincent JARDIN, Markus Armbruster, Paolo Bonzini
  Cc: Henning Schild, David Marchand, qemu-devel, kvm, virtualization,
	thomas.monjalon

Hello,

On 06/13/2014 11:26 AM, Vincent JARDIN wrote:
> ivshmem does not require hugetlbfs. It is optional.
>
>  > * it doesn't require ivshmem (it does require shared memory, which
>  > will also be added to 2.1)

Right, hugetlbfs is not required. POSIX shared memory or tmpfs
can be used instead. For instance, to use /dev/shm/foobar:

   qemu-system-x86_64 -enable-kvm -cpu host [...] \
      -device ivshmem,size=16,shm=foobar
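
For reference, a peer process on the host could create and map that same
object with plain POSIX calls. A minimal sketch (the name "/foobar" and
the 16 MB size just mirror the example above; compile with -lrt):

   #include <fcntl.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       /* backed by /dev/shm/foobar, matching shm=foobar above */
       int fd = shm_open("/foobar", O_CREAT | O_RDWR, 0600);
       if (fd < 0 || ftruncate(fd, 16 << 20) < 0)   /* size=16 -> 16 MB */
           return 1;
       char *p = mmap(NULL, 16 << 20, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
       if (p == MAP_FAILED)
           return 1;
       p[0] = 'x';   /* becomes visible to the guest through the BAR */
       return 0;
   }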


Regards,
Olivier

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13  9:26             ` Vincent JARDIN
                                 ` (3 preceding siblings ...)
  2014-06-13  9:48                 ` Olivier MATZ
@ 2014-06-13 10:09               ` Paolo Bonzini
  2014-06-13 13:41                 ` Vincent JARDIN
  2014-06-13 13:41                   ` Vincent JARDIN
  4 siblings, 2 replies; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-13 10:09 UTC (permalink / raw)
  To: Vincent JARDIN, Markus Armbruster
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel, David Marchand,
	virtualization, thomas.monjalon

On 13/06/2014 11:26, Vincent JARDIN wrote:
>> Markus especially referred to parts *outside* QEMU: the server, the
>> uio driver, etc.  These out-of-tree, non-packaged parts of ivshmem
>> are one of the reasons why Red Hat has disabled ivshmem in RHEL7.
>
> You made the right choices, these out-of-tree packages are not required.
> You can use QEMU's ivshmem without any of the out-of-tree packages. The
> out-of-tree packages are just some examples of using ivshmem.

Fine, however Red Hat would also need a way to test ivshmem code, with 
proper quality assurance (that also benefits upstream, of course).  With 
ivshmem this is not possible without the out-of-tree packages.

Disabling all the unwanted devices is a lot of work and thankless too 
(you only get complaints, in fact!).  But we prefer to ship only what we 
know we can test, support and improve.  We do not want customers' bug 
reports to languish because they are using code that cannot really be fixed.

Note that we do take into account community contributions in choosing 
which new code can be supported.  For example most work on VMDK images 
was done by Fam when he was a student, libiscsi is mostly the work of 
Peter Lieven, and so on; both of them are supported in RHEL.  These 
people did/do a great job, and we were happy to embrace those features!

Now, putting back my QEMU hat...

>> He also listed many others.  Basically for parts of QEMU that are not
>> of high quality, we either fix them (this is for example what we did
>> for qcow2) or disable them.  Not just ivshmem suffered this fate, for
>> example many network cards, sound cards, SCSI storage adapters.
>
> David (cc) and I are working on improving it based on the issues
> that are found.
>
>> Now, vhost-user is in the process of being merged for 2.1.  Compared
> to the DPDK solution:
>
> now, you cannot compare vhost-user to DPDK/ivshmem; both should exist
> because they have different scopes and use cases. It is like comparing
> two different (A) models of IPC:
>   - vhost-user -> networking use case specific

Not necessarily.  First and foremost, vhost-user defines an API for 
communication between QEMU and the host, including:

* file descriptor passing for the shared memory file

* mapping offsets in shared memory to physical memory addresses in the 
guests

* passing dirty memory information back and forth, so that migration is 
not prevented

* sending interrupts to a device

* setting up ring buffers in the shared memory


None of these is virtio specific, except the last (even then, you could 
repurpose the messages to pass the address of the whole shared memory 
area, instead of the vrings only).

Yes, the only front-end for vhost-user, right now, is a network device. 
  But it is possible to connect vhost-scsi to vhost-user as well, it is 
possible to develop a vhost-serial as well, and it is possible to only 
use the RPC and develop arbitrary shared-memory based tools using this 
API.  It's just that no one has done it yet.

Also, vhost-user is documented! See here: 
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00581.html
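
To give a feel for it, each message is a small header plus a payload
union, sent over a unix domain socket. A rough sketch of the framing
described there (field names paraphrased, not copied verbatim):

   #include <stdint.h>

   /* File descriptors for the shared memory regions ride along as
    * SCM_RIGHTS ancillary data on the same socket. */
   struct vhost_user_msg {
       uint32_t request;   /* e.g. set memory table, set vring address */
       uint32_t flags;     /* protocol version and reply bit */
       uint32_t size;      /* number of payload bytes that follow */
       union {
           uint64_t u64;
           struct { uint32_t index; uint32_t num; } vring_state;
           /* ... memory region table, vring addresses, ... */
       } payload;
   };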

The only part of ivshmem that vhost doesn't include is the n-way 
inter-guest doorbell.  This is the part that requires a server and uio 
driver.  vhost only supports host->guest and guest->host doorbells.
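
Those doorbells are essentially eventfds. A toy illustration of the
primitive (illustrative only, not QEMU code):

   #include <stdint.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   int main(void)
   {
       int bell = eventfd(0, 0);      /* shared between the two sides */
       uint64_t v = 1;

       write(bell, &v, sizeof(v));    /* one side rings the bell (kick) */
       read(bell, &v, sizeof(v));     /* the other side consumes it */
       close(bell);
       return 0;
   }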

>> * it doesn't require hugetlbfs (which only enabled shared memory by
>> chance in older QEMU releases, that was never documented)
>
> ivshmem does not require hugetlbfs. It is optional.
>
>> * it doesn't require the kernel driver from the DPDK sample
>
> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
>   http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c

You're right, I was confusing memnic and the vhost example in DPDK.

Paolo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13 10:09               ` Paolo Bonzini
@ 2014-06-13 13:41                   ` Vincent JARDIN
  2014-06-13 13:41                   ` Vincent JARDIN
  1 sibling, 0 replies; 91+ messages in thread
From: Vincent JARDIN @ 2014-06-13 13:41 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Markus Armbruster, Henning Schild, David Marchand, qemu-devel,
	kvm, virtualization, Olivier MATZ, thomas.monjalon

> Fine, however Red Hat would also need a way to test ivshmem code, with
> proper quality assurance (that also benefits upstream, of course).  With
> ivshmem this is not possible without the out-of-tree packages.

You did not reply to my question: how does one get the list of things
that are/will be disabled by Red Hat?

About Red Hat's QA, I do not care.
About QEMU's QA, I do care ;)

I guess we can combine both. What about something like
   tests/virtio-net-test.c   # qtest_add_func() is a nop
but for ivshmem:
   tests/ivshmem-test.c
?

Would it have any value?
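
A hypothetical skeleton, mirroring the virtio-net nop test (just a
sketch; such a file does not exist yet):

   /* tests/ivshmem-test.c -- hypothetical nop qtest */
   #include <glib.h>
   #include "libqtest.h"

   static void nop(void)
   {
       /* starting QEMU with the device attached is the whole test */
   }

   int main(int argc, char **argv)
   {
       int ret;

       g_test_init(&argc, &argv, NULL);
       qtest_add_func("/ivshmem/nop", nop);

       qtest_start("-device ivshmem,size=1,shm=ivshmem-test");
       ret = g_test_run();
       qtest_end();

       return ret;
   }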

If not, what do you use at Red Hat to test QEMU?

>> now, you cannot compare vhost-user to DPDK/ivshmem; both should exist
>> because they have different scopes and use cases. It is like comparing
>> two different (A) models of IPC:

I repeat this use case, which you had removed, because vhost-user does
not solve it yet:

 >>  - ivshmem -> framework to be generic to have shared memory for many
 >> use cases (HPC, in-memory-database, a network too like memnic).

>>   - vhost-user -> networking use case specific
>
> Not necessarily.  First and foremost, vhost-user defines an API for
> communication between QEMU and the host, including:
> * file descriptor passing for the shared memory file
> * mapping offsets in shared memory to physical memory addresses in the
> guests
> * passing dirty memory information back and forth, so that migration is
> not prevented
> * sending interrupts to a device
> * setting up ring buffers in the shared memory

Yes, I do agree that it is promising.
And of course some tests are here:
   https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00584.html
for some of the bullets you are listing (not all yet).

> Also, vhost-user is documented! See here:
> https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg00581.html

as I told you, we'll send a contribution with ivshmem's documentation.

> The only part of ivshmem that vhost doesn't include is the n-way
> inter-guest doorbell.  This is the part that requires a server and uio
> driver.  vhost only supports host->guest and guest->host doorbells.

Agree: both will need it. vhost and ivshmem require a doorbell for
VM2VM, but then we'll have a security issue to be managed by Qemu for
vhost and ivshmem.
I'll be pleased to contribute to it for ivshmem in another thread than
this one.

>> ivshmem does not require the DPDK kernel driver. See memnic's PMD:
>>   http://dpdk.org/browse/memnic/tree/pmd/pmd_memnic.c
>
> You're right, I was confusing memnic and the vhost example in DPDK.

Definitely, it proves a lack of documentation. You're welcome. Olivier
did explain it:

>> ivshmem does not require hugetlbfs. It is optional.
>>
>>  > * it doesn't require ivshmem (it does require shared memory, which
>>  > will also be added to 2.1)
>
> Right, hugetlbfs is not required. POSIX shared memory or tmpfs
> can be used instead. For instance, to use /dev/shm/foobar:
>
>   qemu-system-x86_64 -enable-kvm -cpu host [...] \
>      -device ivshmem,size=16,shm=foobar


Best regards,
   Vincent

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13 13:41                   ` Vincent JARDIN
@ 2014-06-13 14:10                     ` Paolo Bonzini
  -1 siblings, 0 replies; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-13 14:10 UTC (permalink / raw)
  To: Vincent JARDIN
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel, virtualization,
	thomas.monjalon, David Marchand

On 13/06/2014 15:41, Vincent JARDIN wrote:
>> Fine, however Red Hat would also need a way to test ivshmem code, with
>> proper quality assurance (that also benefits upstream, of course).  With
>> ivshmem this is not possible without the out-of-tree packages.
>
> You did not reply to my question: how does one get the list of things
> that are/will be disabled by Red Hat?

I don't know exactly what the answer is, and this is probably not the 
right list to discuss it.  I guess there are partnership programs with 
Red Hat that I don't know the details of, but these are more for 
management folks and not really for developers.

ivshmem in particular was disabled even in RHEL7 beta, so you could have 
found out about this in December and opened a bug in Bugzilla about it.

> I guess we can combine both. What about something like
>   tests/virtio-net-test.c   # qtest_add_func() is a nop
> but for ivshmem:
>   tests/ivshmem-test.c
> ?
>
> Would it have any value?

The first things to do are:

1) try to understand whether there is any value in a simplified shared
memory device with no interrupts (and thus no eventfd or uio
dependencies, not even optionally).  You are not using them because
DPDK only does polling and basically reserves a core for the NIC code.
If so, this would be a very simple device, just 100 or so lines of code
(see the polling sketch after this list).  We could get this in
upstream, and it would likely be enabled in RHEL too.

2) if not, get the server and uio driver merged into the QEMU tree, and 
document the protocol in docs/specs/ivshmem_device_spec.txt.  It doesn't 
matter if the code comes from the Nahanni repository or from your own 
implementation.  Also start fixing bugs such as the ones that Markus 
reported (removing all exit() invocations).
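
To make point 1 concrete: with pure polling the device is just a slab
of shared RAM, and the guests can layer e.g. a single-producer,
single-consumer ring on top of it. A hedged sketch (a made-up layout,
not memnic's actual one):

   #include <stdint.h>
   #include <string.h>

   #define SLOT_SZ 64

   /* One-way channel in the shared region: one guest produces, the
    * other consumes, and nobody ever takes an interrupt. */
   struct ring {
       volatile uint32_t head;   /* advanced only by the producer */
       volatile uint32_t tail;   /* advanced only by the consumer */
       uint32_t nslots;          /* power of two, fixed at setup */
       char slots[][SLOT_SZ];
   };

   static int ring_put(struct ring *r, const void *msg)
   {
       if (r->head - r->tail == r->nslots)
           return -1;                          /* full: keep polling */
       memcpy(r->slots[r->head & (r->nslots - 1)], msg, SLOT_SZ);
       __sync_synchronize();                   /* publish data first */
       r->head++;
       return 0;
   }

   static int ring_get(struct ring *r, void *msg)
   {
       if (r->tail == r->head)
           return -1;                          /* empty: keep polling */
       memcpy(msg, r->slots[r->tail & (r->nslots - 1)], SLOT_SZ);
       __sync_synchronize();                   /* copy before freeing */
       r->tail++;
       return 0;
   }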

Writing testcases using the qtest framework would also be useful, but 
first of all it is important to make ivshmem easier to use.

> If not, what do you use at Red Hat to test QEMU?

We do integration testing using autotest/virt-test (QEMU and KVM
developers use it for upstream work too) and also some manual
functional tests.

Contributing ivshmem tests to virt-test would also be helpful in
demonstrating your interest in maintaining ivshmem.  The repository and
documentation are at https://github.com/autotest/virt-test/ (a bit
Fedora-centric).

> I repeat this use case, which you had removed, because vhost-user does
> not solve it yet:
>
>>>  - ivshmem -> framework to be generic to have shared memory for many
>>> use cases (HPC, in-memory-database, a network too like memnic).

Right, ivshmem is better for guest-to-guest.  vhost-user is not 
restricted to networking, but it is indeed more focused on 
guest-to-host.  ivshmem is usable for guest-to-host, but I would still
prefer some "hybrid" that uses vhost-like messages to pass the shared
memory fds to the external program.

Paolo

^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: Using virtio for inter-VM communication
  2014-06-13  8:45           ` [Qemu-devel] " Paolo Bonzini
@ 2014-06-15  6:20             ` Jan Kiszka
  -1 siblings, 0 replies; 91+ messages in thread
From: Jan Kiszka @ 2014-06-15  6:20 UTC (permalink / raw)
  To: Paolo Bonzini, Rusty Russell, Henning Schild, qemu-devel,
	virtualization, kvm
  Cc: Jailhouse

On 2014-06-13 10:45, Paolo Bonzini wrote:
> On 13/06/2014 08:23, Jan Kiszka wrote:
>>>> That would preserve zero-copy capabilities (as long as you can work
>>>> against the shared mem directly, e.g. doing DMA from a physical NIC or
>>>> storage device into it) and keep the hypervisor out of the loop.
>> >
>> > This seems ill thought out.  How will you program a NIC via the virtio
>> > protocol without a hypervisor?  And how will you make it safe?  You'll
>> > need an IOMMU.  But if you have an IOMMU you don't need shared memory.
>>
>> Scenarios behind this are things like driver VMs: You pass through the
>> physical hardware to a driver guest that talks to the hardware and
>> relays data via one or more virtual channels to other VMs. This confines
>> a certain set of security and stability risks to the driver VM.
> 
> I think implementing Xen hypercalls in jailhouse for grant table and
> event channels would actually make a lot of sense.  The Xen
> implementation is 2.5kLOC and I think it should be possible to compact
> it noticeably, especially if you limit yourself to 64-bit guests.

At least the grant table model seems unsuited for Jailhouse: it allows a
guest to influence the mapping of another guest at runtime. We want (or
even need) to avoid this in Jailhouse.

I'm therefore more in favor of a model where the shared memory region is
defined on cell (guest) creation by adding a virtual device that comes
with such a region.

Jan

> 
> It should also be almost enough to run Xen PVH guests as jailhouse
> partitions.
> 
> If later Xen starts to support virtio, you will get that for free.
> 
> Paolo




^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-13 14:10                     ` Paolo Bonzini
  (?)
  (?)
@ 2014-06-17  2:54                     ` Stefan Hajnoczi
  2014-06-17  9:03                         ` David Marchand
  2014-06-17  9:03                       ` David Marchand
  -1 siblings, 2 replies; 91+ messages in thread
From: Stefan Hajnoczi @ 2014-06-17  2:54 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, Vincent JARDIN, David Marchand,
	thomas.monjalon

On Fri, Jun 13, 2014 at 10:10 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 13/06/2014 15:41, Vincent JARDIN wrote:
>> I repeat this use case, which you had removed, because vhost-user does
>> not solve it yet:
>>
>>>>  - ivshmem -> framework to be generic to have shared memory for many
>>>> use cases (HPC, in-memory-database, a network too like memnic).
>
>
> Right, ivshmem is better for guest-to-guest.  vhost-user is not restricted
> to networking, but it is indeed more focused on guest-to-host.  ivshmem is
> usable for guest-to-host, but I would still prefer some "hybrid" that uses
> vhost-like messages to pass the shared memory fds to the external program.

ivshmem has a performance disadvantage for guest-to-host
communication.  Since the shared memory is exposed as PCI BARs, the
guest has to memcpy into the shared memory.

vhost-user can access guest memory directly and avoid the copy inside the guest.
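
To illustrate the extra copy on the guest side (a sketch only: the
/dev/uio0 node is from the nahanni uio_ivshmem driver, and the BAR index,
region size and signalling details are assumptions):

  #include <fcntl.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define SHM_SIZE (1 << 20)          /* assumed shared-memory size */

  int send_buf(const void *data, size_t len)
  {
      int fd = open("/dev/uio0", O_RDWR);     /* uio_ivshmem device */
      if (fd < 0)
          return -1;
      /* UIO exposes mapping N at mmap offset N * page size; mapping 1
       * is assumed to be the shared-memory BAR here */
      void *shm = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 1 * (off_t)getpagesize());
      if (shm == MAP_FAILED) {
          close(fd);
          return -1;
      }
      memcpy(shm, data, len);         /* the copy vhost-user avoids */
      /* ... ring bookkeeping and doorbell to the peer elided ... */
      munmap(shm, SHM_SIZE);
      close(fd);
      return 0;
  }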

Unless someone steps up and maintains ivshmem, I think it should be
deprecated and dropped from QEMU.

Stefan


* Re: Using virtio for inter-VM communication
  2014-06-15  6:20             ` [Qemu-devel] " Jan Kiszka
@ 2014-06-17  5:24               ` Paolo Bonzini
  -1 siblings, 0 replies; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-17  5:24 UTC (permalink / raw)
  To: Jan Kiszka, Rusty Russell, Henning Schild, qemu-devel,
	virtualization, kvm
  Cc: Jailhouse

On 15/06/2014 08:20, Jan Kiszka wrote:
>> > I think implementing Xen hypercalls in jailhouse for grant table and
>> > event channels would actually make a lot of sense.  The Xen
>> > implementation is 2.5kLOC and I think it should be possible to compact
>> > it noticeably, especially if you limit yourself to 64-bit guests.
> At least the grant table model seems unsuited for Jailhouse. It allows a
> guest to influence the mapping of another guest during runtime. This we
> want (or even have) to avoid in Jailhouse.

IIRC implementing the grant table hypercalls with copies is inefficient 
but valid.
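
For reference, the copy-based variant already exists in Xen's public
interface as GNTTABOP_copy. Roughly, from a Linux guest (a sketch from
memory of the grant_table.h layout; grant setup and error handling are
elided, and local_gmfn/peer_grant_ref/peer_domid are placeholders):

  #include <xen/interface/grant_table.h>

  /* copy len bytes from one of our own frames into a page the peer
   * domain granted us, without ever mapping that page */
  struct gnttab_copy op = {
      .source.u.gmfn = local_gmfn,      /* our frame number */
      .source.domid  = DOMID_SELF,
      .source.offset = 0,
      .dest.u.ref    = peer_grant_ref,  /* grant ref from the peer */
      .dest.domid    = peer_domid,
      .dest.offset   = 0,
      .len           = len,
      .flags         = GNTCOPY_dest_gref,
  };

  HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1);
  /* op.status == GNTST_okay on success */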

Paolo



* Re: Using virtio for inter-VM communication
  2014-06-17  5:24               ` [Qemu-devel] " Paolo Bonzini
@ 2014-06-17  5:57                 ` Jan Kiszka
  -1 siblings, 0 replies; 91+ messages in thread
From: Jan Kiszka @ 2014-06-17  5:57 UTC (permalink / raw)
  To: Paolo Bonzini, Rusty Russell, Henning Schild, qemu-devel,
	virtualization, kvm
  Cc: Jailhouse


On 2014-06-17 07:24, Paolo Bonzini wrote:
> On 15/06/2014 08:20, Jan Kiszka wrote:
>>> > I think implementing Xen hypercalls in jailhouse for grant table and
>>> > event channels would actually make a lot of sense.  The Xen
>>> > implementation is 2.5kLOC and I think it should be possible to compact
>>> > it noticeably, especially if you limit yourself to 64-bit guests.
>> At least the grant table model seems unsuited for Jailhouse. It allows a
>> guest to influence the mapping of another guest during runtime. This we
>> want (or even have) to avoid in Jailhouse.
> 
> IIRC implementing the grant table hypercalls with copies is inefficient
> but valid.

Back to #1: this is what Rusty is suggesting for virtio. There is nothing
to gain from grant tables then. And if we really have to copy, I would
prefer to use a standard.

I guess we need to play with prototypes to assess feasibility and impact
on existing code.

Jan





* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-17  2:54                     ` Stefan Hajnoczi
@ 2014-06-17  9:03                         ` David Marchand
  1 sibling, 0 replies; 91+ messages in thread
From: David Marchand @ 2014-06-17  9:03 UTC (permalink / raw)
  To: Stefan Hajnoczi, Paolo Bonzini
  Cc: Vincent JARDIN, Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, thomas.monjalon

Hello all,

On 06/17/2014 04:54 AM, Stefan Hajnoczi wrote:
> ivshmem has a performance disadvantage for guest-to-host
> communication.  Since the shared memory is exposed as PCI BARs, the
> guest has to memcpy into the shared memory.
>
> vhost-user can access guest memory directly and avoid the copy inside the guest.

Actually, you can avoid this memory copy using frameworks like DPDK.


> Unless someone steps up and maintains ivshmem, I think it should be
> deprecated and dropped from QEMU.

Then I can maintain ivshmem for QEMU.
If this is ok, I will send a patch for the MAINTAINERS file.


-- 
David Marchand


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-17  9:03                         ` David Marchand
  2014-06-18 10:48                             ` Stefan Hajnoczi
  -1 siblings, 1 reply; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-17  9:44 UTC (permalink / raw)
  To: David Marchand, Stefan Hajnoczi
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, Vincent JARDIN, thomas.monjalon

On 17/06/2014 11:03, David Marchand wrote:
>> Unless someone steps up and maintains ivshmem, I think it should be
>> deprecated and dropped from QEMU.
>
> Then I can maintain ivshmem for QEMU.
> If this is ok, I will send a patch for MAINTAINERS file.

Typically, adding yourself to maintainers is done only after having 
proved your ability to be a maintainer. :)

So, let's stop talking and go back to code!  You can start doing what 
was suggested elsewhere in the thread: get the server and uio driver 
merged into the QEMU tree, document the protocol in 
docs/specs/ivshmem_device_spec.txt, and start fixing bugs such as the 
ones that Markus reported.

Since ivshmem is basically KVM-only (it has a soft dependency on 
ioeventfd), CC the patches to kvm@vger.kernel.org and I'll merge them 
via the KVM tree for now.  I'll (more than) gladly give maintainership 
away in due time.

Paolo


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-17  9:44                         ` Paolo Bonzini
@ 2014-06-18 10:48                             ` Stefan Hajnoczi
  0 siblings, 0 replies; 91+ messages in thread
From: Stefan Hajnoczi @ 2014-06-18 10:48 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel, David Marchand,
	Linux Virtualization, Vincent JARDIN, thomas.monjalon



On Tue, Jun 17, 2014 at 11:44:11AM +0200, Paolo Bonzini wrote:
> On 17/06/2014 11:03, David Marchand wrote:
> >>Unless someone steps up and maintains ivshmem, I think it should be
> >>deprecated and dropped from QEMU.
> >
> >Then I can maintain ivshmem for QEMU.
> >If this is ok, I will send a patch for MAINTAINERS file.
> 
> Typically, adding yourself to maintainers is done only after having proved
> your ability to be a maintainer. :)
> 
> So, let's stop talking and go back to code!  You can start doing what was
> suggested elsewhere in the thread: get the server and uio driver merged into
> the QEMU tree, document the protocol in docs/specs/ivshmem_device_spec.txt,
> and start fixing bugs such as the ones that Markus reported.

One more thing to add to the list:

static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)

The "flags" argument should be "size".  Size should be checked before
accessing buf.
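
I.e., something along these lines (a sketch only; IVShmemState and the
peer-id message parsing are QEMU-internal and merely hinted at):

  /* chardev read callback: the third argument is the number of valid
   * bytes in buf, not a flags word */
  static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
  {
      IVShmemState *s = opaque;       /* device state, as before */

      if (size < (int)sizeof(long)) {
          /* short read: not even one complete peer-id message */
          return;
      }
      /* ... parse the peer-id/eventfd messages from buf ... */
  }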

Please also see the bug fixes in the following unapplied patch:
"[PATCH] ivshmem: fix potential OOB r/w access (#2)" by Sebastian Krahmer
https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg03538.html

Stefan



* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-17  9:03                         ` David Marchand
@ 2014-06-18 10:51                           ` Stefan Hajnoczi
  -1 siblings, 0 replies; 91+ messages in thread
From: Stefan Hajnoczi @ 2014-06-18 10:51 UTC (permalink / raw)
  To: David Marchand
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, Paolo Bonzini, Vincent JARDIN,
	thomas.monjalon



On Tue, Jun 17, 2014 at 11:03:32AM +0200, David Marchand wrote:
> On 06/17/2014 04:54 AM, Stefan Hajnoczi wrote:
> >ivshmem has a performance disadvantage for guest-to-host
> >communication.  Since the shared memory is exposed as PCI BARs, the
> >guest has to memcpy into the shared memory.
> >
> >vhost-user can access guest memory directly and avoid the copy inside the guest.
> 
> Actually, you can avoid this memory copy using frameworks like DPDK.

I guess the application has to be careful to allocate all packets in
the mmapped BAR?

That's fine if you can modify applications but doesn't work for
unmodified applications using regular networking APIs.

Stefan



* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-17  9:03                         ` David Marchand
@ 2014-06-18 14:22                         ` Claudio Fontana
  -1 siblings, 0 replies; 91+ messages in thread
From: Claudio Fontana @ 2014-06-18 14:22 UTC (permalink / raw)
  To: qemu-devel

On 17.06.2014 11:03, David Marchand wrote:
> Hello all,
> 
> On 06/17/2014 04:54 AM, Stefan Hajnoczi wrote:
>> ivshmem has a performance disadvantage for guest-to-host
>> communication.  Since the shared memory is exposed as PCI BARs, the
>> guest has to memcpy into the shared memory.
>>
>> vhost-user can access guest memory directly and avoid the copy inside the guest.
> 
> Actually, you can avoid this memory copy using frameworks like DPDK.
> 
> 
>> Unless someone steps up and maintains ivshmem, I think it should be
>> deprecated and dropped from QEMU.
> 
> Then I can maintain ivshmem for QEMU.
> If this is ok, I will send a patch for MAINTAINERS file.
> 
> 

Just a +1 over here for the need for a guest-to-guest shared memory solution.

There are several internal requirements for that, and I saw this discussion just as I was starting to build on top of nahanni/ivshmem.

In general, what I'd like to see is for ivshmem (or any other guest-to-guest shared memory communication solution)
to be consolidated into the QEMU codebase, rather than having to pick and choose pieces from different repositories.
vhost-user is interesting and welcome; however, guest-to-host communication is not the use case I have over here at the moment.

Ciao,

Claudio


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 10:48                             ` Stefan Hajnoczi
@ 2014-06-18 14:57                               ` David Marchand
  -1 siblings, 0 replies; 91+ messages in thread
From: David Marchand @ 2014-06-18 14:57 UTC (permalink / raw)
  To: Stefan Hajnoczi, Paolo Bonzini
  Cc: Vincent JARDIN, Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, thomas.monjalon

Hello Stefan,

On 06/18/2014 12:48 PM, Stefan Hajnoczi wrote:
> One more thing to add to the list:
>
> static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
>
> The "flags" argument should be "size".  Size should be checked before
> accessing buf.

You are welcome to send a fix and I will review it.

>
> Please also see the bug fixes in the following unapplied patch:
> "[PATCH] ivshmem: fix potential OOB r/w access (#2)" by Sebastian Krahmer
> https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg03538.html

Thanks for the pointer. I'll check it.



-- 
David Marchand


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 10:51                           ` Stefan Hajnoczi
@ 2014-06-18 14:58                             ` David Marchand
  -1 siblings, 0 replies; 91+ messages in thread
From: David Marchand @ 2014-06-18 14:58 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Paolo Bonzini, Vincent JARDIN, Henning Schild, Olivier MATZ, kvm,
	qemu-devel, Linux Virtualization, thomas.monjalon


On 06/18/2014 12:51 PM, Stefan Hajnoczi wrote:
>>
>> Actually, you can avoid this memory copy using frameworks like DPDK.
>
> I guess the application has to be careful to allocate all packets in
> the mmapped BAR?

Yes.

> That's fine if you can modify applications but doesn't work for
> unmodified applications using regular networking APIs.

If you have access to source code, this should not be a problem.
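
The idea is that the descriptor ring and the packet buffers themselves
live inside the mmapped BAR, so frames are composed in place and never
copied a second time. A sketch of such a layout (invented names and
sizes, not the actual memnic code; single producer, single consumer):

  #include <stddef.h>
  #include <stdint.h>

  struct shm_pkt {
      uint32_t len;
      uint8_t  data[2048];
  };

  struct shm_ring {                 /* lives inside the ivshmem BAR */
      volatile uint32_t head;       /* written by the producer */
      volatile uint32_t tail;       /* written by the consumer */
      struct shm_pkt    pkts[256];
  };

  static struct shm_ring *ring;     /* = mmap()ed shared-memory BAR */

  /* reserve a slot: the caller builds the frame directly in shared
   * memory instead of building it elsewhere and copying it in */
  static struct shm_pkt *tx_alloc(void)
  {
      uint32_t h = ring->head;
      if (((h + 1) % 256) == ring->tail)
          return NULL;              /* ring full */
      return &ring->pkts[h];
  }

  static void tx_commit(void)
  {
      __sync_synchronize();         /* publish payload before head bump */
      ring->head = (ring->head + 1) % 256;
  }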



-- 
David Marchand


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 10:48                             ` Stefan Hajnoczi
@ 2014-06-18 15:01                               ` Andreas Färber
  -1 siblings, 0 replies; 91+ messages in thread
From: Andreas Färber @ 2014-06-18 15:01 UTC (permalink / raw)
  To: Stefan Hajnoczi, Paolo Bonzini, Vincent JARDIN
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel, David Marchand,
	Linux Virtualization, thomas.monjalon, Peter Maydell,
	Alexander Graf

On 18.06.2014 12:48, Stefan Hajnoczi wrote:
> On Tue, Jun 17, 2014 at 11:44:11AM +0200, Paolo Bonzini wrote:
>> On 17/06/2014 11:03, David Marchand wrote:
>>>> Unless someone steps up and maintains ivshmem, I think it
>>>> should be deprecated and dropped from QEMU.
>>> 
>>> Then I can maintain ivshmem for QEMU. If this is ok, I will
>>> send a patch for MAINTAINERS file.
>> 
>> Typically, adding yourself to maintainers is done only after
>> having proved your ability to be a maintainer. :)
>> 
>> So, let's stop talking and go back to code!  You can start doing
>> what was suggested elsewhere in the thread: get the server and
>> uio driver merged into the QEMU tree, document the protocol in
>> docs/specs/ivshmem_device_spec.txt, and start fixing bugs such as
>> the ones that Markus reported.
> 
> One more thing to add to the list:
> 
> static void ivshmem_read(void *opaque, const uint8_t * buf, int
> flags)
> 
> The "flags" argument should be "size".  Size should be checked
> before accessing buf.
> 
> Please also see the bug fixes in the following unapplied patch: 
> "[PATCH] ivshmem: fix potential OOB r/w access (#2)" by Sebastian
> Krahmer 
> https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg03538.html

Jumping late onto this thread: the SUSE Security team has just recently
done a thorough review of the QEMU ivshmem code because a customer has
requested this be supported in SLES12. Multiple security-related
patches were submitted by Stefan Hajnoczi and Sebastian Krahmer, and I
fear they are probably still not merged for lack of an active
maintainer... In such cases, after review, I expect them to be picked
up by Peter as committer or via qemu-trivial.

So -1, against dropping it.

Vincent, you will find an RFC for an ivshmem-test in the qemu-devel
list archives or possibly on my qtest branch. The blocking issue that
I haven't worked on yet is that we can't unconditionally run the qtest
because it depends on KVM being enabled at configure time (as opposed to
runtime) to have the device available.
http://patchwork.ozlabs.org/patch/336367/

As others have stated before, the nahanni server seems unmaintained,
so it is not getting packaged by SUSE either, which makes testing the
interrupt parts of ivshmem difficult - unless we sort out my proposed
qtest and fill it with actual test code.

Regards,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 14:57                               ` David Marchand
@ 2014-06-18 15:10                               ` Paolo Bonzini
  -1 siblings, 0 replies; 91+ messages in thread
From: Paolo Bonzini @ 2014-06-18 15:10 UTC (permalink / raw)
  To: David Marchand, Stefan Hajnoczi
  Cc: Henning Schild, Olivier MATZ, kvm, Claudio Fontana, qemu-devel,
	Linux Virtualization, Vincent JARDIN, thomas.monjalon

On 18/06/2014 16:57, David Marchand wrote:
> Hello Stefan,
>
> On 06/18/2014 12:48 PM, Stefan Hajnoczi wrote:
>> One more thing to add to the list:
>>
>> static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
>>
>> The "flags" argument should be "size".  Size should be checked before
>> accessing buf.
>
> You are welcome to send a fix and I will review it.

This is not what a maintainer should do.  A maintainer should, if 
possible, contribute fixes to improve the code.

I know this is very different from the usual "company-style" development 
(even open source software can be developed with methods more typical 
of proprietary software), but we're asking you to do it because you 
evidently understand ivshmem better than us.

Claudio has more experience with free/open-source software.  Since he's 
interested in ivshmem, he can help you too.  Perhaps you could try 
sending out the patch, and Claudio can review it and send pull requests 
at least in the beginning?

Paolo


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 15:01                               ` Andreas Färber
@ 2014-06-19  8:25                                 ` David Marchand
  -1 siblings, 0 replies; 91+ messages in thread
From: David Marchand @ 2014-06-19  8:25 UTC (permalink / raw)
  To: Andreas Färber, Stefan Hajnoczi, Paolo Bonzini, Vincent JARDIN
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, thomas.monjalon, Peter Maydell,
	Alexander Graf

On 06/18/2014 05:01 PM, Andreas Färber wrote:
> Jumping late onto this thread: the SUSE Security team has just recently
> done a thorough review of QEMU ivshmem code because a customer has
> requested this be supported in SLES12. Multiple security-related
> patches were submitted by Stefan Hajnoczi and Sebastian Krahmer, and I
> fear they are probably still not merged for lack of active
> maintainer... In such cases, after review, I expect them to be picked
> up by Peter as committer or via qemu-trivial.
>
> So -1, against dropping it.

Are these patches on patchwork?

> Vincent, you will find an RFC for an ivshmem-test in the qemu-devel
> list archives or possibly on my qtest branch. The blocking issue that
> I haven't worked on yet is that we can't unconditionally run the qtest
> because it depends on KVM enabled at configure time (as opposed to
> runtime) to have the device available.
> http://patchwork.ozlabs.org/patch/336367/
>
> As others have stated before, the nahanni server seems unmaintained,
> thus not getting packaged by SUSE either and making testing the
> interrupt parts of ivshmem difficult - unless we sort out and fill
> with actual test code my proposed qtest.

Thanks for the RFC patch.

About the ivshmem server, yes, I will look at it.
I will see what I can propose, or whether importing the nahanni
implementation as-is is the best solution.

Anyway, first, documentation.


-- 
David Marchand


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 14:57                               ` David Marchand
@ 2014-06-21  9:34                               ` Stefan Hajnoczi
  2014-06-26 20:02                                   ` Cam Macdonell
  -1 siblings, 1 reply; 91+ messages in thread
From: Stefan Hajnoczi @ 2014-06-21  9:34 UTC (permalink / raw)
  To: David Marchand
  Cc: Henning Schild, Olivier MATZ, kvm, qemu-devel,
	Linux Virtualization, Paolo Bonzini, Vincent JARDIN,
	thomas.monjalon

On Wed, Jun 18, 2014 at 10:57 PM, David Marchand
<david.marchand@6wind.com> wrote:
> On 06/18/2014 12:48 PM, Stefan Hajnoczi wrote:
>>
>> One more thing to add to the list:
>>
>> static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
>>
>> The "flags" argument should be "size".  Size should be checked before
>> accessing buf.
>
>
> You are welcome to send a fix and I will review it.

I don't plan to send ivshmem patches in the near future because I
don't use or support it.

I thought you were interested in bringing ivshmem up to a level where
distros feel comfortable enabling and supporting it.  Getting there
will require effort from you to audit, clean up, and achieve test
coverage.  That's what a maintainer needs to do in a case like this.

Stefan


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-21  9:34                               ` Stefan Hajnoczi
@ 2014-06-26 20:02                                   ` Cam Macdonell
  0 siblings, 0 replies; 91+ messages in thread
From: Cam Macdonell @ 2014-06-26 20:02 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Henning Schild, Olivier MATZ, qemu-devel, Linux Virtualization,
	thomas.monjalon, Paolo Bonzini, Vincent JARDIN, David Marchand


Hello,

Just to add my two bits.

I will fully support getting all the necessary parts of ivshmem into the
tree where appropriate, both QEMU and a driver in Linux.  I understand
those concerns.

I do not have the time to fully maintain ivshmem at the level needed, but I
will help as much as I can.

Sorry for the delay in contributing to this conversation.

Cheers,
Cam


On Sat, Jun 21, 2014 at 3:34 AM, Stefan Hajnoczi <stefanha@gmail.com> wrote:

> On Wed, Jun 18, 2014 at 10:57 PM, David Marchand
> <david.marchand@6wind.com> wrote:
> > On 06/18/2014 12:48 PM, Stefan Hajnoczi wrote:
> >>
> >> One more thing to add to the list:
> >>
> >> static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
> >>
> >> The "flags" argument should be "size".  Size should be checked before
> >> accessing buf.
> >
> >
> > You are welcome to send a fix and I will review it.
>
> I don't plan to send ivshmem patches in the near future because I
> don't use or support it.
>
> I thought you were interested in bringing ivshmem up to a level where
> distros feel comfortable enabling and supporting it.  Getting there
> will require effort from you to audit, clean up, and achieve test
> coverage.  That's what a maintainer needs to do in a case like this.
>
> Stefan


* Re: [Qemu-devel] Why I advise against using ivshmem
  2014-06-18 10:48                             ` Stefan Hajnoczi
@ 2014-06-30 11:10                               ` Markus Armbruster
  -1 siblings, 0 replies; 91+ messages in thread
From: Markus Armbruster @ 2014-06-30 11:10 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Paolo Bonzini, Henning Schild, Olivier MATZ, kvm, qemu-devel,
	David Marchand, Linux Virtualization, Vincent JARDIN,
	thomas.monjalon

Stefan Hajnoczi <stefanha@gmail.com> writes:

> On Tue, Jun 17, 2014 at 11:44:11AM +0200, Paolo Bonzini wrote:
>> On 17/06/2014 11:03, David Marchand wrote:
>> >>Unless someone steps up and maintains ivshmem, I think it should be
>> >>deprecated and dropped from QEMU.
>> >
>> >Then I can maintain ivshmem for QEMU.
>> >If this is ok, I will send a patch for MAINTAINERS file.
>> 
>> Typically, adding yourself to maintainers is done only after having proved
>> your ability to be a maintainer. :)
>> 
>> So, let's stop talking and go back to code!  You can start doing what was
>> suggested elsewhere in the thread: get the server and uio driver merged into
>> the QEMU tree, document the protocol in docs/specs/ivshmem_device_spec.txt,
>> and start fixing bugs such as the ones that Markus reported.
>
> One more thing to add to the list:
>
> static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
>
> The "flags" argument should be "size".  Size should be checked before
> accessing buf.
>
> Please also see the bug fixes in the following unapplied patch:
> "[PATCH] ivshmem: fix potential OOB r/w access (#2)" by Sebastian Krahmer
> https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg03538.html

Another one: most devices can be controlled via a dedicated
CONFIG_<DEVNAME>, but not ivshmem: it uses CONFIG_KVM and CONFIG_PCI.
Giving it its own CONFIG_IVSHMEM would be nice.
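
I.e., roughly the usual pattern (a sketch; the exact file names and the
right default value are for the eventual patch to decide):

  # default-configs/x86_64-softmmu.mak (and the other targets that want it)
  CONFIG_IVSHMEM=$(CONFIG_KVM)

  # hw/misc/Makefile.objs
  obj-$(CONFIG_IVSHMEM) += ivshmem.o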


end of thread (newest message: 2014-06-30 11:10 UTC)

Thread overview: 91+ messages
2014-06-10 16:48 Using virtio for inter-VM communication Henning Schild
2014-06-10 16:48 ` [Qemu-devel] " Henning Schild
2014-06-10 22:15 ` Vincent JARDIN
2014-06-10 22:15   ` [Qemu-devel] " Vincent JARDIN
2014-06-12  6:48   ` Markus Armbruster
2014-06-12  6:48     ` [Qemu-devel] " Markus Armbruster
2014-06-12  7:44     ` Henning Schild
2014-06-12  7:44       ` [Qemu-devel] " Henning Schild
2014-06-12  9:31       ` Vincent JARDIN
2014-06-12  9:31         ` [Qemu-devel] " Vincent JARDIN
2014-06-12 12:55       ` Markus Armbruster
2014-06-12 14:40       ` Why I advise against using ivshmem (was: [Qemu-devel] Using virtio for inter-VM communication) Markus Armbruster
2014-06-12 14:40       ` Markus Armbruster
2014-06-12 14:40         ` [Qemu-devel] Why I advise against using ivshmem (was: " Markus Armbruster
2014-06-12 16:02         ` Why I advise against using ivshmem Vincent JARDIN
2014-06-12 16:02           ` [Qemu-devel] " Vincent JARDIN
2014-06-12 16:54           ` Paolo Bonzini
2014-06-12 16:54             ` [Qemu-devel] " Paolo Bonzini
2014-06-13  8:46           ` Markus Armbruster
2014-06-13  9:26             ` Vincent JARDIN
2014-06-13  9:31               ` Jobin Raju George
2014-06-13  9:31                 ` Jobin Raju George
2014-06-13  9:31               ` Jobin Raju George
2014-06-13  9:48               ` Olivier MATZ
2014-06-13  9:48                 ` Olivier MATZ
2014-06-13 10:09               ` Paolo Bonzini
2014-06-13 13:41                 ` Vincent JARDIN
2014-06-13 13:41                   ` Vincent JARDIN
2014-06-13 14:10                   ` Paolo Bonzini
2014-06-13 14:10                     ` Paolo Bonzini
2014-06-14 18:01                     ` Vincent JARDIN
2014-06-14 18:01                       ` Vincent JARDIN
2014-06-17  2:54                     ` Stefan Hajnoczi
2014-06-17  9:03                       ` David Marchand
2014-06-17  9:03                         ` David Marchand
2014-06-17  9:44                         ` Paolo Bonzini
2014-06-18 10:48                           ` Stefan Hajnoczi
2014-06-18 10:48                             ` Stefan Hajnoczi
2014-06-18 14:57                             ` David Marchand
2014-06-18 14:57                               ` David Marchand
2014-06-18 15:10                               ` Paolo Bonzini
2014-06-21  9:34                               ` Stefan Hajnoczi
2014-06-26 20:02                                 ` Cam Macdonell
2014-06-26 20:02                                   ` Cam Macdonell
2014-06-18 15:01                             ` Andreas Färber
2014-06-18 15:01                               ` Andreas Färber
2014-06-19  8:25                               ` David Marchand
2014-06-19  8:25                                 ` David Marchand
2014-06-19  8:25                               ` David Marchand
2014-06-18 15:01                             ` Andreas Färber
2014-06-30 11:10                             ` Markus Armbruster
2014-06-30 11:10                               ` Markus Armbruster
2014-06-18 10:51                         ` Stefan Hajnoczi
2014-06-18 10:51                           ` Stefan Hajnoczi
2014-06-18 14:58                           ` David Marchand
2014-06-18 14:58                             ` David Marchand
2014-06-18 14:22                         ` Claudio Fontana
2014-06-17  9:03                       ` David Marchand
2014-06-13  9:29             ` Jobin Raju George
2014-06-13  9:29               ` [Qemu-devel] " Jobin Raju George
2014-06-13  9:29             ` Jobin Raju George
2014-06-12 16:02         ` Vincent JARDIN
2014-06-12  2:27 ` Using virtio for inter-VM communication Rusty Russell
2014-06-12  2:27   ` Rusty Russell
2014-06-12  2:27   ` [Qemu-devel] " Rusty Russell
2014-06-12  5:32   ` Jan Kiszka
2014-06-12  5:32     ` [Qemu-devel] " Jan Kiszka
2014-06-13  0:47     ` Rusty Russell
2014-06-13  0:47       ` [Qemu-devel] " Rusty Russell
2014-06-13  6:23       ` Jan Kiszka
2014-06-13  6:23         ` [Qemu-devel] " Jan Kiszka
2014-06-13  8:45         ` Paolo Bonzini
2014-06-13  8:45           ` [Qemu-devel] " Paolo Bonzini
2014-06-15  6:20           ` Jan Kiszka
2014-06-15  6:20             ` [Qemu-devel] " Jan Kiszka
2014-06-17  5:24             ` Paolo Bonzini
2014-06-17  5:24               ` [Qemu-devel] " Paolo Bonzini
2014-06-17  5:57               ` Jan Kiszka
2014-06-17  5:57                 ` [Qemu-devel] " Jan Kiszka
2014-06-17  5:24             ` Paolo Bonzini
2014-06-12  5:32   ` Jan Kiszka
