From: Srivatsa Vaddagiri <vatsa@codeaurora.org>
To: Jan Kiszka <jan.kiszka@siemens.com>
Cc: Will Deacon <will@kernel.org>,
	konrad.wilk@oracle.com, mst@redhat.com, jasowang@redhat.com,
	stefano.stabellini@xilinx.com, iommu@lists.linux-foundation.org,
	virtualization@lists.linux-foundation.org,
	virtio-dev@lists.oasis-open.org, tsoni@codeaurora.org,
	pratikp@codeaurora.org, christoffer.dall@arm.com,
	alex.bennee@linaro.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
Date: Thu, 30 Apr 2020 19:03:21 +0530	[thread overview]
Message-ID: <20200430133321.GC3204@quicinc.com> (raw)
In-Reply-To: <7bf8bffe-267b-6c66-86c9-40017d3ca4c2@siemens.com>

* Jan Kiszka <jan.kiszka@siemens.com> [2020-04-30 14:59:50]:

> >I believe ivshmem2_virtio requires hypervisor to support PCI device emulation
> >(for life-cycle management of VMs), which our hypervisor may not support. A
> >simple shared memory and doorbell or message-queue based transport will work for
> >us.
> 
> As written in our private conversation, a mapping of the ivshmem2 device
> discovery to platform mechanism (device tree etc.) and maybe even the
> register access for doorbell and life-cycle management to something
> hypercall-like would be imaginable. What would count more from virtio
> perspective is a common mapping on a shared memory transport.

Yes, that sounds simpler for us.

> That said, I also warned about all the features that PCI already defined
> (such as message-based interrupts) which you may have to add when going a
> different way for the shared memory device.

Is it really required to present this shared memory as belonging to a PCI
device? I would expect the device tree to indicate the presence of this
shared-memory region, which we should be able to hand to ivshmem2 as the
shared-memory region to use (along with some handles for doorbell or
message-queue use).
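
To make that concrete, here is a minimal sketch (purely illustrative, not part
of this patch; the "vendor,shm-doorbell" compatible string and every other
name in it are invented) of a platform driver picking up a device-tree-described
shared-memory region plus a doorbell interrupt and handing them to the
transport layer:

/*
 * Hypothetical example -- not from the patch under discussion.
 * Assumes an invented "vendor,shm-doorbell" binding with one memory
 * region and one doorbell interrupt.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/of.h>

static irqreturn_t shm_doorbell_irq(int irq, void *data)
{
	/* Peer rang the doorbell: kick the virtio/ivshmem2 layer here. */
	return IRQ_HANDLED;
}

static int shm_probe(struct platform_device *pdev)
{
	struct resource *res;
	void *shm;
	int irq, ret;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	shm = devm_memremap(&pdev->dev, res->start, resource_size(res),
			    MEMREMAP_WB);
	if (IS_ERR(shm))
		return PTR_ERR(shm);

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;

	ret = devm_request_irq(&pdev->dev, irq, shm_doorbell_irq, 0,
			       "shm-doorbell", pdev);
	if (ret)
		return ret;

	/* Hand 'shm' (and a doorbell handle) to the transport layer. */
	return 0;
}

static const struct of_device_id shm_of_match[] = {
	{ .compatible = "vendor,shm-doorbell" },	/* invented name */
	{ }
};

static struct platform_driver shm_driver = {
	.probe = shm_probe,
	.driver = {
		.name = "shm-doorbell",
		.of_match_table = shm_of_match,
	},
};
module_platform_driver(shm_driver);

MODULE_LICENSE("GPL v2");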

I understand the usefulness of modeling the shared memory as part of a device,
so that the hypervisor can send events related to peers going down or coming
up. In our case there will be other means to discover those events, and
avoiding this requirement on the hypervisor (to emulate PCI) will simplify
the solution for us.
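
As a rough illustration of what I mean by "other means": something as simple
as a notifier chain would let whatever channel the hypervisor does provide
announce peer state changes to the virtio-over-shared-memory layer. All
identifiers below are made up for the sake of the sketch:

/* Hypothetical sketch: peer lifecycle events delivered through a plain
 * notifier chain instead of PCI link/config events.
 */
#include <linux/notifier.h>

#define SHM_PEER_UP	1
#define SHM_PEER_DOWN	2

static BLOCKING_NOTIFIER_HEAD(shm_peer_notifiers);

/* Called from whatever event mechanism the hypervisor provides. */
void shm_peer_event(unsigned long event, void *peer_info)
{
	blocking_notifier_call_chain(&shm_peer_notifiers, event, peer_info);
}

/* The virtio-over-shared-memory layer subscribes like this: */
int shm_register_peer_notifier(struct notifier_block *nb)
{
	return blocking_notifier_chain_register(&shm_peer_notifiers, nb);
}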

Any idea when we can expect virtio over ivshmem2 to become available?
 
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation

Thread overview: 38+ messages
2020-04-30 10:02 [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Srivatsa Vaddagiri
2020-04-30 10:02 ` [RFC/PATCH 1/1] virtio: Introduce MMIO ops Srivatsa Vaddagiri
2020-04-30 10:14   ` Will Deacon
2020-04-30 10:34     ` Srivatsa Vaddagiri
2020-04-30 10:41       ` Will Deacon
2020-04-30 11:11         ` Srivatsa Vaddagiri
2020-04-30 12:59           ` Jan Kiszka
2020-04-30 13:33             ` Srivatsa Vaddagiri [this message]
2020-04-30 19:34               ` Michael S. Tsirkin
2020-04-30 10:07 ` [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Michael S. Tsirkin
2020-04-30 10:40   ` Srivatsa Vaddagiri
2020-04-30 10:56   ` Jason Wang
2020-04-30 10:08 ` Will Deacon
2020-04-30 10:29   ` Srivatsa Vaddagiri
2020-04-30 10:39     ` Will Deacon
2020-04-30 11:02       ` Srivatsa Vaddagiri
