* [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Stefan Hajnoczi @ 2018-11-26 11:16 UTC (permalink / raw)
To: Dr. David Alan Gilbert; +Cc: virtio-comment, Michael S. Tsirkin, kraxel
On Fri, Sep 21, 2018 at 10:54:59AM +0100, Dr. David Alan Gilbert wrote:
> Hi,
> We've got an experimental virtio device (using vhost-user) we're playing with
> that would like to share multiple large mappings from the client back to qemu.
CCing Michael Tsirkin and Gerd Hoffmann. Gerd could use this for
virtio-gpu, where some memory must be owned by the host.
>
> 'virtio_pci_cap' only has 32bit offset and length fields and so
> I've got a different capability to express larger regions:
>
>
> /* Additional shared memory capability */
> #define VIRTIO_PCI_CAP_SHARED_MEMORY_CFG 8
>
> struct virtio_pci_shm_cap {
>         struct virtio_pci_cap cap;
>         le32 offset_hi;         /* Most significant 32 bits of offset */
>         le32 length_hi;         /* Most significant 32 bits of length */
>         u8 id;                  /* To distinguish shm chunks */
> };
>
> One oddity is that I'm allowing multiple instances of this capability
> on one device, distinguished by their 'id' field which I've made device
> type specific, e.g.:
>
> #define VIRTIO_MYDEV_PCI_SHMCAP_ID_CACHE 0
> #define VIRTIO_MYDEV_PCI_SHMCAP_ID_JOURNAL 1
>
> Having more than one capability of the same type is a little different
> from what the virtio-spec currently suggests; it currently talks
> about 'order of preference suggested by the device' - i.e. two
> capabilities where the user picks one; in this case I'm using
> all of the instances for different things.
>
> If this makes sense to people, I'd be happy to write this up into a more
> formal change; if it doesn't I'd be interested to hear about alternative
> suggestions.
Another option is to define a unique PCI capability for each type:
struct virtio_pci_shm_cap {
        struct virtio_pci_cap cap;
        le32 offset_hi;         /* Most significant 32 bits of offset */
        le32 length_hi;         /* Most significant 32 bits of length */
};
#define VIRTIO_PCI_CAP_MYDEV_SHM_CACHE 8
#define VIRTIO_PCI_CAP_MYDEV_SHM_JOURNAL 9
This way the "order of preference" semantics can be kept. The drawback
is consuming more PCI capability numbers.
Stefan
* [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Dr. David Alan Gilbert @ 2018-11-26 14:03 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Stefan Hajnoczi, virtio-comment, Michael S. Tsirkin
* Gerd Hoffmann (kraxel@redhat.com) wrote:
> On Mon, Nov 26, 2018 at 11:16:12AM +0000, Stefan Hajnoczi wrote:
> > On Fri, Sep 21, 2018 at 10:54:59AM +0100, Dr. David Alan Gilbert wrote:
> > > Hi,
> > > We've got an experimental virtio device (using vhost-user) we're playing with
> > > that would like to share multiple large mappings from the client back to qemu.
> >
> > CCing Michael Tsirkin and Gerd Hoffmann. Gerd could use this for
> > virtio-gpu, where some memory must be owned by the host.
>
> Yep. For virtio-gpu I want to be able to map host gpu resources (which
> must be allocated by the host gpu driver) into the guest address space.
>
> > > 'virtio_pci_cap' only has 32bit offset and length fields and so
> > > I've got a different capability to express larger regions:
> > >
> > >
> > > /* Additional shared memory capability */
> > > #define VIRTIO_PCI_CAP_SHARED_MEMORY_CFG 8
> > >
> > > struct virtio_pci_shm_cap {
> > >         struct virtio_pci_cap cap;
> > >         le32 offset_hi;         /* Most significant 32 bits of offset */
> > >         le32 length_hi;         /* Most significant 32 bits of length */
> > >         u8 id;                  /* To distinguish shm chunks */
> > > };
> > >
> > > One oddity is that I'm allowing multiple instances of this capability
> > > on one device, distinguished by their 'id' field which I've made device
> > > type specific, e.g.:
> > >
> > > #define VIRTIO_MYDEV_PCI_SHMCAP_ID_CACHE 0
> > > #define VIRTIO_MYDEV_PCI_SHMCAP_ID_JOURNAL 1
>
> For my experimental virtio-gpu code I use one pci bar to reserve address
> space. It is a separate pci bar: first, because it is a 64-bit bar, and
> second, because it is declared as prefetchable (unlike the mmio bar,
> which is not). I also simply use the whole bar, so no offset/length is
> needed.
>
> gpu resources are sub-regions within that pci bar, and they are managed
> using device-specific commands.
>
> So, I'm wondering whether it makes sense to just do the same for your
> device. Just use one pci bar as a shared memory umbrella, specify that
> one using the virtio vendor cap, then have sub-regions within that bar
> for the various regions you have. Manage them dynamically (using
> device-specific virtio commands) or just have a static configuration (in
> device-specific config space).
Ours are static subdivisions, so it felt easier to declare them; it's a
shame to make that device-specific.
> That avoids the problem with multiple capabilities of the same kind, and
> it also avoids exhausting the cap IDs quickly if every device defines
> their own VIRTIO_FOO_DEVICE_PCI_SHMCAP_ID_BAR_REGION.
Is having multiple capabilities of the same type actually a problem, or
is it just historical in the definition of virtio?
Dave
> cheers,
> Gerd
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
This publicly archived list offers a means to provide input to the
OASIS Virtual I/O Device (VIRTIO) TC.
In order to verify user consent to the Feedback License terms and
to minimize spam in the list archive, subscription is required
before posting.
Subscribe: virtio-comment-subscribe@lists.oasis-open.org
Unsubscribe: virtio-comment-unsubscribe@lists.oasis-open.org
List help: virtio-comment-help@lists.oasis-open.org
List archive: https://lists.oasis-open.org/archives/virtio-comment/
Feedback License: https://www.oasis-open.org/who/ipr/feedback_license.pdf
List Guidelines: https://www.oasis-open.org/policies-guidelines/mailing-lists
Committee: https://www.oasis-open.org/committees/virtio/
Join OASIS: https://www.oasis-open.org/join/
* [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Stefan Hajnoczi @ 2018-11-27 9:27 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Dr. David Alan Gilbert, virtio-comment, Michael S. Tsirkin
On Mon, Nov 26, 2018 at 03:51:45PM +0100, Gerd Hoffmann wrote:
> Hi,
>
> > > So, I'm wondering whether it makes sense to just do the same for your
> > > device. Just use one pci bar as a shared memory umbrella, specify that
> > > one using the virtio vendor cap, then have sub-regions within that bar
> > > for the various regions you have. Manage them dynamically (using
> > > device-specific virtio commands) or just have a static configuration (in
> > > device-specific config space).
> >
> > Ours are static subdivisions; so it felt easier to declare them; it's a
> > shame to make that device specific.
>
> Shared memory handling is device specific anyway, so I fail to see why
> this is a problem. Or do you want to place virtio queues there (which
> could be common ground for multiple device types)?
>
> > > That avoids the problem with multiple capabilities of the same kind, and
> > > it also avoids exhausting the cap IDs quickly if every device defines
> > > their own VIRTIO_FOO_DEVICE_PCI_SHMCAP_ID_BAR_REGION.
> >
> > Is having multiple capabilities of the same type actually a problem, or
> > is it just historical in the definition of virtio?
>
> I think the reason is that you can in theory have the same region twice,
> once in an IO bar and once in an MMIO bar, and then the guest could
> prefer the IO bar if possible and use the MMIO bar otherwise (PCIe slot
> without IO address window for example). I think that was never actually
> done in practice, and for (prefetchable) memory bars it doesn't make
> sense at all. So that is unlikely to be a problem in practice.
>
> Running out of capability IDs could become a real problem though.
If we use a dedicated prefetchable MMIO BAR for shared memory resources,
then that BAR is unavailable for anything else, for example if the
virtio-pci transport itself ever wants to use a prefetchable MMIO BAR
too.

I would keep the indirection offered by PCI capabilities so that devices
retain the ability to decide exactly how BARs are used without requiring
spec or driver changes.
Stefan
* [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Michael S. Tsirkin @ 2018-11-27 15:10 UTC (permalink / raw)
To: Gerd Hoffmann; +Cc: Dr. David Alan Gilbert, Stefan Hajnoczi, virtio-comment
On Mon, Nov 26, 2018 at 03:51:45PM +0100, Gerd Hoffmann wrote:
> Hi,
>
> > > So, I'm wondering whether it makes sense to just do the same for your
> > > device. Just use one pci bar as a shared memory umbrella, specify that
> > > one using the virtio vendor cap, then have sub-regions within that bar
> > > for the various regions you have. Manage them dynamically (using
> > > device-specific virtio commands) or just have a static configuration (in
> > > device-specific config space).
> >
> > Ours are static subdivisions; so it felt easier to declare them; it's a
> > shame to make that device specific.
>
> Shared memory handling is device specific anyway, so I fail to see why
> this is a problem. Or do you want to place virtio queues there (which
> could be common ground for multiple device types)?
>
> > > That avoids the problem with multiple capabilities of the same kind, and
> > > it also avoids exhausting the cap IDs quickly if every device defines
> > > their own VIRTIO_FOO_DEVICE_PCI_SHMCAP_ID_BAR_REGION.
> >
> > Is having multiple capabilities of the same type actually a problem, or
> > is it just historical in the definition of virtio?
>
> I think the reason is that you can in theory have the same region twice,
> once in an IO bar and once in an MMIO bar, and then the guest could
> prefer the IO bar if possible and use the MMIO bar otherwise (PCIe slot
> without IO address window for example). I think that was never actually
> done in practice,
There's an option to enable that for AMD CPUs, where MMIO
faults are slower than on Intel CPUs.
> and for (prefetchable) memory bars it doesn't make
> sense at all. So that is unlikely to be a problem in practice.
>
> Running out of capability IDs could become a real problem though.
>
> cheers,
> Gerd
We can always add more bits if we run out of these. There's
no real limit on capability size.
--
MST
* Re: [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Dr. David Alan Gilbert @ 2018-11-28 17:18 UTC (permalink / raw)
To: Michael S. Tsirkin; +Cc: Gerd Hoffmann, Stefan Hajnoczi, virtio-comment
* Michael S. Tsirkin (mst@redhat.com) wrote:
> On Mon, Nov 26, 2018 at 03:51:45PM +0100, Gerd Hoffmann wrote:
> > Hi,
> >
> > > > So, I'm wondering whether it makes sense to just do the same for your
> > > > device. Just use one pci bar as a shared memory umbrella, specify that
> > > > one using the virtio vendor cap, then have sub-regions within that bar
> > > > for the various regions you have. Manage them dynamically (using
> > > > device-specific virtio commands) or just have a static configuration (in
> > > > device-specific config space).
> > >
> > > Ours are static subdivisions; so it felt easier to declare them; it's a
> > > shame to make that device specific.
> >
> > Shared memory handling is device specific anyway, so I fail to see why
> > this is a problem. Or do you want to place virtio queues there (which
> > could be common ground for multiple device types)?
> >
> > > > That avoids the problem with multiple capabilities of the same kind, and
> > > > it also avoids exhausting the cap IDs quickly if every device defines
> > > > their own VIRTIO_FOO_DEVICE_PCI_SHMCAP_ID_BAR_REGION.
> > >
> > > Is having multiple capabilities of the same type actually a problem, or
> > > is it just historical in the definition of virtio?
> >
> > I think the reason is that you can in theory have the same region twice,
> > once in an IO bar and once in an MMIO bar, and then the guest could
> > prefer the IO bar if possible and use the MMIO bar otherwise (PCIe slot
> > without IO address window for example). I think that was never actually
> > done in practice,
>
> There's an option to enable that for AMD CPUs, where MMIO
> faults are slower than on Intel CPUs.
>
> > and for (prefetchable) memory bars it doesn't make
> > sense at all. So that is unlikely to be a problem in practice.
> >
> > Running out of capability IDs could become a real problem though.
> >
> > cheers,
> > Gerd
>
> We can always add more bits if we run out of these. There's
> no real limit on capability size.
I thought it was defined as 8 bits by the capability linked-list
structure in PCI?
Dave
> --
> MST
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
* Re: [virtio-comment] Re: PCI cap for larger offsets/lengths
From: Michael S. Tsirkin @ 2018-11-28 20:16 UTC (permalink / raw)
To: Dr. David Alan Gilbert; +Cc: Gerd Hoffmann, Stefan Hajnoczi, virtio-comment
On Wed, Nov 28, 2018 at 05:18:37PM +0000, Dr. David Alan Gilbert wrote:
> * Michael S. Tsirkin (mst@redhat.com) wrote:
> > On Mon, Nov 26, 2018 at 03:51:45PM +0100, Gerd Hoffmann wrote:
> > > Hi,
> > >
> > > > > So, I'm wondering whether it makes sense to just do the same for your
> > > > > device. Just use one pci bar as a shared memory umbrella, specify that
> > > > > one using the virtio vendor cap, then have sub-regions within that bar
> > > > > for the various regions you have. Manage them dynamically (using
> > > > > device-specific virtio commands) or just have a static configuration (in
> > > > > device-specific config space).
> > > >
> > > > Ours are static subdivisions; so it felt easier to declare them; it's a
> > > > shame to make that device specific.
> > >
> > > Shared memory handling is device specific anyway, so I fail to see why
> > > this is a problem. Or do you want to place virtio queues there (which
> > > could be common ground for multiple device types)?
> > >
> > > > > That avoids the problem with multiple capabilities of the same kind, and
> > > > > it also avoids exhausting the cap IDs quickly if every device defines
> > > > > their own VIRTIO_FOO_DEVICE_PCI_SHMCAP_ID_BAR_REGION.
> > > >
> > > > Is having multiple capabilities of the same type actually a problem, or
> > > > is it just historical in the definition of virtio?
> > >
> > > I think the reason is that you can in theory have the same region twice,
> > > once in an IO bar and once in an MMIO bar, and then the guest could
> > > prefer the IO bar if possible and use the MMIO bar otherwise (PCIe slot
> > > without IO address window for example). I think that was never actually
> > > done in practice,
> >
> > There's an option to enable that for AMD CPUs, where MMIO
> > faults are slower than on Intel CPUs.
> >
> > > and for (prefetchable) memory bars it doesn't make
> > > sense at all. So that is unlikely to be a problem in practice.
> > >
> > > Running out of capability IDs could become a real problem though.
> > >
> > > cheers,
> > > Gerd
> >
> > We can always add more bits if we run out of these. There's
> > no real limit on capability size.
>
> I thought it was defined as 8 bits by the capability linked-list
> structure in PCI?
>
> Dave
Maybe I misunderstand. Don't you mean the u8 cfg_type field in struct
virtio_pci_cap? If so, then what I was pointing out is that if we ever
need more than 256 types, we can always have a special cfg_type value
(or values) meaning "different format" and add more types that way.
> > --
> > MST
> >
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK