* Re: Shared memory and event channel
From: Ritu kaur @ 2010-02-21 23:33 UTC
  To: Daniel Stodden; +Cc: xen-devel



Hi Daniel,

Thanks for the explanation; however, my main question is still unanswered:

"My understanding is that one has to use the xenbus mechanism (registering
and monitoring for device creation) to set up shared memory rings and an
event channel between domains, and that there is no other way to do it."

All I need is to set up NIC register reads and writes from domUs (ioctl is
one such application, which I have been discussing in another thread), and
to implement this I considered using shared memory rings. If the answer to
the above question is "yes", then I will probably not take that route.

It would be really helpful if you could elaborate on "why not just write an
auxiliary driver, adding only the new functionality but remaining separate
from the base networking stack".

Thanks


On Sun, Feb 21, 2010 at 1:19 PM, Daniel Stodden
<daniel.stodden@citrix.com> wrote:

> On Sun, 2010-02-21 at 13:58 -0500, Ritu kaur wrote:
> > Hi,
> >
> > This is related to my other thread (ioctls), but I thought this subject
> > mandates a separate thread by itself. Below is my understanding; inputs
> > and corrections will be very helpful.
> >
> > 1. in order to set up shared memory rings and event channels, frontend
> > (running in domU) and backend (running in dom0) drivers are required.
>
> Yes, and device instances come in pairs.
>
> > 2. these drivers register with xenbus and monitor for device creation
> > in xenstored.
>
> Yes. The backend device is created as soon as node <domid>/<devid> in
> backend/<type> is created, resulting in a .probe event on the respective
> driver. Frontend device creation works equivalently.
>
> > 3. when devices are created, xenbus invokes backend/frontend probe
> > functions, which then trigger the xenbus state machine.
>
> Yes. The "state" field on either end drives frontend/backend connection
> setup and teardown. These are the "otherend_changed" callbacks in the
> xenbus drivers.
>
> > 4. before the xenbus state machine reaches the connected state, shared
> > memory and event channels are set up and can be accessed using
> > hypervisor calls.
>
> Right. You will find two grant references for the I/O rings, one each
> for RX and TX. This memory is allocated by domU and
> granted to the 'otherend' (=backend) domain. The grant reference is an
> index into a table maintained by domU, which contains the sharing
> permissions.
>
> The other important key is the descriptor for a bidirectional
> ('interdomain') event channel. This is basically the interrupt line used
> to notify the remote end when messages are produced on either ring.
>
> > My understanding is that one has to use the xenbus mechanism
> > (registering and monitoring for device creation) to set up shared
> > memory rings and an event channel between domains, and that there is
> > no other way to do it.
> >
> > If I had to write a new driver, I would need a new device name, and my
> > driver would monitor this device via xenbus. In order to have the new
> > device supported, I would have to modify the xapi toolstack, so it
> > looks like a lot of changes have to be made to support this. I wish to
> > be wrong here. If there is an alternate mechanism to do it, I would
> > like to know. Inputs much appreciated.
>
> Why do you need a different driver? Essentially: Why aren't your network
> frontends happy with the existing abstractions? What exactly is the
> functionality you want to add?
>
> Collecting statistics or low-level DMA setup, as you mentioned, sounds a
> lot like details better kept in dom0. Why would domU have to bother? It
> shouldn't even be allowed to do anything about it.
>
> Even assuming it's a good idea to add these calls:
>
> Why would you need to reinvent the entire networking wheel? Why not
> just write an auxiliary driver, adding only the new functionality but
> remaining separate from the base networking stack?
>
> Cheers,
> Daniel
>
>
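A minimal sketch of the frontend-side setup that steps 1-4 above describe,
in the style of the Linux PV drivers. Everything named "demo" is a
hypothetical device; the ring macros, grant-table and xenbus calls are the
standard ones (header paths vary between the classic XenLinux and pvops
trees):

    #include <linux/gfp.h>
    #include <xen/xenbus.h>
    #include <xen/grant_table.h>
    #include <xen/interface/io/ring.h>
    #include <asm/xen/page.h>

    struct demo_request  { unsigned int op; };
    struct demo_response { int status; };

    /* Generates struct demo_sring / struct demo_front_ring. */
    DEFINE_RING_TYPES(demo, struct demo_request, struct demo_response);

    static int demo_setup_ring(struct xenbus_device *dev,
                               struct demo_front_ring *front)
    {
        struct demo_sring *sring;
        int ring_ref, evtchn, err;

        /* Steps 1/2: the ring page is allocated by domU... */
        sring = (struct demo_sring *)get_zeroed_page(GFP_KERNEL);
        if (!sring)
            return -ENOMEM;
        SHARED_RING_INIT(sring);
        FRONT_RING_INIT(front, sring, PAGE_SIZE);

        /* ...and granted to the 'otherend' (backend) domain. */
        ring_ref = gnttab_grant_foreign_access(dev->otherend_id,
                                               virt_to_mfn(sring), 0);
        if (ring_ref < 0)
            goto fail;

        /* The bidirectional event channel: the "interrupt line". */
        err = xenbus_alloc_evtchn(dev, &evtchn);
        if (err)
            goto fail;

        /* Steps 3/4: publish the two integers; the backend's
         * otherend_changed callback reads them and maps the page. */
        xenbus_printf(XBT_NIL, dev->nodename, "ring-ref", "%d", ring_ref);
        xenbus_printf(XBT_NIL, dev->nodename, "event-channel", "%d",
                      evtchn);

        return xenbus_switch_state(dev, XenbusStateInitialised);

    fail:
        /* Error unwinding trimmed for brevity. */
        free_page((unsigned long)sring);
        return -EIO;
    }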


* Re: Shared memory and event channel
From: Daniel Stodden @ 2010-02-22  7:55 UTC
  To: Ritu kaur; +Cc: xen-devel

On Sun, 2010-02-21 at 18:33 -0500, Ritu kaur wrote:
> Hi Daniel,
> 
> Thanks for the explanation; however, my main question is still
> unanswered:
> 
> "My understanding is that one has to use the xenbus mechanism
> (registering and monitoring for device creation) to set up shared
> memory rings and an event channel between domains, and that there is
> no other way to do it."

Anything capable of passing two integers around could give you a shared
memory connection.
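
For instance, given only a (grant reference, event channel port) pair
received over any side channel, a dom0 process could attach with libxc
alone; a hedged sketch, assuming the libxc calls of that era (exact
signatures vary a little between Xen versions):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Map one page granted by 'domid' and bind its notification
     * channel, given just the two integers; no xenbus involved. */
    static void *attach_shared_page(uint32_t domid, uint32_t gref,
                                    uint32_t remote_port)
    {
        int xcg = xc_gnttab_open();
        int xce = xc_evtchn_open();
        int local_port;
        void *page;

        if (xcg < 0 || xce < 0)
            return NULL;

        page = xc_gnttab_map_grant_ref(xcg, domid, gref,
                                       PROT_READ | PROT_WRITE);
        local_port = xc_evtchn_bind_interdomain(xce, domid, remote_port);
        if (page == NULL || local_port < 0)
            return NULL;

        xc_evtchn_notify(xce, local_port);  /* kick the other end */
        return page;  /* now shared with the remote domain */
    }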

> All I need is to set up NIC register reads and writes from domUs (ioctl
> is one such application, which I have been discussing in another
> thread), and to implement this I considered using shared memory rings.
> If the answer to the above question is "yes", then I will probably not
> take that route.

You need to understand that netback and the interface corresponding to
your hardware NIC have no direct association. Netback just provides a
kernel network interface, not the hardware controller. Like any good
network citizen, it passes packet buffers around, without any assumptions
about where they go. There is not even an implicit assumption that a
physical NIC is involved anywhere at all.

There's a galaxy of layer 2/3 stuff between netback and the hardware.
Bridging, routing, NAT etc., all in different variants. For XCP it's
typically bridged. Netback won't know, because it doesn't have to.

And least of all it wants to learn about your NIC.

> It would be really helpful if you could elaborate on "why not just
> write an auxiliary driver, adding only the new functionality but
> remaining separate from the base networking stack"

You would not even have to take down the vifs to prevent domU access to
a NIC. They aren't bound to the NIC anyway.

For low-level access to the NIC, you also don't necessarily need to set
up message passing. Even if you did, none of that belongs in the PV
interface.

I'm not sure right now how easy the control plane in XCP will make it to
do this without the other domUs noticing, but maybe consider something
like:

  1. Take the physical NIC out of the virtual network.
  2. Take the driver down.
  3. Pass access to the NIC to a domU.
  4. Let domU do the unspeakable.
  5.-7. Revert 3,2,1 to normal.

This won't mess with the PV drivers. Get PCI passthrough to work for
steps 3 and 4 and you save yourself a tedious ring protocol design. If
not, consider doing the hardware programming in dom0, because there's not
much left for domU anyway.

You need a split toolstack to get the dom0 network control steps done on
behalf of domU. It might be just a scripted agent, accessible to domU via
a couple of RPCs. It could also turn out to be as simple as talking
through the primary vif, because the connection between domU and dom0
could remain unaffected.

Daniel


* Re: Shared memory and event channel
From: Ritu kaur @ 2010-02-22 17:36 UTC
  To: Daniel Stodden; +Cc: xen-devel



Hi Daniel,

Thanks once again; replies/questions inline...

On Sun, Feb 21, 2010 at 11:55 PM, Daniel Stodden
<daniel.stodden@citrix.com> wrote:

> On Sun, 2010-02-21 at 18:33 -0500, Ritu kaur wrote:
> > Hi Daniel,
> >
> > Thanks for the explanation; however, my main question is still
> > unanswered:
> >
> > "My understanding is that one has to use the xenbus mechanism
> > (registering and monitoring for device creation) to set up shared
> > memory rings and an event channel between domains, and that there is
> > no other way to do it."
>
> Anything capable of passing two integers around could give you a shared
> memory connection.
>

I want to know more about "anything capable". I have read documents from
xen.org, and so far my understanding is that the only mechanism for
setting up shared memory rings is via xenbus (since PV drivers are the
only users currently). Pointers or example code using an alternate
mechanism would be helpful.
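
One alternate route, as a hedged sketch assuming a pvops domU kernel (the
grant-table helper, the EVTCHNOP hypercall and the event-channel IRQ
binding are standard; everything named "demo" is hypothetical): a domU
module can grant a page and allocate an unbound channel itself, then hand
the two integers to dom0 over any transport, e.g. a socket through the
primary vif:

    #include <linux/module.h>
    #include <linux/gfp.h>
    #include <linux/interrupt.h>
    #include <xen/grant_table.h>
    #include <xen/events.h>
    #include <xen/interface/event_channel.h>
    #include <asm/xen/hypercall.h>
    #include <asm/xen/page.h>

    static void *demo_page;

    static irqreturn_t demo_interrupt(int irq, void *dev_id)
    {
        /* dom0 signalled the channel: new data is in demo_page. */
        return IRQ_HANDLED;
    }

    static int __init demo_init(void)
    {
        struct evtchn_alloc_unbound alloc = {
            .dom = DOMID_SELF, .remote_dom = 0 /* dom0 */
        };
        int gref, err;

        demo_page = (void *)get_zeroed_page(GFP_KERNEL);
        if (!demo_page)
            return -ENOMEM;

        /* Integer #1: a grant reference dom0 can map. */
        gref = gnttab_grant_foreign_access(0, virt_to_mfn(demo_page), 0);
        if (gref < 0)
            return gref;

        /* Integer #2: an unbound interdomain event channel. */
        err = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &alloc);
        if (err)
            return err;
        bind_evtchn_to_irqhandler(alloc.port, demo_interrupt, 0,
                                  "demo", NULL);

        /* Hand (gref, alloc.port) to dom0 over any transport at all. */
        printk(KERN_INFO "demo: gref=%d port=%d\n", gref, alloc.port);
        return 0;
    }
    module_init(demo_init);

    MODULE_LICENSE("GPL");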


> > All I need is to set up NIC register reads and writes from domUs
> > (ioctl is one such application, which I have been discussing in
> > another thread), and to implement this I considered using shared
> > memory rings. If the answer to the above question is "yes", then I
> > will probably not take that route.
>
> You need to understand that netback and the interface corresponding to
> your hardware NIC have no direct association. Netback just provides a
> kernel network interface, not the hardware controller. Like any good
> network citizen, it passes packet buffers around, without any
> assumptions about where they go. There is not even an implicit
> assumption that a physical NIC is involved anywhere at all.
>
> There's a galaxy of layer 2/3 stuff between netback and the hardware.
> Bridging, routing, NAT etc., all in different variants. For XCP it's
> typically bridged. Netback won't know, because it doesn't have to.
>
> And least of all it wants to learn about your NIC.
>
> > It would be really helpful if you could elaborate on "why not just
> > write an auxiliary driver, adding only the new functionality but
> > remaining separate from the base networking stack"
>
> You would not even have to take down the vifs to prevent domU access to
> a NIC. They aren't bound to the NIC anyway.
>
> For low-level access to the NIC, you also don't necessarily need to set
> up message passing. Even if you did, none of that belongs in the PV
> interface.
>
> I'm not sure right now how easy the control plane in XCP will make it
> to do this without the other domUs noticing, but maybe consider
> something like:
>
>  1. Take the physical NIC out of the virtual network.
>  2. Take the driver down.
>  3. Pass access to the NIC to a domU.
>  4. Let domU do the unspeakable.
>  5.-7. Revert 3,2,1 to normal.
>
> This won't mess with the PV drivers. Get PCI passthrough to work for
> steps 3 and 4 and you save yourself a tedious ring protocol design. If
> not, consider doing the hardware programming in dom0, because there's
> not much left for domU anyway.
>
> You need a split toolstack to get the dom0 network control steps done
> on behalf of domU. It might be just a scripted agent, accessible to
> domU via a couple of RPCs. It could also turn out to be as simple as
> talking through the primary vif, because the connection between domU
> and dom0 could remain unaffected.
>
>
PCI passthrough is done via config changes with no code changes; if
that's the case, I am not sure how it would solve multiple domU accesses.
For the second paragraph, do you have recommended readings? Frankly, I
don't completely understand the solution; any pointers appreciated.

In addition, the NIC registers are memory mapped (ioremap is used, and in
the ioctls memcpy_toio and memcpy_fromio are used to write/read
registers), and I wanted to know if it is possible to map memory from
dom0 into domUs. I haven't looked into the details of the issues that
will come up with mapping, but thought of checking. ioctl is one
application which uses register reads/writes, and other modules (in the
kernel, I believe) are being developed which need register read/write
functionality as well. The access pattern in question is sketched below.
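
A hedged sketch of the dom0-side access pattern being described (the BAR
index and offsets are illustrative only):

    #include <linux/pci.h>
    #include <linux/io.h>

    static void __iomem *regs;

    static int nic_map_regs(struct pci_dev *pdev)
    {
        /* Map BAR 0 of the NIC into kernel virtual address space. */
        regs = ioremap(pci_resource_start(pdev, 0),
                       pci_resource_len(pdev, 0));
        return regs ? 0 : -ENOMEM;
    }

    static void nic_read_block(void *dst, unsigned long off, size_t len)
    {
        memcpy_fromio(dst, regs + off, len);   /* the ioctl read path */
    }

    static void nic_write_block(const void *src, unsigned long off,
                                size_t len)
    {
        memcpy_toio(regs + off, src, len);     /* the ioctl write path */
    }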

Thanks

> Daniel


* Re: Shared memory and event channel
From: Daniel Stodden @ 2010-02-22 21:34 UTC
  To: Ritu kaur; +Cc: xen-devel

On Mon, 2010-02-22 at 12:36 -0500, Ritu kaur wrote:

>         
>         I'm not sure right now how easy the control plane in XCP will
>         make it to do this without the other domUs noticing, but maybe
>         consider something like:
>         
>          1. Take the physical NIC out of the virtual network.
>          2. Take the driver down.
>          3. Pass access to the NIC to a domU.
>          4. Let domU do the unspeakable.
>          5.-7. Revert 3,2,1 to normal.
>         
>         This won't mess with the PV drivers. Get PCI passthrough to
>         work for steps 3 and 4 and you save yourself a tedious ring
>         protocol design. If not, consider doing the hardware
>         programming in dom0, because there's not much left for domU
>         anyway.
>         
>         You need a split toolstack to get the dom0 network control
>         steps done on behalf of domU. It might be just a scripted
>         agent, accessible to domU via a couple of RPCs. It could also
>         turn out to be as simple as talking through the primary vif,
>         because the connection between domU and dom0 could remain
>         unaffected.
>         
>         
> 
> PCI passthrough is done via config changes with no code changes; if
> that's the case, I am not sure how it would solve multiple domU
> accesses.

My understanding after catching up a little on the past of this thread
was that you want the network controller in some maintenance mode. Is
this correct?

To get it there you will need to temporarily remove it from the virtual
network topology.

The PCI passthrough mode might solve your second problem, which is how
the domU is supposed to access the device once it's been pulled off the
data path.

> For the second paragraph, do you have recommended readings? Frankly, I
> don't completely understand the solution; any pointers appreciated.

> In addition, the NIC registers are memory mapped (ioremap is used, and
> in the ioctls memcpy_toio and memcpy_fromio are used to write/read
> registers), and I wanted to know if it is possible to map memory from
> dom0 into domUs.

Yes. This is the third problem, which is how to program a device. I'd
recommend "Linux Device Drivers" on that subject. There are also free
books like http://tldp.org/LDP/tlk/tlk-title.html. The examples are
likely outdated, but the concepts remain.

If the device is memory mapped, it doesn't mean it's in memory. It means
it's in the machine memory address space. The difference should become
clear once you're done with understanding your driver.

Is this the reason why you are so concerned about the memory sharing
mechanism? The good news is that now you won't need to bother; that's
only for memory. :)

Daniel


* Re: Shared memory and event channel
From: Ritu kaur @ 2010-02-22 22:16 UTC
  To: Daniel Stodden; +Cc: xen-devel



Hi Daniel,

Please see inline...

On Mon, Feb 22, 2010 at 1:34 PM, Daniel Stodden
<daniel.stodden@citrix.com> wrote:

> On Mon, 2010-02-22 at 12:36 -0500, Ritu kaur wrote:
>
> >
> >         I'm not sure right now how easy the control plane in XCP
> >         will make it to do this without the other domUs noticing,
> >         but maybe consider something like:
> >
> >          1. Take the physical NIC out of the virtual network.
> >          2. Take the driver down.
> >          3. Pass access to the NIC to a domU.
> >          4. Let domU do the unspeakable.
> >          5.-7. Revert 3,2,1 to normal.
> >
> >         This won't mess with the PV drivers. Get PCI passthrough to
> >         work for steps 3 and 4 and you save yourself a tedious ring
> >         protocol design. If not, consider doing the hardware
> >         programming in dom0, because there's not much left for domU
> >         anyway.
> >
> >         You need a split toolstack to get the dom0 network control
> >         steps done on behalf of domU. It might be just a scripted
> >         agent, accessible to domU via a couple of RPCs. It could
> >         also turn out to be as simple as talking through the primary
> >         vif, because the connection between domU and dom0 could
> >         remain unaffected.
> >
> >
> >
> > PCI passthrough is done via config changes with no code changes; if
> > that's the case, I am not sure how it would solve multiple domU
> > accesses.
>
> My understanding after catching up a little on the past of this thread
> was that you want the network controller in some maintenance mode. Is
> this correct?
>

All I need is to access NIC registers from domUs (the network controller
will still be working normally). Using PCI passthrough solves the problem
for one domU; however, it doesn't solve the case of multiple domUs
wanting to read NIC registers (e.g. statistics).

>
> To get it there you will need to temporarily remove it from the virtual
> network topology.
>
> The PCI passthrough mode might solve your second problem, which is how
> the domU is supposed to access the device once it's been pulled off the
> data path.
>

> > For the second paragraph, do you have recommended readings? Frankly,
> > I don't completely understand the solution; any pointers appreciated.
>
> > In addition, the NIC registers are memory mapped (ioremap is used,
> > and in the ioctls memcpy_toio and memcpy_fromio are used to
> > write/read registers), and I wanted to know if it is possible to map
> > memory from dom0 into domUs.
>
> Yes. This is the third problem, which is how to program a device. I'd
> recommend "Linux Device Drivers" on that subject. There are also free
> books like http://tldp.org/LDP/tlk/tlk-title.html. The examples are
> likely outdated, but the concepts remain.
>
> If the device is memory mapped, it doesn't mean it's in memory. It means
> it's in the machine memory address space. The difference should become
> clear once you're done with understanding your driver.
>
> Is this the reason why you are so concerned about the memory sharing
> mechanism?


No, not really. I wanted to use shared memory between domains as a
solution for multiple domU access (since PCI passthrough doesn't solve
it).

The clarification I wanted here (the NIC registers are memory mapped):
can I take the "machine memory address space" (which is in dom0) and
remap it into domUs such that I can get multiple domU access?

To summarize:

1. The PCI passthrough mechanism works for a single domU.
2. Shared memory rings between domains as a solution for multiple domU
access (not a workable solution, though).
3. Take the mapped machine address in dom0 and remap it into domUs (just
another thought, not sure it works); I wanted clarification here.

Thanks


> The good news is that now you won't need to bother; that's only
> for memory. :)
>

> Daniel


* Re: Shared memory and event channel
From: Ian Campbell @ 2010-02-23  9:38 UTC
  To: Ritu kaur; +Cc: xen-devel, Daniel Stodden

On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> 
> All I need is to access NIC registers from domUs (the network
> controller will still be working normally). Using PCI passthrough
> solves the problem for one domU; however, it doesn't solve the case of
> multiple domUs wanting to read NIC registers (e.g. statistics).

Direct access to hardware registers and availability of the device to
multiple guest domains are mutually exclusive configurations under Xen
(in the absence of additional technologies such as SR-IOV).

The paravirtual front and back devices contain no hardware-specific
functionality; in this configuration, all hardware-specific knowledge is
contained in the driver in domain 0. Guests use regular L2 or L3
mechanisms such as bridging, NAT or routing to obtain a path to the
physical hardware, but they are never aware of that physical hardware.

PCI passthrough allows a guest direct access to a PCI device, but this is
obviously incompatible with access from multiple guests (again, unless
you have SR-IOV or something similar).

Ian.


* Re: Shared memory and event channel
From: Konrad Rzeszutek Wilk @ 2010-02-23 14:47 UTC
  To: Ian Campbell; +Cc: Ritu kaur, xen-devel, Daniel Stodden

On Tue, Feb 23, 2010 at 09:38:26AM +0000, Ian Campbell wrote:
> On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> > 
> > All I need is to access NIC registers from domUs (the network
> > controller will still be working normally). Using PCI passthrough
> > solves the problem for one domU; however, it doesn't solve the case
> > of multiple domUs wanting to read NIC registers (e.g. statistics).
> 
> Direct access to hardware registers and availability of the device to
> multiple guest domains are mutually exclusive configurations under Xen
> (in the absence of additional technologies such as SR-IOV).
> 
> The paravirtual front and back devices contain no hardware-specific
> functionality; in this configuration, all hardware-specific knowledge
> is contained in the driver in domain 0. Guests use regular L2 or L3
> mechanisms such as bridging, NAT or routing to obtain a path to the
> physical hardware, but they are never aware of that physical hardware.
> 
> PCI passthrough allows a guest direct access to a PCI device, but this
> is obviously incompatible with access from multiple guests (again,
> unless you have SR-IOV or something similar).

What if netback was set up to be able to work in guest mode? That way
you could export it out to the guests.


* Re: Shared memory and event channel
From: Ian Campbell @ 2010-02-23 15:42 UTC
  To: Konrad Rzeszutek Wilk; +Cc: Ritu kaur, xen-devel, Daniel Stodden

On Tue, 2010-02-23 at 14:47 +0000, Konrad Rzeszutek Wilk wrote:
> On Tue, Feb 23, 2010 at 09:38:26AM +0000, Ian Campbell wrote:
> > On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> > > 
> > > All I need is to access NIC registers from domUs (the network
> > > controller will still be working normally). Using PCI passthrough
> > > solves the problem for one domU; however, it doesn't solve the case
> > > of multiple domUs wanting to read NIC registers (e.g. statistics).
> > 
> > Direct access to hardware registers and availability of the device to
> > multiple guest domains are mutually exclusive configurations under Xen
> > (in the absence of additional technologies such as SR-IOV).
> > 
> > The paravirtual front and back devices contain no hardware-specific
> > functionality; in this configuration, all hardware-specific knowledge
> > is contained in the driver in domain 0. Guests use regular L2 or L3
> > mechanisms such as bridging, NAT or routing to obtain a path to the
> > physical hardware, but they are never aware of that physical hardware.
> > 
> > PCI passthrough allows a guest direct access to a PCI device, but
> > this is obviously incompatible with access from multiple guests
> > (again, unless you have SR-IOV or something similar).
> 
> What if netback was set up to be able to work in guest mode? That way
> you could export it out to the guests.

Like a driver domain model? That would work (I think), but it is still
not the same as having multiple domains with access to the physical
registers. Netback in a guest works in exactly the same way as it does
for domain 0.

Ian.


* Re: Shared memory and event channel
From: Ritu kaur @ 2010-02-23 15:53 UTC
  To: Ian Campbell; +Cc: xen-devel, Daniel Stodden, Konrad Rzeszutek Wilk



Hi Ian,

Thanks for your inputs. I skimmed through the Intel 82576 SR-IOV
document, and it looks like it needs hardware support, which I don't
think our hardware has (I will double-check with our team). I believe
there is currently no good solution other than using PCI passthrough
(with single-domU access). I just want to bring up one thing, and I hope
it was not missed from my earlier email, i.e.

"The NIC registers are memory mapped, can I take "machine memory address
space(which is in dom0)" and remap it to domU's such that I can get multiple
domU access. "

The above solution is just a thought; I'm not sure it's feasible.

Thanks


On Tue, Feb 23, 2010 at 7:42 AM, Ian Campbell
<Ian.Campbell@citrix.com> wrote:

> On Tue, 2010-02-23 at 14:47 +0000, Konrad Rzeszutek Wilk wrote:
> > On Tue, Feb 23, 2010 at 09:38:26AM +0000, Ian Campbell wrote:
> > > On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> > > >
> > > > All I need is to access NIC registers from domUs (the network
> > > > controller will still be working normally). Using PCI passthrough
> > > > solves the problem for one domU; however, it doesn't solve the
> > > > case of multiple domUs wanting to read NIC registers (e.g.
> > > > statistics).
> > >
> > > Direct access to hardware registers and availability of the device to
> > > multiple guest domains are mutually exclusive configurations under Xen
> > > (in the absence of additional technologies such as SR-IOV).
> > >
> > > The paravirtual front and back devices contain no hardware-specific
> > > functionality; in this configuration, all hardware-specific
> > > knowledge is contained in the driver in domain 0. Guests use regular
> > > L2 or L3 mechanisms such as bridging, NAT or routing to obtain a
> > > path to the physical hardware, but they are never aware of that
> > > physical hardware.
> > >
> > > PCI passthrough allows a guest direct access to a PCI device, but
> > > this is obviously incompatible with access from multiple guests
> > > (again, unless you have SR-IOV or something similar).
> >
> > What if netback was set up to be able to work in guest mode? That
> > way you could export it out to the guests.
>
> Like a driver domain model? That would work (I think), but it is still
> not the same as having multiple domains with access to the physical
> registers. Netback in a guest works in exactly the same way as it does
> for domain 0.
>
> Ian.


* RE: Shared memory and event channel
From: djmagee @ 2010-02-23 17:42 UTC
  To: Ritu kaur, Ian Campbell; +Cc: xen-devel, Konrad Rzeszutek Wilk, Daniel Stodden



What is the data you're trying to access in the device registers? If
it's statistics, which you gave as an example, then why would a domain
want to read statistics for a card that is shared by many other guests,
of which it has no knowledge? In fact, I'm struggling to think of any
situation where data applicable to the physical card that's carrying
packets for every guest on the box could be usable by one single guest.

 

Can't you just write a daemon in dom0 that reads the data you're
interested in and makes it available to the domUs via a simple network
service?
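
A minimal sketch of that idea's dom0 side, assuming a Linux dom0 and the
standard ethtool ioctl; "eth0" is a placeholder, and serving the buffer
to domUs over a socket is omitted as routine:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct ifreq ifr;
        struct ethtool_drvinfo drv = { .cmd = ETHTOOL_GDRVINFO };
        struct ethtool_stats *stats;
        unsigned int i;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

        /* Ask the driver how many statistics it exports... */
        ifr.ifr_data = (char *)&drv;
        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return 1;

        /* ...then fetch them all in one ioctl. */
        stats = calloc(1, sizeof(*stats) + drv.n_stats * sizeof(__u64));
        if (!stats)
            return 1;
        stats->cmd = ETHTOOL_GSTATS;
        stats->n_stats = drv.n_stats;
        ifr.ifr_data = (char *)stats;
        if (ioctl(fd, SIOCETHTOOL, &ifr) < 0)
            return 1;

        for (i = 0; i < stats->n_stats; i++)
            printf("stat[%u] = %llu\n", i,
                   (unsigned long long)stats->data[i]);

        /* From here, answer domU requests over any ordinary socket. */
        free(stats);
        close(fd);
        return 0;
    }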

 

From: xen-devel-bounces@lists.xensource.com
[mailto:xen-devel-bounces@lists.xensource.com] On Behalf Of Ritu kaur
Sent: Tuesday, February 23, 2010 10:53 AM
To: Ian Campbell
Cc: xen-devel@lists.xensource.com; Daniel Stodden; Konrad Rzeszutek Wilk
Subject: Re: [Xen-devel] Shared memory and event channel

 

Hi Ian,

Thanks for your inputs. I skimmed through the Intel 82576 SR-IOV
document, and it looks like it needs hardware support, which I don't
think our hardware has (I will double-check with our team). I believe
there is currently no good solution other than using PCI passthrough
(with single-domU access). I just want to bring up one thing, and I hope
it was not missed from my earlier email, i.e.

"The NIC registers are memory mapped; can I take the "machine memory
address space" (which is in dom0) and remap it into domUs such that I can
get multiple domU access?"

The above solution is just a thought; I'm not sure it's feasible.

Thanks



On Tue, Feb 23, 2010 at 7:42 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

On Tue, 2010-02-23 at 14:47 +0000, Konrad Rzeszutek Wilk wrote:
> On Tue, Feb 23, 2010 at 09:38:26AM +0000, Ian Campbell wrote:
> > On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> > >
> > > All I need is to access NIC registers from domUs (the network
> > > controller will still be working normally). Using PCI passthrough
> > > solves the problem for one domU; however, it doesn't solve the case
> > > of multiple domUs wanting to read NIC registers (e.g. statistics).
> >
> > Direct access to hardware registers and availability of the device to
> > multiple guest domains are mutually exclusive configurations under
> > Xen (in the absence of additional technologies such as SR-IOV).
> >
> > The paravirtual front and back devices contain no hardware-specific
> > functionality; in this configuration, all hardware-specific knowledge
> > is contained in the driver in domain 0. Guests use regular L2 or L3
> > mechanisms such as bridging, NAT or routing to obtain a path to the
> > physical hardware, but they are never aware of that physical
> > hardware.
> >
> > PCI passthrough allows a guest direct access to a PCI device, but
> > this is obviously incompatible with access from multiple guests
> > (again, unless you have SR-IOV or something similar).
>
> What if netback was set up to be able to work in guest mode? That way
> you could export it out to the guests.

Like a driver domain model? That would work (I think), but it is still
not the same as having multiple domains with access to the physical
registers. Netback in a guest works in exactly the same way as it does
for domain 0.

Ian.

 



* Re: Shared memory and event channel
From: Ritu kaur @ 2010-02-23 19:26 UTC
  To: djmagee; +Cc: xen-devel, Ian Campbell, Konrad Rzeszutek Wilk, Daniel Stodden



Hi,

I have tried a proxy client and server model, with the server running in
dom0 and the client in domU. It intercepts ioctls, passes them to dom0,
and is able to read registers. This is done via socket calls. However,
Citrix doesn't allow socket connections into dom0, and I had to tweak the
firewall settings (basically, I cleared everything for testing purposes).

I need a clarification: PCI passthrough, I believe, removes access to
the device from dom0 and attaches the device to a domU, and from then on
the device can only be accessed via that domU. Or is it possible to have
both dom0 and a single domU gain access to the device using PCI
passthrough? I guess not, but thought of checking.

Thanks


On Tue, Feb 23, 2010 at 9:42 AM, <djmagee@mageenet.net> wrote:

> What is the data you’re trying to access in the device registers? If
> it’s statistics, which you gave as an example, then why would a domain
> want to read statistics for a card that is shared by many other guests,
> of which it has no knowledge? In fact, I’m struggling to think of any
> situation where data applicable to the physical card that’s carrying
> packets for every guest on the box could be usable by one single guest.
>
>
>
> Can’t you just write a daemon in dom0 that reads the data you’re interested
> in and makes it available to the domUs via a simple network service?
>
>
>
> *From:* xen-devel-bounces@lists.xensource.com [mailto:
> xen-devel-bounces@lists.xensource.com] *On Behalf Of *Ritu kaur
> *Sent:* Tuesday, February 23, 2010 10:53 AM
> *To:* Ian Campbell
> *Cc:* xen-devel@lists.xensource.com; Daniel Stodden; Konrad Rzeszutek Wilk
> *Subject:* Re: [Xen-devel] Shared memory and event channel
>
>
>
> Hi Ian,
>
> Thanks for your inputs. I skimmed through the Intel 82576 SR-IOV
> document, and it looks like it needs hardware support, which I don't
> think our hardware has (I will double-check with our team). I believe
> there is currently no good solution other than using PCI passthrough
> (with single-domU access). I just want to bring up one thing, and I
> hope it was not missed from my earlier email, i.e.
>
> "The NIC registers are memory mapped; can I take the "machine memory
> address space" (which is in dom0) and remap it into domUs such that I
> can get multiple domU access?"
>
> The above solution is just a thought; I'm not sure it's feasible.
>
> Thanks
>
>  On Tue, Feb 23, 2010 at 7:42 AM, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
>
> On Tue, 2010-02-23 at 14:47 +0000, Konrad Rzeszutek Wilk wrote:
> > On Tue, Feb 23, 2010 at 09:38:26AM +0000, Ian Campbell wrote:
> > > On Mon, 2010-02-22 at 22:16 +0000, Ritu kaur wrote:
> > > >
> > > > All I need is to access NIC registers from domUs (the network
> > > > controller will still be working normally). Using PCI passthrough
> > > > solves the problem for one domU; however, it doesn't solve the
> > > > case of multiple domUs wanting to read NIC registers (e.g.
> > > > statistics).
> > >
> > > Direct access to hardware registers and availability of the device to
> > > multiple guest domains are mutually exclusive configurations under Xen
> > > (in the absence of additional technologies such as SR-IOV).
> > >
> > > The paravirtual front and back devices contain no hardware-specific
> > > functionality; in this configuration, all hardware-specific
> > > knowledge is contained in the driver in domain 0. Guests use regular
> > > L2 or L3 mechanisms such as bridging, NAT or routing to obtain a
> > > path to the physical hardware, but they are never aware of that
> > > physical hardware.
> > >
> > > PCI passthrough allows a guest direct access to a PCI device, but
> > > this is obviously incompatible with access from multiple guests
> > > (again, unless you have SR-IOV or something similar).
> >
> > What if netback was set up to be able to work in guest mode? That
> > way you could export it out to the guests.
>
> Like a driver domain model? That would work (I think), but it is still
> not the same as having multiple domains with access to the physical
> registers. Netback in a guest works in exactly the same way as it does
> for domain 0.
>
> Ian.


* Re: Shared memory and event channel
From: Ian Campbell @ 2010-02-24  9:38 UTC
  To: Ritu kaur; +Cc: xen-devel, djmagee, Konrad Rzeszutek Wilk, Daniel Stodden

On Tue, 2010-02-23 at 19:26 +0000, Ritu kaur wrote:

> I need a clarification: PCI passthrough, I believe, removes access to
> the device from dom0 and attaches the device to a domU, and from then
> on the device can only be accessed via that domU. Or is it possible to
> have both dom0 and a single domU gain access to the device using PCI
> passthrough? I guess not, but thought of checking.

You are correct, a given hardware device is only directly accessible to
a single domain at a time, be that dom0 or a domU via PCI passthrough.

Ian.


* Re: shared memory and event channel
From: Keir Fraser @ 2007-12-21 12:54 UTC
  To: tgh, Mark Williamson, Keir Fraser; +Cc: xen-devel, Amit Singh

It's a special-case type of inter-domain event channel in which one end is
connected to Xen itself rather than the guest. It's actually only used for
the per-vcpu ioemu event-channel that HVM guests require.

 -- Keir
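
For reference, a simplified excerpt of the structure in question, roughly
as it appeared in xen/include/xen/sched.h of that era (fields trimmed):

    struct evtchn {
        u8  state;            /* ECS_FREE, ECS_UNBOUND, ECS_INTERDOMAIN... */
        u8  consumer_is_xen;  /* consumed by Xen itself, not by a guest? */
        u16 notify_vcpu_id;   /* VCPU to which events are delivered */
        /* union of per-state binding details omitted */
    };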

On 21/12/07 08:39, "tgh" <wwwwww4187@sina.com.cn> wrote:

> hi
>   I read the code of the event channel mechanism, and I am confused by
> the variable named "consumer_is_xen" in the evtchn struct. What is the
> function of consumer_is_xen? Does a dom use an event channel to
> communicate with the hypervisor? Why not a hypercall? And in which
> condition is an event channel used in this way, that is, the dom issues
> an event to the hypervisor or Xen, and Xen is the consumer?
> 
> Thanks in advance
> 
> 
> 
> 
> Mark Williamson wrote:
>>>    For each domU, is there a unique shared memory region (2-way
>>> circular queue) and event channel (one shared memory region and event
>>> channel per domU), or is there only one shared memory region and
>>> interdomain event channel (for every domU)?
>>>     
>> 
>> Each domain has a separate shared memory page and event channel.
>> Actually, in general, there are multiple shared memory areas and event
>> channels per domU.
>> 
>> Each virtual device (e.g. virtual network interface) may require its own
>> separate shared memory page and event channel to talk to the backend.  So if
>> you have a domain with two vifs it'll need two shared memory pages and two
>> event channels.
>> 
>> The block driver will also want a memory page and event channel for each
>> virtual block device.
>> 
>> And so on.
>> 
>> Hope this helps,
>> 
>> Cheers,
>> Mark
>> 
>>   
> 
> 


* Re: shared memory and event channel
From: tgh @ 2007-12-21  8:39 UTC
  To: Mark Williamson, Keir Fraser; +Cc: xen-devel, Amit Singh

hi
  I read the code of the event channel mechanism, and I am confused by
the variable named "consumer_is_xen" in the evtchn struct. What is the
function of consumer_is_xen? Does a dom use an event channel to
communicate with the hypervisor? Why not a hypercall? And in which
condition is an event channel used in this way, that is, the dom issues
an event to the hypervisor or Xen, and Xen is the consumer?

Thanks in advance




Mark Williamson wrote:
>>    For each domU, is there a unique shared memory region (2-way
>> circular queue) and event channel (one shared memory region and event
>> channel per domU), or is there only one shared memory region and
>> interdomain event channel (for every domU)?
>>     
>
> Each domain has a separate shared memory page and event channel.  Actually, in 
> general, there are multiple shared memory areas and event channels per domU.
>
> Each virtual device (e.g. virtual network interface) may require its own 
> separate shared memory page and event channel to talk to the backend.  So if 
> you have a domain with two vifs it'll need two shared memory pages and two 
> event channels.
>
> The block driver will also want a memory page and event channel for each 
> virtual block device.
>
> And so on.
>
> Hope this helps,
>
> Cheers,
> Mark
>
>   


* Re: shared memory and event channel
From: Mark Williamson @ 2007-11-28  1:45 UTC
  To: xen-devel; +Cc: Amit Singh

>    For each domU, is there a unique shared memory region (2-way
> circular queue) and event channel (one shared memory region and event
> channel per domU), or is there only one shared memory region and
> interdomain event channel (for every domU)?

Each domain has a separate shared memory page and event channel.  Actually, in 
general, there are multiple shared memory areas and event channels per domU.

Each virtual device (e.g. virtual network interface) may require its own 
separate shared memory page and event channel to talk to the backend.  So if 
you have a domain with two vifs it'll need two shared memory pages and two 
event channels.

The block driver will also want a memory page and event channel for each 
virtual block device.

And so on.
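
Concretely, this is why a backend reads a fresh set of shared-page and
event-channel parameters out of xenstore for every device instance; a
hedged sketch in the style of the Linux netback probe path ("demo" is a
placeholder name, and the xenstore keys are the ones netfront publishes):

    /* One call per vif instance: two vifs mean two ring pages and two
     * event channels, read from that vif's own xenstore directory. */
    static int demo_read_ring_params(struct xenbus_device *dev)
    {
        unsigned long tx_ring_ref, rx_ring_ref;
        unsigned int evtchn;
        int err;

        err = xenbus_gather(XBT_NIL, dev->otherend,
                            "tx-ring-ref", "%lu", &tx_ring_ref,
                            "rx-ring-ref", "%lu", &rx_ring_ref,
                            "event-channel", "%u", &evtchn,
                            NULL);
        if (err)
            return err;

        /* The backend now maps the two pages and binds the channel. */
        printk(KERN_INFO "demo: tx=%lu rx=%lu evtchn=%u\n",
               tx_ring_ref, rx_ring_ref, evtchn);
        return 0;
    }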

Hope this helps,

Cheers,
Mark

-- 
Dave: Just a question. What use is a unicyle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!


* shared memory and event channel
From: Amit Singh @ 2007-11-19  7:59 UTC
  To: xen-devel




Hi,

   For each domU, is there a unique shared memory region (2-way circular
queue) and event channel (one shared memory region and event channel per
domU), or is there only one shared memory region and interdomain event
channel (for every domU)?


regards:

Amit

