* [RFC] scf: SCF device tree and configuration documentation
@ 2017-05-04  9:32 Andrii Anisov
  2017-05-04 10:03 ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-04  9:32 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrii Anisov

From: Andrii Anisov <andrii_anisov@epam.com>

Description of SCF-specific device tree properties and of SCF configuration
using the device tree.

Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
---

Dear All,

I would like to present a concept of SCF [1] configuration using device tree.
The idea is that the framework configuration is too complex to be passed via
the Xen command line (for Dom0) or the configuration file (for DomU). So the
configuration is done using a device tree with special properties which either
mark a device to be shared by the framework or describe a node as a virtual
coprocessor.
Please note that the partial device tree which is used for DomU configuration
is passed to the hypervisor as a binary blob in order to be processed by SCF.

[1] - Shared coprocessor framework https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg01966.html

---
 docs/misc/arm/device-tree/scf.txt |  16 +++
 docs/misc/arm/scf.txt             | 250 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 266 insertions(+)
 create mode 100644 docs/misc/arm/device-tree/scf.txt
 create mode 100644 docs/misc/arm/scf.txt

diff --git a/docs/misc/arm/device-tree/scf.txt b/docs/misc/arm/device-tree/scf.txt
new file mode 100644
index 0000000..9b9ac14
--- /dev/null
+++ b/docs/misc/arm/device-tree/scf.txt
@@ -0,0 +1,16 @@
+Shared coprocessor framework configuration
+==========================================
+
+Any device whose node has the "xen,coproc" property set will be owned by SCF
+and will not be directly exposed to Dom0. The "xen,coproc" property is only
+meaningful for the system device tree which is passed to Xen.
+
+Any node with the "xen,vcoproc" property will be processed by SCF to create
+a virtual coprocessor. If successful, the node will be injected into the
+domain's device tree with that property removed. The property value is a
+stringified path or an alias (from the system device tree) of the physical
+coproc device the virtual coprocessor will run on. That physical coprocessor
+device node has to be marked with the "xen,coproc" property.
+The "xen,vcoproc" property is meaningful both for the system device tree and
+for a DomU partial device tree.
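
For illustration, the two properties could be combined as in the following sketch (node names, addresses, and the compatible string are placeholders borrowed from the example in docs/misc/arm/scf.txt, not a normative binding example):

```
/* System device tree: physical coprocessor handed over to SCF. */
coproc0: pcoproc0@0xe6110000 {
	compatible = "vendor_xxx,coproc_xxx";
	reg = <0x0 0xe6110000 0x0 0x1000>;
	xen,coproc;
};

/* Virtual coprocessor backed by the physical node above. */
coproc0@e6110000 {
	reg = <0x0 0xe6110000 0x0 0x1000>;
	xen,vcoproc = "/soc/pcoproc0@0xe6110000";
};
```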
+
diff --git a/docs/misc/arm/scf.txt b/docs/misc/arm/scf.txt
new file mode 100644
index 0000000..f74e74d
--- /dev/null
+++ b/docs/misc/arm/scf.txt
@@ -0,0 +1,250 @@
+Share a coprocessor among several domains using the guest's Device Tree
+=======================================================================
+
+This example configures two dummy coprocessors and the corresponding virtual
+coprocessors, and is based on an R-Car Gen3 device tree already adjusted for
+Dom0. Free iomem ranges not assigned to any real device are used for the dummy
+coprocessor definitions. One dummy coprocessor has a single iomem range and
+one irq; the other has several named iomem ranges and irqs.
+Two virtual instances of each coprocessor are configured for Dom0 using the
+system device tree, and the other two are configured for DomU using a partial
+device tree.
+
+Let us have two dummy coprocessors defined as follows:
+
+\ {
+	...
+	aliases {
+		...
+		pcoproc0 = &coproc0;
+		pcoproc1 = &coproc1;
+		...
+	};
+	...
+	soc {
+		...
+		coproc0: pcoproc0@0xe6110000 {
+			compatible = "vendor_xxx,coproc_xxx";
+			reg = <0x0 0xe6110000 0x0 0x1000>;
+			interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+			status = "okay";
+		};
+
+		coproc1: pcoproc1@0xe6182000 {
+			compatible = "vendor_xxx,coproc_xxc";
+			reg = <0x0 0xe6182000 0x0 0x1000>,
+			      <0x0 0xe6184000 0x0 0x1000>,
+			      <0x0 0xe6188000 0x0 0x8000>;
+			reg-names = "op", "mmu", "sram";
+			interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "op", "mmu";
+			status = "okay";
+		};
+
+		...
+	};
+	...
+};
+
+Notes:
+    * If a coprocessor has several iomem ranges, they must be named; the same
+      applies to interrupts. The names are used for matching, both when
+      probing the shared coprocessor platform code and when mapping virtual
+      coprocessors into domains.
+
+1) Mark the coprocessor node to let Xen know it will be used for sharing.
+This is done in the device tree node describing the device by adding the
+property "xen,coproc".
+
+\ {
+	...
+	aliases {
+		...
+		pcoproc0 = &coproc0;
+		pcoproc1 = &coproc1;
+		...
+	};
+	...
+	soc {
+		...
+		coproc0: pcoproc0@0xe6110000 {
+			compatible = "vendor_xxx,coproc_xxx";
+			reg = <0x0 0xe6110000 0x0 0x1000>;
+			interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+			status = "okay";
++			xen,coproc;
+		};
+
+		coproc1: pcoproc1@0xe6182000 {
+			compatible = "vendor_xxx,coproc_xxc";
+			reg = <0x0 0xe6182000 0x0 0x1000>,
+			      <0x0 0xe6184000 0x0 0x1000>,
+			      <0x0 0xe6188000 0x0 0x8000>;
+			reg-names = "op", "mmu", "sram";
+			interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "op", "mmu";
+			status = "okay";
++			xen,coproc;
+		};
+		...
+	};
+	...
+};
+
+2) Create the virtual coprocessor nodes. Each virtual coprocessor node must
+contain the property "xen,vcoproc". The value of this property is a string
+holding the full name of a physical coproc node, or an alias valid within the
+system device tree.
+For a vcoproc's MMIOs you should find holes of suitable size in the domain
+memory layout. For the first instance of a virtual coprocessor in a domain you
+can freely use the same MMIO ranges as the corresponding physical coprocessor.
+A virtual coprocessor in a domain uses the same IRQ(s) as the physical
+coprocessor. If several virtual coprocessors backed by one physical
+coprocessor are provided to a domain, they share the same IRQ(s).
+
+\ {
+	...
+	aliases {
+		...
+		pcoproc0 = &coproc0;
+		pcoproc1 = &coproc1;
+		...
+	};
+	...
+	soc {
+		...
+		coproc0: pcoproc0@0xe6110000 {
+			compatible = "vendor_xxx,coproc_xxx";
+			reg = <0x0 0xe6110000 0x0 0x1000>;
+			interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+			status = "okay";
+			xen,coproc;
+		};
++
++		coproc0@e6110000 {
++			reg = <0x0 0xe6110000 0x0 0x1000>;
++			/* reference a pcoproc by a full node name */
++			xen,vcoproc = "/soc/pcoproc0@0xe6110000";
++		};
++		coproc0@e6114000 {
++			/* reference a pcoproc by an alias */
++			reg = <0x0 0xe6114000 0x0 0x1000>;
++			xen,vcoproc = "pcoproc0";
++		};
+
+		coproc1: pcoproc1@0xe6182000 {
+			compatible = "vendor_xxx,coproc_xxc";
+			reg = <0x0 0xe6182000 0x0 0x1000>,
+			      <0x0 0xe6184000 0x0 0x1000>,
+			      <0x0 0xe6188000 0x0 0x8000>;
+			reg-names = "op", "mmu", "sram";
+			interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "op", "mmu";
+			status = "okay";
+			xen,coproc;
+		};
+
++		coproc1@0xe6182000 {
++			reg = <0x0 0xe6182000 0x0 0x1000>,
++			      <0x0 0xe6184000 0x0 0x1000>,
++			      <0x0 0xe6188000 0x0 0x8000>;
++			reg-names = "op", "mmu", "sram";
++			xen,vcoproc = "/soc/pcoproc1@0xe6182000";
++		};
++
++		coproc1@0xe619a000 {
++			reg = <0x0 0xe619a000 0x0 0x1000>,
++			      <0x0 0xe6152000 0x0 0x1000>,
++			      <0x0 0xe6058000 0x0 0x8000>;
++			reg-names = "op", "mmu", "sram";
++		xen,vcoproc = "pcoproc1";
++		};
+
+		...
+	};
+	...
+};
+
+Notes:
+    * As you may notice, the virtual coprocessor "coproc1@0xe619a000" has its
+      iomem ranges mapped to arbitrary addresses; their relative offsets are
+      not kept. This does not cause issues because range-name matching is
+      used.
+
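The name-based matching described in the note above can be sketched as follows (illustrative Python, not actual SCF/Xen code; the data mirrors the coproc1 example):

```python
# Hypothetical sketch of name-based iomem range matching: each guest-side
# (vcoproc) range is tied to the physical range with the same reg-name,
# so the guest-side addresses can be chosen freely.

def match_ranges(pcoproc_ranges, vcoproc_ranges):
    """Map each named vcoproc iomem range to the same-named physical range."""
    phys_by_name = dict(pcoproc_ranges)
    mapping = {}
    for name, guest_range in vcoproc_ranges:
        if name not in phys_by_name:
            raise KeyError("vcoproc range '%s' has no physical match" % name)
        mapping[guest_range] = phys_by_name[name]
    return mapping

# Physical coproc1 ranges (base, size) from the system device tree above.
pcoproc1 = [("op",   (0xe6182000, 0x1000)),
            ("mmu",  (0xe6184000, 0x1000)),
            ("sram", (0xe6188000, 0x8000))]

# Guest-side ranges of coproc1@0xe619a000, at arbitrary addresses.
vcoproc1 = [("op",   (0xe619a000, 0x1000)),
            ("mmu",  (0xe6152000, 0x1000)),
            ("sram", (0xe6058000, 0x8000))]

mapping = match_ranges(pcoproc1, vcoproc1)
# The vcoproc "op" block is backed by the physical "op" block.
assert mapping[(0xe619a000, 0x1000)] == (0xe6182000, 0x1000)
```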
+3) Create a partial device tree describing the virtual coprocessors provided
+to DomU:
+
+/dts-v1/;
+
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+/ {
+    /* #*cells are here to keep DTC happy */
+    #address-cells = <2>;
+    #size-cells = <2>;
+
+	soc {
+		compatible = "simple-bus";
+		ranges;
+		#address-cells = <2>;
+		#size-cells = <2>;
+
+		coproc0@e6110000 {
+			xen,vcoproc = "/soc/pcoproc0@0xe6110000";
+			compatible = "vendor_xxx,coproc_xxx";
+			reg = <0x0 0xe6110000 0x0 0x1000>;
+			interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+			status = "okay";
+		};
+		coproc0@e6114000 {
+			xen,vcoproc = "pcoproc0";
+			compatible = "vendor_xxx,coproc_xxx";
+			reg = <0x0 0xe6114000 0x0 0x1000>;
+			interrupts = <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+			status = "okay";
+		};
+
+		coproc1@0xe6182000 {
+			xen,vcoproc = "/soc/pcoproc1@0xe6182000";
+			compatible = "vendor_xxx,coproc_xxc";
+			reg = <0x0 0xe6182000 0x0 0x1000>,
+				<0x0 0xe6184000 0x0 0x1000>,
+				<0x0 0xe6188000 0x0 0x8000>;
+			reg-names = "op", "mmu", "sram";
+			interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>,
+					<GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "op", "mmu";
+		};
+
+		coproc1@0xe619a000 {
+			xen,vcoproc = "pcoproc1";
+			compatible = "vendor_xxx,coproc_xxc";
+			reg = <0x0 0xe619a000 0x0 0x1000>,
+				<0x0 0xe6152000 0x0 0x1000>,
+				<0x0 0xe6058000 0x0 0x8000>;
+			reg-names = "op", "mmu", "sram";
+			interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>,
+					<GIC_SPI 135 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "op", "mmu";
+		};
+	};
+};
+
+Notes:
+    * For a Dom0 device tree, in vcoproc nodes you should specify only the
+      properties you would like to override; other properties will be taken
+      from the physical coprocessor node.
+    * In a DomU partial device tree you should provide all the properties
+      needed for the virtual coprocessor node in the DomU device tree.
+
+4) Compile the partial guest device tree with dtc (Device Tree Compiler),
+e.g. "dtc -I dts -O dtb -o domu.dtb domu.dts" (if the source uses C-style
+#include directives, as above, it has to be run through the C preprocessor
+first). For our purpose, the compiled file is called domu.dtb and placed
+in /xen/ in Dom0.
+
+5) Specify the partial device tree in the DomU configuration file:
+
+device_tree = "/xen/domu.dtb"
+
+6) Nothing else needs to be specified in the domain configuration file. All
+SCF configuration is described in the system device tree and the partial
+device tree.
+
+
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04  9:32 [RFC] scf: SCF device tree and configuration documentation Andrii Anisov
@ 2017-05-04 10:03 ` Andrii Anisov
  2017-05-04 10:41   ` Julien Grall
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-04 10:03 UTC (permalink / raw)
  To: Andrii Anisov, xen-devel; +Cc: Julien Grall, Stefano Stabellini

Dear All,

During the topic implementation I faced a nasty issue with a DomU vgic 
configuration.
Originally I planned that the partial device tree for DomU is being 
passed to the
hypervisor from libxl__arch_domain_create, but it is too late to set 
vgic configuration
at this time. The DomU’s vgic is configured from 
libxl_arch_domain_prepare_config, and
now it seems it is a proper place to send device_tree to a hypervisor as 
a part of
configuration. Provided that I need the device tree blob even later, 
during device
tree creation for domU (in libxl_prepare_dtb) I would like to have it 
read from a file
once and keep it during guest domain creation process.

My problem is that domain creation functions stack mainly operates with 
auto generated
structures. For my current understanding it means that I have to 
introduce another
libxl type (f.e. File) which will read device tree file and operate with 
a binary blob.

Understanding the complexity of such a change I would like to hear 
comments about the
SCF configuration concept and feasibility of passing device tree blob 
from toolstack
to hypervisor as a part of domain configuration.

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 10:03 ` Andrii Anisov
@ 2017-05-04 10:41   ` Julien Grall
  2017-05-04 12:35     ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-05-04 10:41 UTC (permalink / raw)
  To: Andrii Anisov, Andrii Anisov, xen-devel; +Cc: Stefano Stabellini



On 04/05/17 11:03, Andrii Anisov wrote:
> Dear All,

Hi Andrii,

> During the topic implementation I faced a nasty issue with a DomU vgic
> configuration.
> Originally I planned that the partial device tree for DomU is being
> passed to the
> hypervisor from libxl__arch_domain_create, but it is too late to set
> vgic configuration
> at this time. The DomU’s vgic is configured from
> libxl_arch_domain_prepare_config, and
> now it seems it is a proper place to send device_tree to a hypervisor as
> a part of
> configuration.

I am not going to comment on the binding itself, but the idea of sending 
a device_tree to the hypervisor as part of configuration.

As you may have seen in the description of the option "device_tree", it 
is complex to verify the partial device tree because of the libfdt 
design. So without fully auditing libfdt and fixing the holes, this 
suggestion would be a vector attack to the hypervisor.

Whilst I do agree that this could be an interface between the user and 
the toolstack, we shall look into introducing a series of DOMCTL for the 
toolstack <-> hypervisor. What would be the issue to do that?

Cheers,

> Provided that I need the device tree blob even later,
> during device
> tree creation for domU (in libxl_prepare_dtb) I would like to have it
> read from a file
> once and keep it during guest domain creation process.
>
> My problem is that domain creation functions stack mainly operates with
> auto generated
> structures. For my current understanding it means that I have to
> introduce another
> libxl type (f.e. File) which will read device tree file and operate with
> a binary blob.
>
> Understanding the complexity of such a change I would like to hear
> comments about the
> SCF configuration concept and feasibility of passing device tree blob
> from toolstack
> to hypervisor as a part of domain configuration.
>
> *Andrii Anisov*
>
>

-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 10:41   ` Julien Grall
@ 2017-05-04 12:35     ` Andrii Anisov
  2017-05-04 12:46       ` Julien Grall
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-04 12:35 UTC (permalink / raw)
  To: Julien Grall, Andrii Anisov, xen-devel; +Cc: Stefano Stabellini

Hello Julien,

Thank you for your comments.

> As you may have seen in the description of the option "device_tree", 
> it is complex to verify the partial device tree because of the libfdt 
> design. So without fully auditing libfdt and fixing the holes, this 
> suggestion would be a vector attack to the hypervisor.
I understand these concerns, but not sure should we be scared of attack 
from a domain privileged enough to run domains?
It seems to me that system hypervisor attack through libfdt is the less 
valuable benefit from compromised dom0.
About a system stability issues due to libfdt holes I've just got a 
crazy idea: let us put a fdt unflatenning code into a generic EL0 
application :)

> Whilst I do agree that this could be an interface between the user and 
> the toolstack, we shall look into introducing a series of DOMCTL for 
> the toolstack <-> hypervisor. What would be the issue to do that?
There were two reasons turned me to using device tree as a configuration:
- I did not come up with a clear and flexible enough format to describe 
a SCF configuration. Any format which would allow us describe a complex 
virtual coprocessor with several mmios and irqs, and would allow 
describe breeding for a domain several virtual coprocessors running on 
one physical coprocessor will turn to a device tree node or something 
similar. This could fit somehow the DomU configuration file, but not XEN 
command line.
- The idea to reuse the same SCF configuration code both for Dom0 and 
DomU got me.

All said above is pretty true for me, but recently I have concerned that 
once we would like to spread SCF to x86 or systems configured through 
ACPI we will have to reconsider SCF configuration again.

-- 

*Andrii Anisov*



* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 12:35     ` Andrii Anisov
@ 2017-05-04 12:46       ` Julien Grall
  2017-05-04 15:50         ` Andrii Anisov
  2017-05-04 16:13         ` Andrii Anisov
  0 siblings, 2 replies; 20+ messages in thread
From: Julien Grall @ 2017-05-04 12:46 UTC (permalink / raw)
  To: Andrii Anisov, Andrii Anisov, xen-devel; +Cc: Stefano Stabellini



On 04/05/17 13:35, Andrii Anisov wrote:
> Hello Julien,

Hi Andrii,

> Thank you for your comments.
>
>> As you may have seen in the description of the option "device_tree",
>> it is complex to verify the partial device tree because of the libfdt
>> design. So without fully auditing libfdt and fixing the holes, this
>> suggestion would be a vector attack to the hypervisor.
> I understand these concerns, but not sure should we be scared of attack
> from a domain privileged enough to run domains?

Whilst the domain is privileged enough to run domains, the configuration 
can be provided by a user (for instance in cloud environment). So you 
cannot trust what the user provided and any missing invalidation would 
lead to a security issue (see XSA-95 [1] for instance).

That's why we specifically said only trusted device tree should be used 
with the option "device_tree".

> It seems to me that system hypervisor attack through libfdt is the less
> valuable benefit from compromised dom0.

It is much more valuable, DOM0 may still have limited access to 
functionally whilst the hypervisor has access to everything.

> About a system stability issues due to libfdt holes I've just got a
> crazy idea: let us put a fdt unflatenning code into a generic EL0
> application :)
>
>> Whilst I do agree that this could be an interface between the user and
>> the toolstack, we shall look into introducing a series of DOMCTL for
>> the toolstack <-> hypervisor. What would be the issue to do that?
> There were two reasons turned me to using device tree as a configuration:
> - I did not come up with a clear and flexible enough format to describe
> a SCF configuration. Any format which would allow us describe a complex
> virtual coprocessor with several mmios and irqs, and would allow
> describe breeding for a domain several virtual coprocessors running on
> one physical coprocessor will turn to a device tree node or something
> similar. This could fit somehow the DomU configuration file, but not XEN
> command line.
> - The idea to reuse the same SCF configuration code both for Dom0 and
> DomU got me.
>
> All said above is pretty true for me, but recently I have concerned that
> once we would like to spread SCF to x86 or systems configured through
> ACPI we will have to reconsider SCF configuration again.

This is another point of the problem. Also, I do believe that the domain 
creation should be limited to create the domain and not configuring the 
devices other than the strict necessary. For anything else (UART, 
co-processor), this should be done later on.

What I would like to understand is what are the information that the 
hypervisors as to know for sharing co-processor? So far I have:
	- MMIOs
	- Interrupts

Anything else?

Cheers,

[1] https://xenbits.xen.org/xsa/advisory-95.html

-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 12:46       ` Julien Grall
@ 2017-05-04 15:50         ` Andrii Anisov
  2017-05-05 13:49           ` Julien Grall
  2017-05-04 16:13         ` Andrii Anisov
  1 sibling, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-04 15:50 UTC (permalink / raw)
  To: Julien Grall, Andrii Anisov, xen-devel; +Cc: Stefano Stabellini

Julien,

>
> What I would like to understand is what are the information that the 
> hypervisors as to know for sharing co-processor? So far I have:
>     - MMIOs
>     - Interrupts
>
> Anything else?
IOMMU bindings.
This knowledge enough to get the physical coprocessor shared.

In order to spawn a virtual coprocessor (vcoproc) for some domain you 
have to provide additional configuration information:
     - Which physical coprocessor this vcoproc should represent to a 
domain ( a SoC could have several physical coprocs shared through the 
framework)
     - IRQ(s) (provided that no IRQ remapping is implemented in XEN) 
could be omitted (or used for verification only)
     - IOMEM ranges correspondence between this vcoproc instance and a 
physical coprocessor.

The latest point in the configuration is the most complex.
Let me explain a use case we faced now:
     - a GPU has two different firmwares implementing OpenGL and OpenCL
     - we need both GL and CL in the same domain working simultaneously 
(actually concurrently, but the concurrency should be transparent for 
domain, GPU drivers and firmwares)
In current case we are lucky, the GPU has a single mmio range.
We can implement such system using SCF: spawn two vcoprocs for a domain. 
Those vcoprocs will have own mmio range within the domain.
In a hypervisor those mmio ranges would be served by the same handler, 
but must be associated with the own vcoproc context.

In case a coprocessor has several mmio ranges things are getting worse.

In a device tree configuration concept I explicitly link vcoproc to 
pcoproc and keep mmio ranges correspondency with names.
I'm not sure how to keep this coincidence in a simple way.

-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 12:46       ` Julien Grall
  2017-05-04 15:50         ` Andrii Anisov
@ 2017-05-04 16:13         ` Andrii Anisov
  2017-05-05 14:12           ` Julien Grall
  1 sibling, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-04 16:13 UTC (permalink / raw)
  To: Julien Grall, Andrii Anisov, xen-devel; +Cc: Stefano Stabellini

Julien,


On 04.05.17 15:46, Julien Grall wrote:
>
>> I understand these concerns, but not sure should we be scared of attack
>> from a domain privileged enough to run domains?
>
> Whilst the domain is privileged enough to run domains, the 
> configuration can be provided by a user (for instance in cloud 
> environment). So you cannot trust what the user provided and any 
> missing invalidation would lead to a security issue (see XSA-95 [1] 
> for instance).
>
> That's why we specifically said only trusted device tree should be 
> used with the option "device_tree".
I see. But I also could state the same.

>> It seems to me that system hypervisor attack through libfdt is the less
>> valuable benefit from compromised dom0.
>
> It is much more valuable, DOM0 may still have limited access to 
> functionally whilst the hypervisor has access to everything.
Well, from dom0 you could start/stop any domain you want, grant access 
to any hardware, but only from hypervisor you could map another domain 
memory to access some runtime data. Is my understanding correct?

> Also, I do believe that the domain creation should be limited to 
> create the domain and not configuring the devices other than the 
> strict necessary. For anything else (UART, co-processor), 
But vgic is configured at the earliest stages of the domain creation. So 
we have to know at the moment which IRQs would be injected into the 
domain. And that is my current problem.

> this should be done later on.
What is the proper moment to spawn virtual coprocessors for guest 
domains from your point of view?

-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 15:50         ` Andrii Anisov
@ 2017-05-05 13:49           ` Julien Grall
  2017-05-05 14:13             ` Ian Jackson
  0 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-05-05 13:49 UTC (permalink / raw)
  To: Andrii Anisov, Andrii Anisov, xen-devel
  Cc: Wei Liu, Stefano Stabellini, Ian Jackson

On 04/05/17 16:50, Andrii Anisov wrote:
> Julien,

Hi Andrii,

>>
>> What I would like to understand is what are the information that the
>> hypervisors as to know for sharing co-processor? So far I have:
>>     - MMIOs
>>     - Interrupts
>>
>> Anything else?
> IOMMU bindings.
> This knowledge enough to get the physical coprocessor shared.
>
> In order to spawn a virtual coprocessor (vcoproc) for some domain you
> have to provide additional configuration information:
>     - Which physical coprocessor this vcoproc should represent to a
> domain ( a SoC could have several physical coprocs shared through the
> framework)
>     - IRQ(s) (provided that no IRQ remapping is implemented in XEN)
> could be omitted (or used for verification only)
>     - IOMEM ranges correspondence between this vcoproc instance and a
> physical coprocessor.
>
> The latest point in the configuration is the most complex.
> Let me explain a use case we faced now:
>     - a GPU has two different firmwares implementing OpenGL and OpenCL
>     - we need both GL and CL in the same domain working simultaneously
> (actually concurrently, but the concurrency should be transparent for
> domain, GPU drivers and firmwares)
> In current case we are lucky, the GPU has a single mmio range.
> We can implement such system using SCF: spawn two vcoprocs for a domain.
> Those vcoprocs will have own mmio range within the domain.
> In a hypervisor those mmio ranges would be served by the same handler,
> but must be associated with the own vcoproc context.

I have CCed Ian and Wei to comment on the difficult to describe a such 
interface in libxl. They may have insights how to do this properly.

@Ian @Wei: Andrii is suggesting to use Device-Tree for describing 
virtual co-processor as it seems it would be very difficult to do the 
same with the configuration file. See the suggested binding in [1].

>
> In case a coprocessor has several mmio ranges things are getting worse.
>
> In a device tree configuration concept I explicitly link vcoproc to
> pcoproc and keep mmio ranges correspondency with names.
> I'm not sure how to keep this coincidence in a simple way.
>

Cheers,

[1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg106924.html


-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-04 16:13         ` Andrii Anisov
@ 2017-05-05 14:12           ` Julien Grall
  2017-05-05 15:27             ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-05-05 14:12 UTC (permalink / raw)
  To: Andrii Anisov, Andrii Anisov, xen-devel
  Cc: Wei Liu, Stefano Stabellini, Ian Jackson

(CC tools maintainers)

On 04/05/17 17:13, Andrii Anisov wrote:
> Julien,

Hi Andrii,


>
> On 04.05.17 15:46, Julien Grall wrote:
>>
>>> I understand these concerns, but not sure should we be scared of attack
>>> from a domain privileged enough to run domains?
>>
>> Whilst the domain is privileged enough to run domains, the
>> configuration can be provided by a user (for instance in cloud
>> environment). So you cannot trust what the user provided and any
>> missing invalidation would lead to a security issue (see XSA-95 [1]
>> for instance).
>>
>> That's why we specifically said only trusted device tree should be
>> used with the option "device_tree".
> I see. But I also could state the same.

I would rather avoid taking this approach until we have explored all the 
possibilities.

We took this approach for platform device passthrough because we 
considered it would only be used for embedded platform where everything 
will be under control.

In the case of virtual co-processor, I can see a usage beyond embedded 
so we would need to deal with non-trusted input.

>
>>> It seems to me that system hypervisor attack through libfdt is the less
>>> valuable benefit from compromised dom0.
>>
>> It is much more valuable, DOM0 may still have limited access to
>> functionally whilst the hypervisor has access to everything.
> Well, from dom0 you could start/stop any domain you want, grant access
> to any hardware, but only from hypervisor you could map another domain
> memory to access some runtime data. Is my understanding correct?

The tools are here to provide a nice and comprehensible interface 
between Xen and the user. At the end the hypervisor is in charge of 
creating a domain, handling the memory...

Dom0 may have restricted access to the guest, but the hypervisor does not 
have such restrictions. So if you compromise the hypervisor you 
compromise the whole platform.

>
>> Also, I do believe that the domain creation should be limited to
>> create the domain and not configuring the devices other than the
>> strict necessary. For anything else (UART, co-processor),
> But vgic is configured at the earliest stages of the domain creation. So
> we have to know at the moment which IRQs would be injected into the
> domain. And that is my current problem.

No, the vGIC only needs to know the maximum number of interrupts it can 
handle. You don't need to route them at that time.

Currently, the toolstack decides on the number of SPIs supported (take a 
look at nr_spis).

IMHO, the toolstack should be able to figure out the number of 
interrupts required by the virtual co-processors and then update nr_spis 
accordingly.

>
>> this should be done later on.
> What is the proper moment to spawn virtual coprocessors for guest
> domains from your point of view?

The DOMCTL createdomain does not scale for things like co-processors. It 
is only here to initialize the bare minimum for a domain. You could 
create new DOMCTL to handle co-processors and call them afterwards from 
libxl__arch_domain_create.

Cheers,

-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 13:49           ` Julien Grall
@ 2017-05-05 14:13             ` Ian Jackson
  2017-05-05 17:07               ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Jackson @ 2017-05-05 14:13 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Andrii Anisov, Wei Liu, Andrii Anisov

Julien Grall writes ("Re: [RFC] scf: SCF device tree and configuration documentation"):
> I have CCed Ian and Wei to comment on the difficult to describe a such 
> interface in libxl. They may have insights how to do this properly.

Hi.

> @Ian @Wei: Andrii is suggesting to use Device-Tree for describing 
> virtual co-processor as it seems it would be very difficult to do the 
> same with the configuration file. See the suggested binding in [1].

Firstly, I should say that I'm starting fresh on this ARM coprocessor
topic.  So forgive me if I make any obvious mistakes.

> [1] https://www.mail-archive.com/xen-devel@lists.xen.org/msg106924.html

I read this proposal.

I agree that putting all the details (interrupts, mmio, etc.) in the
libxl config file is probably undesirable.

AFAICT, there, a particular coprocessor can be identified as a
portion of the host's DT.  Is that right ?  The plan seems to be to
take one such thing (or perhaps, several) and pass it through to "the
guest".

If these regions of the DT can be marked by this "xen,coproc"
property, can't we instead identify them (eg in the libxl domain
configuration) by their DT path ?  So then you could say "please pass
through coprocessor /aliases/soc/coproc0" or something.
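For illustration, a hedged sketch of the two halves of that idea: a host 
DT node marked with the "xen,coproc" property from this RFC's binding, 
and a hypothetical libxl config line referencing it by path (the node 
name, compatible string, addresses, and the "vcoprocs" option name are 
all invented for the example; no such libxl option exists today):

```dts
/* Host device tree: mark the coprocessor as owned by SCF. */
soc {
    coproc0: coproc@fd000000 {
        compatible = "vendor,example-coproc";   /* placeholder */
        reg = <0x0 0xfd000000 0x0 0x10000>;
        interrupts = <0 112 4>;
        xen,coproc;
    };
};
```

The libxl side could then be as simple as a line like
vcoprocs = [ "/soc/coproc@fd000000" ] in the domain configuration file.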

Also, the proposal there does not seem to provide any way to say which
guest should get any particular coprocessor.  It talks about "the
domain" (implicitly, "the" guest) - as if there could only be one.
Surely this is wrong ?

Ian.


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 14:12           ` Julien Grall
@ 2017-05-05 15:27             ` Andrii Anisov
  2017-05-05 17:51               ` Julien Grall
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-05 15:27 UTC (permalink / raw)
  To: Julien Grall, Andrii Anisov, xen-devel
  Cc: Wei Liu, Stefano Stabellini, Ian Jackson

Hello Julien,

On 05.05.17 17:12, Julien Grall wrote:
> (CC tools maintainers)
>
> On 04/05/17 17:13, Andrii Anisov wrote:
>> Julien,
>
> Hi Andrii,
>
>
>>
>> On 04.05.17 15:46, Julien Grall wrote:
>>>
>>>> I understand these concerns, but not sure should we be scared of 
>>>> attack
>>>> from a domain privileged enough to run domains?
>>>
>>> Whilst the domain is privileged enough to run domains, the
>>> configuration can be provided by a user (for instance in cloud
>>> environment). So you cannot trust what the user provided and any
>>> missing invalidation would lead to a security issue (see XSA-95 [1]
>>> for instance).
>>>
>>> That's why we specifically said only trusted device tree should be
>>> used with the option "device_tree".
>> I see. But I also could state the same.
> I would rather avoid to take this approach until we explored all the 
> possibility.
>
> We took this approach for platform device passthrough because we 
> considered it would only be used for embedded platform where 
> everything will be under control.
>
> In the case of virtual co-processor, I can see a usage beyond embedded 
> so we would need to deal with non-trusted input.
Yep, it's one of our targets to spread SCF beyond the embedded world.
>>> Also, I do believe that the domain creation should be limited to
>>> create the domain and not configuring the devices other than the
>>> strict necessary. For anything else (UART, co-processor),
>> But vgic is configured at the earliest stages of the domain creation. So
>> we have to know at the moment which IRQs would be injected into the
>> domain. And that is my current problem.
>
> No, the vGIC only needs to know the maximum number of interrupts it 
> can handle. You don't need to route them at that time.
>
> Currently, the toolstack is deciding on the number of spis supported 
> (give a look at nr_spis).
>
> IMHO, the toolstack should be able to figure out the number of 
> interrupts required by the virtual co-processors and then update 
> nr_spis accordingly.
This will lead to the need to parse (and maybe read) the dtb file first 
here, then one more time on DomU device tree generation.
>>> this should be done later on.
>> What is the proper moment to spawn virtual coprocessors for guest
>> domains from your point of view?
>
> The DOMCTL createdomain does not scale for things like co-processors. 
> It is only here to initialize the bare minimum for a domain. You could 
> create new DOMCTL to handle co-processors and call them afterwards 
> from libxl__arch_domain_create.
It is done in such a way now: from libxl__arch_domain_create, another 
domctl sends a pfdt blob to the hypervisor for SCF configuration.

-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 14:13             ` Ian Jackson
@ 2017-05-05 17:07               ` Andrii Anisov
  2017-05-05 17:20                 ` Ian Jackson
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-05 17:07 UTC (permalink / raw)
  To: Ian Jackson, Julien Grall
  Cc: xen-devel, Stefano Stabellini, Wei Liu, Andrii Anisov

Hello Ian,


On 05.05.17 17:13, Ian Jackson wrote:
> I read this proposal.
>
> I agree that putting all the details (interrupts, mmio, etc.) in the
> libxl config file is probably undesirable.
>
> AFAICT, there, a particular coprocessor can be identified as a
> portion of the host's DT.  Is that right ?  The plan seems to be to
> take one such thing (or perhaps, several) and pass it through to "the
> guest".
>
> If these regions of the DT can be marked by this "xen,coproc"
> property, can't we instead identify them (eg in the libxl domain
> configuration) by their DT path ?  So then you could say "please pass
> through coprocessor /aliases/soc/coproc0" or something.
Yep, exactly this approach worked for us while there was no requirement 
to spawn several vcoprocs from one physical coprocessor for a single 
guest domain.
That requirement leads to the need to map the "second" vcoproc's mmio 
ranges to different addresses, and potentially to use another IRQ, so 
that a domain can treat those devices separately.

> Also, the proposal there does not seem to provide any way to say which
> guest should get any particular coprocessor.
No, it's not about a domain getting (owning) a coprocessor, but about 
getting a virtual coprocessor (vcoproc).
Actually the vcoproc abstraction is inspired by the vcpu abstraction; 
maybe that view helps to get the idea.
Please also refer to [1] for a high-level overview of the topic.

>    It talks about "the
> domain" (implicitly, "the" guest) - as if there could only be one.
> Surely this is wrong ?
Yep, several domains (domU) could be created using the same partial 
device tree describing a configuration of virtual coprocessors.

[1] 
https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg01966.html

-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 17:07               ` Andrii Anisov
@ 2017-05-05 17:20                 ` Ian Jackson
  2017-05-10  9:16                   ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Jackson @ 2017-05-05 17:20 UTC (permalink / raw)
  To: Andrii Anisov
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Wei Liu, Andrii Anisov

Andrii Anisov writes ("Re: [RFC] scf: SCF device tree and configuration documentation"):
> On 05.05.17 17:13, Ian Jackson wrote:
> > If these regions of the DT can be marked by this "xen,coproc"
> > property, can't we instead identify them (eg in the libxl domain
> > configuration) by their DT path ?  So then you could say "please pass
> > through coprocessor /aliases/soc/coproc0" or something.
>
> Yep, exactly this approach worked for us while there was no requirement 
> to spawn several vcoprocs from one physical coprocessor for a single 
> guest domain.
> That requirement leads to the need to map the "second" vcoproc's mmio 
> ranges to different addresses, and potentially to use another IRQ, so 
> that a domain can treat those devices separately.

Why wouldn't the toolstack simply choose appropriate irqs/mmio
ranges ?  I would expect the virtual irqs/mmio ranges to not
necessarily match the physical ones anyway.  Is choosing these ranges
complicated ?

> > Also, the proposal there does not seem to provide any way to say which
> > guest should get any particular coprocessor.
> No, its not about getting (owning) a coprocessor for some domain, but 
> get a virtual coprocessor (vcoproc).
> Actually a vcoproc abstraction is inspired by vcpu abstraction, maybe 
> such view would help to get an idea.
> Please also refer [1] to get the high level overview of the topic.

Thanks.  I still feel like I am labouring under some misapprehensions.
I hope you'll clear them up as they become evident...

What I mean is that your previous proposal doesn't provide any way to
say which guest(s) should get instances of this virtual coprocessor.
Some guests should get none; some one; etc.

Also, I am perplexed by your suggestion that a single physical coproc
might be presented to a guest as two vcoprocs.  If your sharing
strategy is context-switching, is this not going to result in a lot of
context-switching, whenever the guest (which thinks it has two
coprocs) touches one and then the other ?

Obviously there are many things I still don't understand here.

Regards,
Ian.


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 15:27             ` Andrii Anisov
@ 2017-05-05 17:51               ` Julien Grall
  2017-05-10 15:30                 ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-05-05 17:51 UTC (permalink / raw)
  To: Andrii Anisov, Andrii Anisov, xen-devel
  Cc: Wei Liu, Stefano Stabellini, Ian Jackson



On 05/05/2017 04:27 PM, Andrii Anisov wrote:
> Hello Julien,
>
> On 05.05.17 17:12, Julien Grall wrote:
>> (CC tools maintainers)
>>
>> On 04/05/17 17:13, Andrii Anisov wrote:
>>> Julien,
>>
>> Hi Andrii,
>>
>>
>>>
>>> On 04.05.17 15:46, Julien Grall wrote:
>>>>
>>>>> I understand these concerns, but not sure should we be scared of
>>>>> attack
>>>>> from a domain privileged enough to run domains?
>>>>
>>>> Whilst the domain is privileged enough to run domains, the
>>>> configuration can be provided by a user (for instance in cloud
>>>> environment). So you cannot trust what the user provided and any
>>>> missing invalidation would lead to a security issue (see XSA-95 [1]
>>>> for instance).
>>>>
>>>> That's why we specifically said only trusted device tree should be
>>>> used with the option "device_tree".
>>> I see. But I also could state the same.
>> I would rather avoid to take this approach until we explored all the
>> possibility.
>>
>> We took this approach for platform device passthrough because we
>> considered it would only be used for embedded platform where
>> everything will be under control.
>>
>> In the case of virtual co-processor, I can see a usage beyond embedded
>> so we would need to deal with non-trusted input.
> Yep, it's one of our targets to spread SCF beyond the embedded world.
>>>> Also, I do believe that the domain creation should be limited to
>>>> create the domain and not configuring the devices other than the
>>>> strict necessary. For anything else (UART, co-processor),
>>> But vgic is configured at the earliest stages of the domain creation. So
>>> we have to know at the moment which IRQs would be injected into the
>>> domain. And that is my current problem.
>>
>> No, the vGIC only needs to know the maximum number of interrupts it
>> can handle. You don't need to route them at that time.
>>
>> Currently, the toolstack is deciding on the number of spis supported
>> (give a look at nr_spis).
>>
>> IMHO, the toolstack should be able to figure out the number of
>> interrupts required by the virtual co-processors and then update
>> nr_spis accordingly.
> This will lead to the need to parse (and maybe read) the dtb file first
> here, then one more time on DomU device tree generation.

The code is not set in stone. It can be reworked to avoid that.

>> The DOMCTL createdomain does not scale for things like co-processors.
>> It is only here to initialize the bare minimum for a domain. You could
>> create new DOMCTL to handle co-processors and call them afterwards
>> from libxl__arch_domain_create.
> It is done in such way now. From libxl__arch_domain_create another
> domctl sends a pfdt blob to hypervisor for SCF configuration.

Passing an fdt blob to the hypervisor should only be used as a last 
resort. Whilst I agree it would be difficult to get a suitable interface 
between the user and the toolstack, C allows a lot of freedom to create 
suitable structures. So I still don't understand why you want to use a 
device tree between the toolstack and the hypervisor.

Cheers,

-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 17:20                 ` Ian Jackson
@ 2017-05-10  9:16                   ` Andrii Anisov
  2017-05-10 14:22                     ` Ian Jackson
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-10  9:16 UTC (permalink / raw)
  To: Ian Jackson
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Wei Liu, Andrii Anisov

Hello Ian,


On 05.05.17 20:20, Ian Jackson wrote:
> Why wouldn't the toolstack simply choose appropriate irqs/mmio
> ranges ?  I would expect the virtual irqs/mmio ranges to not
> necessarily match the physical ones anyway.  Is choosing these ranges
> complicated ?
This could make sense. Choosing ranges should not be really complicated. 
The point here is that we need these ranges both for hypervisor and domU 
device tree generation, and we should keep them consistent. Moreover, we 
need a coprocessor device tree node for domU, because it could describe 
some specifics beyond the iomem/irq presentation of the coprocessor.

> What I mean is that your previous proposal doesn't provide any way to
> say which guest(s) should get instances of this virtual coprocessor.
> Some guests should get none; some one; etc.
I think those guests which are configured to have virtual 
coprocessor(s) will get some; those which are not so configured will not.
I would not set any specific limitations here. Should I?

> Also, I am perplexed by your suggestion that a single physical coproc
> might be presented to a guest as two vcoprocs.  If your sharing
> strategy is context-switching, is this not going to result in a lot of
> context-switching, whenever the guest (which thinks it has two
> coprocs) touches one and then the other ?
I would not treat this case as too special. Any touches of a 
scheduled-out virtual coprocessor would be handled by the IO access 
emulation mechanism, and it is not necessary to trigger a context switch 
at that moment.
So the number of context switches is rather a matter of the scheduling 
algorithm, I guess.


-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-10  9:16                   ` Andrii Anisov
@ 2017-05-10 14:22                     ` Ian Jackson
  2017-05-10 16:26                       ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Ian Jackson @ 2017-05-10 14:22 UTC (permalink / raw)
  To: Andrii Anisov
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Wei Liu, Andrii Anisov

Andrii Anisov writes ("Re: [RFC] scf: SCF device tree and configuration documentation"):
> On 05.05.17 20:20, Ian Jackson wrote:
> > Why wouldn't the toolstack simply choose appropriate irqs/mmio
> > ranges ?  I would expect the virtual irqs/mmio ranges to not
> > necessarily match the physical ones anyway.  Is choosing these ranges
> > complicated ?
>
> This could make sense. Choosing ranges should not be really complicated. 
> The point here is that we need these ranges both for hypervisor and domU 
> device tree generation, and we should keep them consistent.

Obviously I am still confused, because this doesn't seem to make sense
to me.  I was imagining the toolstack generating virtual irqs/mmio
ranges which the guest would see.  It would then arrange for Xen to
program the hardware appropriately, to direct those ranges to the
physical hardware (when the coproc is exposed to the guest).

And I don't see why the physical address ranges used by Xen to
manipulate the coproc would have to be the same as the guest
pseudophysical addresses used by the guest.

As for device tree generation, any kind of passthrough of a DT device
is going to involve filtering/processing/amending the DT information
for the device: something is going to have to take the information
from the physical DT (as provided to Xen), find the relevant parts
(the parts which relate to the particular device), and substitute
addresses etc., and insert the result into the guest DT.

> > What I mean is that your previous proposal doesn't provide any way to
> > say which guest(s) should get instances of this virtual coprocessor.
> > Some guests should get none; some one; etc.
> 
> I think those guests which are configured to have a virtual 
> coprocessor(s) will have some.

So this will be done by something in the domain configuration ?

> Those which are not configured to have - will not.
> I would not set any specific limitations here. Shall I?

Err, no.  At least, I don't think so.  That's not what I was asking
for.

> > Also, I am perplexed by your suggestion that a single physical coproc
> > might be presented to a guest as two vcoprocs.  If your sharing
> > strategy is context-switching, is this not going to result in a lot of
> > context-switching, whenever the guest (which thinks it has two
> > coprocs) touches one and then the other ?
> 
> I would not treat this case too specific. Any touches of the scheduled 
> out virtual coprocessor would be handled by IO access emulation 
> mechanism, and it is not necessary to evoke the context switch at the 
> moment.

The IO access emulation just directs the access to somewhere where it
can be emulated.  Does that mean you intend for there to be a software
emulation of the vcoproc, as well as hardware passthrough (with
context switching) ?

Ian.
(still rather baffled, I'm afraid)


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-05 17:51               ` Julien Grall
@ 2017-05-10 15:30                 ` Andrii Anisov
  2017-05-10 17:40                   ` Julien Grall
  0 siblings, 1 reply; 20+ messages in thread
From: Andrii Anisov @ 2017-05-10 15:30 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Ian Jackson, Wei Liu, Andrii Anisov

Julien,

On 05.05.17 20:51, Julien Grall wrote:

> The code is not set in stone. It can be reworked to avoid that.
Yep.
I would like not to introduce dtb-related changes into libxl_create.c, 
and to keep as much as possible in libxl_arm.c. The only common data 
structure between libxl__arch_domain_prepare_config() and 
libxl__arch_domain_create() is the autogenerated libxl_domain_config 
structure.
The option I see now is to introduce a kind of auto type "File" which 
would read and manage the file blob within the libxl_domain_config 
structure. Honestly, I do not like this option. Any suggestions?

> So I still don't understand why you want to use a device tree between 
> the toolstack and the hypervisor.
It has a functional implementation now (except for the vgic settings).
But as you said, the code is not set in stone. I'm here to hear your 
valuable opinion, discuss options and see what ideas come up.

-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-10 14:22                     ` Ian Jackson
@ 2017-05-10 16:26                       ` Andrii Anisov
  0 siblings, 0 replies; 20+ messages in thread
From: Andrii Anisov @ 2017-05-10 16:26 UTC (permalink / raw)
  To: Ian Jackson
  Cc: xen-devel, Julien Grall, Stefano Stabellini, Wei Liu, Andrii Anisov


On 10.05.17 17:22, Ian Jackson wrote:
> The IO access emulation just directs the access to somewhere where it
> can be emulated.  Does that mean you intend for there to be a software
> emulation of the vcoproc, as well as hardware passthrough (with
> context switching) ?

The concept of "access emulation" is not about emulating coproc 
functionality; it is only about reacting to mmio accesses appropriately 
from the user (domain) point of view.
Basically, on a register read, return the shadowed value; on writes, 
stack them, to replay them to the HW once this vcoproc context is 
scheduled in.
I do realize that this part could be the most complex part of the SCF 
support code for some specific coprocessor.
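For illustration, the shadow/replay scheme above could be sketched like 
this (all names are invented for the sketch; a real implementation would 
register these as Xen mmio handlers and bound-check the accesses):

```c
#include <assert.h>
#include <stdint.h>

#define VCOPROC_NREGS 64

/* Per-vcoproc emulation state, used while the vcoproc is scheduled out. */
struct vcoproc_ctx {
    uint32_t shadow[VCOPROC_NREGS];   /* last values the domain saw/wrote */
    uint32_t pending[VCOPROC_NREGS];  /* writes stacked for later replay */
    uint64_t dirty;                   /* bitmap: which registers to replay */
};

/* Read trap: return the shadowed value instead of touching the HW. */
static uint32_t vcoproc_emul_read(struct vcoproc_ctx *c, unsigned reg)
{
    return c->shadow[reg];
}

/* Write trap: stack the write; it is not applied to the HW yet. */
static void vcoproc_emul_write(struct vcoproc_ctx *c, unsigned reg,
                               uint32_t val)
{
    c->pending[reg] = val;
    c->shadow[reg] = val;             /* later reads see the new value */
    c->dirty |= 1ULL << reg;
}

/* Replay the stacked writes to the HW when the vcoproc is scheduled in. */
static void vcoproc_replay(struct vcoproc_ctx *c,
                           void (*hw_write)(unsigned, uint32_t))
{
    for (unsigned r = 0; r < VCOPROC_NREGS; r++)
        if (c->dirty & (1ULL << r))
            hw_write(r, c->pending[r]);
    c->dirty = 0;
}
```

The complexity hinted at above lives in deciding, per coprocessor, which
registers can be shadowed this naively and which need smarter handling.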


>
> Obviously I am still confused, because this doesn't seem to make sense
> to me.  I was imagining the toolstack generating virtual irqs/mmio
> ranges which the guest would see.  It would then arrange for Xen to
> program the hardware appropriately, to direct those ranges to the
> physical hardware (when the coproc is exposed to the guest).
While you are virtualizing a coprocessor, you cannot map those address 
ranges for the domain permanently. At least while a vcoproc is scheduled 
out, the ranges should be unmapped and accesses should be handled by the 
access emulation mmio handlers.

> And I don't see why the physical address ranges used by Xen to
> manipulate the coproc would have to be the same as the guest
> pseudophysical addresses used by the guest.
No need to keep the ranges the same. But for proper access emulation 
functionality you have to identify the specific address ranges in order 
to assign the appropriate mmio handlers.

> As for device tree generation, any kind of passthrough of a DT device
> is going to involve filtering/processing/amending the DT information
> for the device: something is going to have to take the information
> from the physical DT (as provided to Xen), find the relevant parts
> (the parts which relate to the particular device), and substitute
> addresses etc., and insert the result into the guest DT.

Something like this is done now, except for taking the info from the 
physical DT. Currently it is assumed that the pfdt is OK if SCF can be 
configured with it. Not brilliant, but it has worked for me so far.

> So this will be done by something in the domain configuration ?
Yes, sure. I think the domain configuration file should have the needed 
config options.
But so far I have not come up with an appropriate config format, so I 
keep the configuration in a device tree for both Dom0 and DomU.
For DomU, only the partial device tree option is used so far: the 
toolstack parses the pfdt and, if nodes with "xen,vcoproc" are found, 
takes actions to configure SCF.



-- 

*Andrii Anisov*




* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-10 15:30                 ` Andrii Anisov
@ 2017-05-10 17:40                   ` Julien Grall
  2017-05-10 17:47                     ` Andrii Anisov
  0 siblings, 1 reply; 20+ messages in thread
From: Julien Grall @ 2017-05-10 17:40 UTC (permalink / raw)
  To: Andrii Anisov
  Cc: xen-devel, Stefano Stabellini, Ian Jackson, Wei Liu, Andrii Anisov



On 05/10/2017 04:30 PM, Andrii Anisov wrote:
> Julien,

Hi Andrii,

>
> On 05.05.17 20:51, Julien Grall wrote:
>
>> The code is not set in stone. It can be reworked to avoid that.
> Yep.
> I would like to not introduce changes related to dtb into
> libxl_create.c, keep as much as possible in libxl_arm.c . The only
> common data structure between libxl__arch_domain_prepare_config() and
> libxl__arch_domain_create() is an autogenerated libxl_domain_config
> structure.
> The option I see now is to introduce kind of auto type "File" which will
> read and manage file blob within libxl_domain_config structure.
> Honestly, I do not like this option. Any suggestions?

I don't know the toolstack much; I will leave it to Ian and Wei to 
comment on this.

>
>> So I still don't understand why you want to use a device tree between
>> the toolstack and the hypervisor.
> It has a functional (except vgic settings) implementation now.
> But as you said the code is not set in stone. I'm here to hear your
> valuable opinion, discuss options and see what ideas will come up.

Have you tried to define an interface using C structures? If not, my 
suggestion would be to do that first so we can discuss the alternatives.

This could be part of a new version of the design document to get all 
the context.

Cheers,

-- 
Julien Grall


* Re: [RFC] scf: SCF device tree and configuration documentation
  2017-05-10 17:40                   ` Julien Grall
@ 2017-05-10 17:47                     ` Andrii Anisov
  0 siblings, 0 replies; 20+ messages in thread
From: Andrii Anisov @ 2017-05-10 17:47 UTC (permalink / raw)
  To: Julien Grall
  Cc: xen-devel, Stefano Stabellini, Ian Jackson, Wei Liu, Andrii Anisov

Hello Julien,


On 10.05.17 20:40, Julien Grall wrote:
> Have you tried to define an interface using C structure? 
Not yet.
> If not, my suggestion would be to first do that so we can discuss on 
> other alternative.
Going to take this action soon.

-- 

*Andrii Anisov*




Thread overview: 20+ messages
2017-05-04  9:32 [RFC] scf: SCF device tree and configuration documentation Andrii Anisov
2017-05-04 10:03 ` Andrii Anisov
2017-05-04 10:41   ` Julien Grall
2017-05-04 12:35     ` Andrii Anisov
2017-05-04 12:46       ` Julien Grall
2017-05-04 15:50         ` Andrii Anisov
2017-05-05 13:49           ` Julien Grall
2017-05-05 14:13             ` Ian Jackson
2017-05-05 17:07               ` Andrii Anisov
2017-05-05 17:20                 ` Ian Jackson
2017-05-10  9:16                   ` Andrii Anisov
2017-05-10 14:22                     ` Ian Jackson
2017-05-10 16:26                       ` Andrii Anisov
2017-05-04 16:13         ` Andrii Anisov
2017-05-05 14:12           ` Julien Grall
2017-05-05 15:27             ` Andrii Anisov
2017-05-05 17:51               ` Julien Grall
2017-05-10 15:30                 ` Andrii Anisov
2017-05-10 17:40                   ` Julien Grall
2017-05-10 17:47                     ` Andrii Anisov
