* [PATCH 01/16] Introduce skeleton SUPPORT.md
@ 2017-11-13 15:41 George Dunlap
  2017-11-13 15:41 ` [PATCH 02/16] SUPPORT.md: Add core functionality George Dunlap
                   ` (16 more replies)
  0 siblings, 17 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Dario Faggioli, Tim Deegan, George Dunlap, Julien Grall,
	Paul Durrant, Jan Beulich, Tamas K Lengyel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

Add a machine-readable file to describe what features are in what
state of being 'supported', as well as information about how long this
release will be supported, and so on.

The document should be formatted using "semantic newlines" [1], to make
changes easier.

Begin with the basic framework.

Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
Signed-off-by: George Dunlap <george.dunlap@citrix.com>

[1] http://rhodesmill.org/brandon/2012/one-sentence-per-line/
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Dario Faggioli <dario.faggioli@citrix.com>
CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 196 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 196 insertions(+)
 create mode 100644 SUPPORT.md

diff --git a/SUPPORT.md b/SUPPORT.md
new file mode 100644
index 0000000000..d7f2ae45e4
--- /dev/null
+++ b/SUPPORT.md
@@ -0,0 +1,196 @@
+# Support statement for this release
+
+This document describes the support status,
+and in particular the security support status,
+of the Xen branch within which you find it.
+
+See the bottom of the file 
+for the definitions of the support status levels etc.
+
+# Release Support
+
+    Xen-Version: 4.10-unstable
+    Initial-Release: n/a
+    Supported-Until: TBD
+    Security-Support-Until: Unreleased - not yet security-supported
+
+# Feature Support
+
+# Format and definitions
+
+This file contains prose, and machine-readable fragments.
+The data in a machine-readable fragment relate to
+the section and subsection in which the fragment is found.
+
+The file is in markdown format.
+The machine-readable fragments are markdown literals
+containing RFC-822-like (deb822-like) data.
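+
+How such fragments might be consumed is sketched below in Python; the function name and the exact extraction rule (an indented line containing a colon starts or continues a fragment) are illustrative assumptions, not something defined by Xen or this file:

```python
# Hypothetical sketch: pull the indented literal fragments out of a
# SUPPORT.md-style document and parse their RFC-822-like "Key: value" lines.
# parse_support_fragments is an invented name, not part of Xen.

def parse_support_fragments(text):
    """Return a list of dicts, one per machine-readable fragment."""
    fragments = []
    current = None
    for line in text.splitlines():
        if line.startswith("    ") and ":" in line:
            key, _, value = line.strip().partition(":")
            if current is None:
                current = {}
                fragments.append(current)
            current[key.strip()] = value.strip()
        else:
            current = None  # any non-indented line ends the fragment
    return fragments

sample = """# Release Support

    Xen-Version: 4.10-unstable
    Supported-Until: TBD

### Credit Scheduler

    Status: Supported
"""

print(parse_support_fragments(sample))
# [{'Xen-Version': '4.10-unstable', 'Supported-Until': 'TBD'}, {'Status': 'Supported'}]
```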
+
+## Keys found in the Feature Support subsections
+
+### Status
+
+This gives the overall status of the feature,
+including security support status, functional completeness, etc.
+Refer to the detailed definitions below.
+
+If support differs based on implementation
+(for instance, x86 / ARM, Linux / QEMU / FreeBSD),
+one line for each set of implementations will be listed.
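+
+As a sketch (the helper name and parsing rule are assumptions for illustration, not defined by this file), such a qualified line can be split into key, implementation qualifier, and value:

```python
# Illustrative only: split "Status, <implementation>: <value>" into parts.
# An unqualified "Status: <value>" yields None for the qualifier.

def parse_status_line(line):
    key, _, value = line.strip().partition(":")
    name, _, qualifier = key.partition(",")
    return name.strip(), qualifier.strip() or None, value.strip()

print(parse_status_line("    Status, x86 PVH: Tech preview"))
# ('Status', 'x86 PVH', 'Tech preview')
print(parse_status_line("    Status: Supported"))
# ('Status', None, 'Supported')
```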
+
+## Definition of Status labels
+
+Each Status value corresponds to levels of security support,
+testing, stability, etc., as follows:
+
+### Experimental
+
+    Functional completeness: No
+    Functional stability: Here be dragons
+    Interface stability: Not stable
+    Security supported: No
+
+### Tech Preview
+
+    Functional completeness: Yes
+    Functional stability: Quirky
+    Interface stability: Provisionally stable
+    Security supported: No
+
+### Supported
+
+    Functional completeness: Yes
+    Functional stability: Normal
+    Interface stability: Yes
+    Security supported: Yes
+
+### Deprecated
+
+    Functional completeness: Yes
+    Functional stability: Quirky
+    Interface stability: No (as in, may disappear the next release)
+    Security supported: Yes
+
+All of these may appear in modified form.
+There are several interfaces, for instance,
+which are officially declared as not stable;
+in such cases a feature may be described as "Supported / Interface not stable".
+
+## Definition of the status label interpretation tags
+
+### Functionally complete
+
+Does it behave like a fully functional feature?
+Does it work on all expected platforms,
+or does it only work for a very specific sub-case?
+Does it have a sensible UI,
+or do you have to have a deep understanding of the internals
+to get it to work properly?
+
+### Functional stability
+
+What is the risk of it exhibiting bugs?
+
+General answers to the above:
+
+ * **Here be dragons**
+
+   Pretty likely to still crash / fail to work.
+   Not recommended unless you like life on the bleeding edge.
+
+ * **Quirky**
+
+   Mostly works but may have odd behavior here and there.
+   Recommended for playing around or for non-production use cases.
+
+ * **Normal**
+
+   Ready for production use.
+
+### Interface stability
+
+If I build a system based on the current interfaces,
+will they still work when I upgrade to the next version?
+
+ * **Not stable**
+
+   Interface is still in the early stages and
+   still fairly likely to be broken in future updates.
+
+ * **Provisionally stable**
+
+   We're not yet promising backwards compatibility,
+   but we think this is probably the final form of the interface.
+   It may still require some tweaks.
+
+ * **Stable**
+
+   We will try very hard to avoid breaking backwards compatibility,
+   and to fix any regressions that are reported.
+
+### Security supported
+
+Will XSAs be issued if security-related bugs are discovered
+in the functionality?
+
+If "no",
+anyone who finds a security-related bug in the feature
+will be advised to
+post it publicly to the Xen Project mailing lists
+(or contact another security response team,
+if a relevant one exists).
+
+Bugs found after the end of **Security-Support-Until**
+in the Release Support section will receive an XSA
+if they also affect newer, security-supported, versions of Xen.
+However, the Xen Project will not provide official fixes
+for non-security-supported versions.
+
+Three common 'diversions' from the 'Supported' category
+are given the following labels:
+
+  * **Supported, Not security supported**
+
+    Functionally complete, normal stability,
+    interface stable, but no security support
+
+  * **Supported, Security support external**
+  
+    This feature is security supported
+    by a different organization (not the Xen Project).
+    See **External security support** below.
+
+  * **Supported, with caveats**
+
+    This feature is security supported only under certain conditions,
+    or support is given only for certain aspects of the feature,
+    or the feature should be used with care
+    because it is easy to use insecurely without knowing it.
+    Additional details will be given in the description.
+
+### Interaction with other features
+
+Not all features interact well with all other features.
+Some features are only for HVM guests; some don't work with migration, &c.
+
+### External security support
+
+The Xen Project security team
+provides security support for Xen Project projects.
+
+We also provide security support for Xen-related code in Linux,
+which is an external project but doesn't have its own security process.
+
+External projects that provide their own security support for Xen-related features are listed below.
+
+  * QEMU https://wiki.qemu.org/index.php/SecurityProcess
+
+  * Libvirt https://libvirt.org/securityprocess.html
+
+  * FreeBSD https://www.freebsd.org/security/
+  
+  * NetBSD http://www.netbsd.org/support/security/
+  
+  * OpenBSD https://www.openbsd.org/security.html
+
+ 
-- 
2.15.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH 02/16] SUPPORT.md: Add core functionality
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:03   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 03/16] SUPPORT.md: Add some x86 features George Dunlap
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Nathan Studer, Wei Liu, Andrew Cooper, Dario Faggioli,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson

Core memory management and scheduling.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Dario Faggioli <dario.faggioli@citrix.com>
CC: Nathan Studer <nathan.studer@dornerworks.com>
---
 SUPPORT.md | 59 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index d7f2ae45e4..064a2f43e9 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
 
 # Feature Support
 
+## Memory Management
+
+### Memory Ballooning
+
+    Status: Supported
+
+## Resource Management
+
+### CPU Pools
+
+    Status: Supported
+
+Groups physical CPUs into distinct groups called "cpupools",
+with each pool having the capability
+of using different schedulers and scheduling properties.
+
+### Credit Scheduler
+
+    Status: Supported
+
+A weighted proportional fair share virtual CPU scheduler.
+This is the default scheduler.
+
+### Credit2 Scheduler
+
+    Status: Supported
+
+A general purpose scheduler for Xen,
+designed with particular focus on fairness, responsiveness, and scalability.
+
+### RTDS based Scheduler
+
+    Status: Experimental
+
+A soft real-time CPU scheduler
+built to provide guaranteed CPU capacity to guest VMs on SMP hosts.
+
+### ARINC653 Scheduler
+
+    Status: Supported
+
+A periodically repeating fixed timeslice scheduler.
+Currently only single-vcpu domains are supported.
+
+### Null Scheduler
+
+    Status: Experimental
+
+A very simple, very static scheduling policy 
+that always schedules the same vCPU(s) on the same pCPU(s). 
+It is designed for maximum determinism and minimum overhead
+on embedded platforms.
+
+### NUMA scheduler affinity
+
+    Status, x86: Supported
+
+Enables NUMA aware scheduling in Xen
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
-- 
2.15.0



* [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
  2017-11-13 15:41 ` [PATCH 02/16] SUPPORT.md: Add core functionality George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:09   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 04/16] SUPPORT.md: Add core ARM features George Dunlap
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson,
	Roger Pau Monne

Including host architecture support and guest types.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Roger Pau Monne <roger.pau@citrix.com>
---
 SUPPORT.md | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 064a2f43e9..6b09f98331 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -16,6 +16,59 @@ for the definitions of the support status levels etc.
 
 # Feature Support
 
+## Host Architecture
+
+### x86-64
+
+    Status: Supported
+
+## Host hardware support
+
+### Physical CPU Hotplug
+
+    Status, x86: Supported
+
+### Physical Memory Hotplug
+
+    Status, x86: Supported
+
+### Host ACPI (via Domain 0)
+
+    Status, x86 PV: Supported
+    Status, x86 PVH: Tech preview
+
+### x86/Intel Platform QoS Technologies
+
+    Status: Tech Preview
+
+## Guest Type
+
+### x86/PV
+
+    Status: Supported
+
+Traditional Xen PV guest
+
+No hardware requirements
+
+### x86/HVM
+
+    Status: Supported
+
+Fully virtualised guest using hardware virtualisation extensions
+
+Requires hardware virtualisation support (Intel VMX / AMD SVM)
+
+### x86/PVH guest
+
+    Status: Supported
+
+PVH is a next-generation paravirtualized mode 
+designed to take advantage of hardware virtualization support when possible.
+During development this was sometimes called HVMLite or PVHv2.
+
+Requires hardware virtualisation support (Intel VMX / AMD SVM)
+
 ## Memory Management
 
 ### Memory Ballooning
-- 
2.15.0



* [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
  2017-11-13 15:41 ` [PATCH 02/16] SUPPORT.md: Add core functionality George Dunlap
  2017-11-13 15:41 ` [PATCH 03/16] SUPPORT.md: Add some x86 features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:11   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 05/16] SUPPORT.md: Toolstack core George Dunlap
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Julien Grall, Jan Beulich,
	Ian Jackson

Hardware support and guest type.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 6b09f98331..7c01d8cf9a 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -22,6 +22,14 @@ for the definitions of the support status levels etc.
 
     Status: Supported
 
+### ARM v7 + Virtualization Extensions
+
+    Status: Supported
+
+### ARM v8
+
+    Status: Supported
+
 ## Host hardware support
 
 ### Physical CPU Hotplug
@@ -36,11 +44,26 @@ for the definitions of the support status levels etc.
 
     Status, x86 PV: Supported
     Status, x86 PVH: Tech preview
+    Status, ARM: Experimental
 
 ### x86/Intel Platform QoS Technologies
 
     Status: Tech Preview
 
+### ARM/SMMUv1
+
+    Status: Supported
+
+### ARM/SMMUv2
+
+    Status: Supported
+
+### ARM/GICv3 ITS
+
+    Status: Experimental
+
+Extension to the GICv3 interrupt controller to support MSI.
+
 ## Guest Type
 
 ### x86/PV
@@ -69,6 +92,12 @@ During development this was sometimes called HVMLite or PVHv2.
 
 Requires hardware virtualisation support (Intel VMX / AMD SVM)
 
+### ARM guest
+
+    Status: Supported
+
+ARM only has one guest type at the moment
+
 ## Memory Management
 
 ### Memory Ballooning
-- 
2.15.0



* [PATCH 05/16] SUPPORT.md: Toolstack core
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (2 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 04/16] SUPPORT.md: Add core ARM features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-13 15:41 ` [PATCH 06/16] SUPPORT.md: Add scalability features George Dunlap
                   ` (12 subsequent siblings)
  16 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson

For now only include xl-specific features, or interaction with the
system.  Feature support matrix will be added when features are
mentioned.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
---
 SUPPORT.md | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 7c01d8cf9a..c884fac7f5 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -98,6 +98,44 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM)
 
 ARM only has one guest type at the moment
 
+## Toolstack
+
+### xl
+
+    Status: Supported
+
+### Direct-boot kernel image format
+
+    Supported, x86: bzImage
+    Supported, ARM32: zImage
+    Supported, ARM64: Image
+
+Formats which the toolstack accepts for direct-boot kernels
+
+### systemd support for xl
+
+    Status: Supported
+
+### JSON output support for xl
+
+    Status: Experimental
+
+Output of information in machine-parseable JSON format
+
+### Open vSwitch integration for xl
+
+    Status, Linux: Supported
+
+### Virtual cpu hotplug
+
+    Status: Supported
+
+## Toolstack/3rd party
+
+### libvirt driver for xl
+
+    Status: Supported, Security support external
+
 ## Memory Management
 
 ### Memory Ballooning
-- 
2.15.0



* [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (3 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 05/16] SUPPORT.md: Toolstack core George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-16 15:19   ` Julien Grall
  2017-11-21  8:16   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86 George Dunlap
                   ` (11 subsequent siblings)
  16 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Julien Grall, Jan Beulich,
	Ian Jackson

Superpage support and PVHVM.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index c884fac7f5..a8c56d13dd 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -195,6 +195,27 @@ on embedded platforms.
 
 Enables NUMA aware scheduling in Xen
 
+## Scalability
+
+### 1GB/2MB super page support
+
+    Status, x86 HVM/PVH: Supported
+    Status, ARM: Supported
+
+NB that this refers to the ability of guests
+to have higher-level page table entries point directly to memory,
+improving TLB performance.
+This is independent of the ARM "page granularity" feature (see below).
+
+### x86/PVHVM
+
+    Status: Supported
+
+This is a useful label for a set of hypervisor features
+which add paravirtualized functionality to HVM guests 
+for improved performance and scalability.
+This includes exposing event channels to HVM guests.
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
-- 
2.15.0



* [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (4 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 06/16] SUPPORT.md: Add scalability features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:29   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware George Dunlap
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Julien Grall, Paul Durrant,
	Jan Beulich, Anthony Perard, Ian Jackson, Roger Pau Monne

Mostly PV protocols.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
The xl side of this seems a bit incomplete: There are a number of
things supported but not mentioned (like networking, &c), and a number
of things not in xl (PV SCSI).  Couldn't find evidence of pvcall or pv
keyboard support.  Also we seem to be missing "PV channels" from this
list entirely

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 160 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 160 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index a8c56d13dd..20c58377a5 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -130,6 +130,22 @@ Output of information in machine-parseable JSON format
 
     Status: Supported
 
+### Qemu based disk backend (qdisk) for xl
+
+    Status: Supported
+
+### PV USB support for xl
+
+    Status: Supported
+
+### PV 9pfs support for xl
+
+    Status: Tech Preview
+
+### QEMU backend hotplugging for xl
+
+    Status: Supported
+
 ## Toolstack/3rd party
 
 ### libvirt driver for xl
@@ -216,6 +232,150 @@ which add paravirtualized functionality to HVM guests
 for improved performance and scalability.
 This includes exposing event channels to HVM guests.
 
+## Virtual driver support, guest side
+
+### Blkfront
+
+    Status, Linux: Supported
+    Status, FreeBSD: Supported, Security support external
+    Status, NetBSD: Supported, Security support external
+    Status, Windows: Supported
+
+Guest-side driver capable of speaking the Xen PV block protocol
+
+### Netfront
+
+    Status, Linux: Supported
+    Status, Windows: Supported
+    Status, FreeBSD: Supported, Security support external
+    Status, NetBSD: Supported, Security support external
+    Status, OpenBSD: Supported, Security support external
+
+Guest-side driver capable of speaking the Xen PV networking protocol
+
+### PV Framebuffer (frontend)
+
+    Status, Linux (xen-fbfront): Supported
+
+Guest-side driver capable of speaking the Xen PV Framebuffer protocol
+
+### PV Console (frontend)
+
+    Status, Linux (hvc_xen): Supported
+    Status, Windows: Supported
+    Status, FreeBSD: Supported, Security support external
+    Status, NetBSD: Supported, Security support external
+
+Guest-side driver capable of speaking the Xen PV console protocol
+
+### PV keyboard (frontend)
+
+    Status, Linux (xen-kbdfront): Supported
+    Status, Windows: Supported
+
+Guest-side driver capable of speaking the Xen PV keyboard protocol
+
+[XXX 'Supported' here depends on the version we ship in 4.10 having some fixes]
+
+### PV USB (frontend)
+
+    Status, Linux: Supported
+
+### PV SCSI protocol (frontend)
+
+    Status, Linux: Supported, with caveats
+
+NB that while the PV SCSI backend is in Linux and tested regularly,
+there is currently no xl support.
+
+### PV TPM (frontend)
+
+    Status, Linux (xen-tpmfront): Tech Preview
+
+Guest-side driver capable of speaking the Xen PV TPM protocol
+
+### PV 9pfs frontend
+
+    Status, Linux: Tech Preview
+
+Guest-side driver capable of speaking the Xen 9pfs protocol
+
+### PVCalls (frontend)
+
+    Status, Linux: Tech Preview
+
+Guest-side driver capable of making pv system calls
+
+Note that there is currently no xl support for pvcalls.
+
+## Virtual device support, host side
+
+### Blkback
+
+    Status, Linux (blkback): Supported
+    Status, FreeBSD (blkback): Supported, Security support external
+    Status, NetBSD (xbdback): Supported, Security support external
+    Status, QEMU (xen_disk): Supported
+    Status, Blktap2: Deprecated
+
+Host-side implementations of the Xen PV block protocol
+
+### Netback
+
+    Status, Linux (netback): Supported
+    Status, FreeBSD (netback): Supported, Security support external
+    Status, NetBSD (xennetback): Supported, Security support external
+
+Host-side implementations of the Xen PV network protocol
+
+### PV Framebuffer (backend)
+
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV framebuffer protocol
+
+### PV Console (xenconsoled)
+
+    Status: Supported
+
+Host-side implementation of the Xen PV console protocol
+
+### PV keyboard (backend)
+
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV keyboard protocol
+
+### PV USB (backend)
+
+    Status, Linux: Experimental
+    Status, QEMU: Supported
+
+Host-side implementation of the Xen PV USB protocol
+
+### PV SCSI protocol (backend)
+
+    Status, Linux: Supported, with caveats
+
+NB that while the PV SCSI backend is in Linux and tested regularly,
+there is currently no xl support.
+
+### PV TPM (backend)
+
+    Status: Tech Preview
+
+### PV 9pfs (backend)
+
+    Status, QEMU: Tech Preview
+
+### PVCalls (backend)
+
+    Status, Linux: Tech Preview
+
+### Online resize of virtual disks
+
+    Status: Supported
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
-- 
2.15.0



* [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (5 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86 George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:39   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 09/16] SUPPORT.md: Add ARM-specific " George Dunlap
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Paul Durrant, Jan Beulich,
	Anthony Perard, Ian Jackson, Roger Pau Monne

x86-specific virtual hardware provided by the hypervisor, toolstack,
or QEMU.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Added emulated QEMU support, to replace docs/misc/qemu-xen-security.

Need to figure out what to do with the "backing storage image format"
section of that document.

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
---
 SUPPORT.md | 106 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 106 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 20c58377a5..b95ee0ebe7 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -376,6 +376,112 @@ there is currently no xl support.
 
     Status: Supported
 
+## Virtual Hardware, Hypervisor
+
+### x86/Nested PV
+
+    Status, x86 HVM: Tech Preview
+
+This means running a Xen hypervisor inside an HVM domain,
+with support for PV L2 guests only
+(i.e., hardware virtualization extensions not provided
+to the guest).
+
+This works, but has performance limitations
+because the L1 dom0 can only access emulated L1 devices.
+
+### x86/Nested HVM
+
+    Status, x86 HVM: Experimental
+
+This means running a Xen hypervisor inside an HVM domain,
+with support for running both PV and HVM L2 guests
+(i.e., hardware virtualization extensions provided
+to the guest).
+
+### x86/Advanced Vector eXtension
+
+    Status: Supported
+
+### vPMU
+
+    Status, x86: Supported, Not security supported
+
+Virtual Performance Management Unit for HVM guests
+
+Disabled by default (enable with hypervisor command line option).
+This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
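+
+For example, on a GRUB-based system the hypervisor command line might be extended as follows; the exact option spelling is an assumption here and should be checked against docs/misc/xen-command-line.markdown for the Xen version in use:

```sh
# Illustrative /etc/default/grub fragment (assumption: a boolean "vpmu"
# hypervisor command-line option; verify against your Xen's documentation).
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT vpmu=on"
```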
+
+## Virtual Hardware, QEMU
+
+These are devices available in HVM mode using a QEMU device model (the default).
+Note that other devices are available but not security supported.
+
+### x86/Emulated platform devices (QEMU):
+
+    Status, piix3: Supported
+
+### x86/Emulated network (QEMU):
+
+    Status, e1000: Supported
+    Status, rtl8139: Supported
+    Status, virtio-net: Supported
+
+### x86/Emulated storage (QEMU):
+
+    Status, piix3 ide: Supported
+    Status, ahci: Supported
+
+### x86/Emulated graphics (QEMU):
+
+    Status, cirrus-vga: Supported
+    Status, stdvga: Supported
+
+### x86/Emulated audio (QEMU):
+
+    Status, sb16: Supported
+    Status, es1370: Supported
+    Status, ac97: Supported
+
+### x86/Emulated input (QEMU):
+
+    Status, usbmouse: Supported
+    Status, usbtablet: Supported
+    Status, ps/2 keyboard: Supported
+    Status, ps/2 mouse: Supported
+    
+### x86/Emulated serial card (QEMU):
+
+    Status, UART 16550A: Supported
+
+### x86/Host USB passthrough (QEMU):
+
+    Status: Supported, Not security supported
+
+## Virtual Firmware
+
+### x86/HVM iPXE
+
+    Status: Supported, with caveats
+
+Booting a guest via PXE.
+PXE inherently places full trust of the guest in the network,
+and so should only be used
+when the guest network is under the same administrative control
+as the guest itself.
+
+### x86/HVM BIOS
+
+    Status: Supported
+
+Booting a guest via guest BIOS firmware
+
+### x86/HVM EFI
+
+    Status: Supported
+
+Booting a guest via guest EFI firmware
+
 # Format and definitions
 
 This file contains prose, and machine-readable fragments.
-- 
2.15.0



* [PATCH 09/16] SUPPORT.md: Add ARM-specific virtual hardware
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (6 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-16 15:41   ` Julien Grall
  2017-11-16 15:41   ` Julien Grall
  2017-11-13 15:41 ` [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem George Dunlap
                   ` (8 subsequent siblings)
  16 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Julien Grall, Jan Beulich,
	Ian Jackson

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Do we need to add anything more here?

And do we need to include ARM ACPI for guests?

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index b95ee0ebe7..8235336c41 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -412,6 +412,16 @@ Virtual Performance Management Unit for HVM guests
 Disabled by default (enable with hypervisor command line option).
 This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
 
+### ARM/Non-PCI device passthrough
+
+    Status: Supported
+
+### ARM: 16K and 64K page granularity in guests
+
+    Status: Supported, with caveats
+
+No support for QEMU backends in a 16K or 64K domain.
+
 ## Virtual Hardware, QEMU
 
 These are devices available in HVM mode using a qemu devicemodel (the default).
-- 
2.15.0



* [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (7 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 09/16] SUPPORT.md: Add ARM-specific " George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:48   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features George Dunlap
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
---
 SUPPORT.md | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 8235336c41..bd83c81557 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
 
     Status: Supported, Security support external
 
+## Debugging, analysis, and crash post-mortem
+
+### gdbsx
+
+    Status, x86: Supported
+
+Debugger to debug ELF guests
+
+### Soft-reset for PV guests
+
+    Status: Supported
+
+Soft-reset allows a new kernel to start 'from scratch' with a fresh VM state,
+but with all the memory from the previous state of the VM intact.
+This is primarily designed to allow "crash kernels", 
+which can do core dumps of memory to help with debugging in the event of a crash.
+
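As an illustrative sketch only (not part of this patch), a guest wanting a crash-kernel flow might request this behaviour in its xl config; the `soft-reset` action for `on_crash` is what triggers it:

```
# Hypothetical guest config fragment: ask Xen to perform a soft reset
# (rather than destroying the domain) when the guest kernel crashes.
on_crash = "soft-reset"
```
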
+### xentrace
+
+    Status, x86: Supported
+
+Tool to capture Xen trace buffer data
+
+### gcov
+
+    Status: Supported, Not security supported
+
+Export hypervisor coverage data suitable for analysis by gcov or lcov.
+
 ## Memory Management
 
 ### Memory Ballooning
-- 
2.15.0



* [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (8 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:49   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 12/16] SUPPORT.md: Add Security-releated features George Dunlap
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson

Migration being one of the key 'non-easy' ones to be added later.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
---
 SUPPORT.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index bd83c81557..722a29fec5 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -261,6 +261,22 @@ which add paravirtualized functionality to HVM guests
 for improved performance and scalability.
 This includes exposing event channels to HVM guests.
 
+## High Availability and Fault Tolerance
+
+### Remus Fault Tolerance
+
+    Status: Experimental
+
+### COLO Manager
+
+    Status: Experimental
+
+### x86/vMCE
+
+    Status: Supported
+
+Forward Machine Check Exceptions to appropriate guests
+
 ## Virtual driver support, guest side
 
 ### Blkfront
-- 
2.15.0



* [PATCH 12/16] SUPPORT.md: Add Security-releated features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (9 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-16 16:23   ` Konrad Rzeszutek Wilk
  2017-11-21  8:52   ` Jan Beulich
  2017-11-13 15:41 ` [PATCH 13/16] SUPPORT.md: Add secondary memory management features George Dunlap
                   ` (5 subsequent siblings)
  16 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Rich Persaud, Jan Beulich,
	Tamas K Lengyel, Ian Jackson

With the exception of driver domains, which depend on PCI passthrough,
and will be introduced later.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
CC: Rich Persaud <persaur@gmail.com>
---
 SUPPORT.md | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 722a29fec5..0f7426593e 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -421,6 +421,40 @@ there is currently no xl support.
 
     Status: Supported
 
+## Security
+
+### Device Model Stub Domains
+
+    Status: Supported
+
+### KCONFIG Expert
+
+    Status: Experimental
+
+### Live Patching
+
+    Status, x86: Supported
+    Status, ARM: Experimental
+
+Compile time disabled for ARM
+
+### Virtual Machine Introspection
+
+    Status, x86: Supported, not security supported
+
+### XSM & FLASK
+
+    Status: Experimental
+
+Compile time disabled
+
+### FLASK default policy
+
+    Status: Experimental
+
+The default policy includes FLASK labels and roles for a "typical" Xen-based system
+with dom0, driver domains, stub domains, domUs, and so on.
+
 ## Virtual Hardware, Hypervisor
 
 ### x86/Nested PV
-- 
2.15.0



* [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (10 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 12/16] SUPPORT.md: Add Security-releated features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  8:54   ` Jan Beulich
  2017-11-21 19:55   ` Andrew Cooper
  2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
                   ` (4 subsequent siblings)
  16 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Tamas K Lengyel,
	Ian Jackson

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
---
 SUPPORT.md | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 0f7426593e..3e352198ce 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis by gcov or lcov.
 
     Status: Supported
 
+### Memory Sharing
+
+    Status, x86 HVM: Tech Preview
+    Status, ARM: Tech Preview
+
+Allow sharing of identical pages between guests
+
+### Memory Paging
+
+    Status, x86 HVM: Experimental
+
+Allow pages belonging to guests to be paged to disk
+
+### Transcendent Memory
+
+    Status: Experimental
+
+Transcendent Memory (tmem) allows the creation of hypervisor memory pools
+which guests can use to store data
+rather than caching it in their own memory or swapping to disk.
+Having these in the hypervisor
+can allow more efficient aggregate use of memory across VMs.
+
+### Alternative p2m
+
+    Status, x86 HVM: Tech Preview
+    Status, ARM: Tech Preview
+
+Allows external monitoring of guest memory
+by maintaining multiple physical to machine (p2m) memory mappings.
+
 ## Resource Management
 
 ### CPU Pools
-- 
2.15.0



* [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (11 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 13/16] SUPPORT.md: Add secondary memory management features George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-14 13:25   ` Marek Marczykowski-Górecki
                     ` (2 more replies)
  2017-11-13 15:41 ` [PATCH 15/16] SUPPORT.md: Add statement on migration RFC George Dunlap
                   ` (3 subsequent siblings)
  16 siblings, 3 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: James McKenzie, Christopher Clark, Stefano Stabellini, Wei Liu,
	Konrad Wilk, Andrew Cooper, Tim Deegan, George Dunlap,
	Marek Marczykowski-Górecki, Rich Persaud, Jan Beulich,
	Ian Jackson

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Rich Persaud <persaur@gmail.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Christopher Clark <christopher.w.clark@gmail.com>
CC: James McKenzie <james.mckenzie@bromium.com>
---
 SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index 3e352198ce..a8388f3dc5 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -454,9 +454,23 @@ there is currently no xl support.
 
 ## Security
 
+### Driver Domains
+
+    Status: Supported, with caveats
+
+"Driver domains" means allowing non-Domain 0 domains 
+with access to physical devices to act as back-ends.
+
+See the appropriate "Device Passthrough" section
+for more information about security support.
+
 ### Device Model Stub Domains
 
-    Status: Supported
+    Status: Supported, with caveats
+
+Vulnerabilities of a device model stub domain 
+to a hostile driver domain (either compromised or untrusted)
+are excluded from security support.
 
 ### KCONFIG Expert
 
@@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
 Disabled by default (enable with hypervisor command line option).
 This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
 
+### x86/PCI Device Passthrough
+
+    Status: Supported, with caveats
+
+Only systems using IOMMUs will be supported.
+
+Not compatible with migration, altp2m, introspection, memory sharing, or memory paging.
+
+Because of hardware limitations
+(affecting any operating system or hypervisor),
+it is generally not safe to use this feature 
+to expose a physical device to completely untrusted guests.
+However, this feature can still confer significant security benefit 
+when used to remove drivers and backends from domain 0
+(i.e., Driver Domains).
+See docs/PCI-IOMMU-bugs.txt for more information.
+
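As a hypothetical illustration (the BDF below is made up, and this snippet is not part of the patch), passthrough with xl is typically configured along these lines:

```
# First make the device assignable (detaching it from its dom0 driver):
#   xl pci-assignable-add 0000:03:00.0
# then list it in the guest's config file:
pci = [ '0000:03:00.0' ]
```
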
 ### ARM/Non-PCI device passthrough
 
     Status: Supported
-- 
2.15.0



* [PATCH 15/16] SUPPORT.md: Add statement on migration RFC
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (12 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-13 15:41 ` [PATCH 16/16] SUPPORT.md: Add limits RFC George Dunlap
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Julien Grall, Paul Durrant,
	Jan Beulich, Anthony Perard, Ian Jackson, Roger Pau Monne

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Would someone be willing to take over this one?

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Julien Grall <julien.grall@arm.com>
---
 SUPPORT.md | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index a8388f3dc5..e72f9f3892 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -294,6 +294,36 @@ This includes exposing event channels to HVM guests.
 
 ## High Availability and Fault Tolerance
 
+### Live Migration, Save & Restore
+
+    Status, x86: Supported, with caveats
+
+A number of features don't work with live migration / save / restore.  These include:
+ * PCI passthrough
+ * vNUMA
+ * Nested HVM
+
+XXX Need to check the following:
+ 
+ * Guest serial console
+ * Crash kernels
+ * Transcendent Memory
+ * Alternative p2m
+ * vMCE
+ * vPMU
+ * Intel Platform QoS
+ * Remus
+ * COLO
+ * PV protocols: Keyboard, PVUSB, PVSCSI, PVTPM, 9pfs, pvcalls?
+ * FLASK?
+ * CPU / memory hotplug?
+
+Additionally, if an HVM guest was booted with memory != maxmem,
+and the balloon driver hadn't hit the target before migration,
+the size of the guest on the far side might be unexpected.
+
+See docs/features/migration.pandoc for more details
+
 ### Remus Fault Tolerance
 
     Status: Experimental
-- 
2.15.0



* [PATCH 16/16] SUPPORT.md: Add limits RFC
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (13 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 15/16] SUPPORT.md: Add statement on migration RFC George Dunlap
@ 2017-11-13 15:41 ` George Dunlap
  2017-11-21  9:26   ` Jan Beulich
  2017-11-13 15:43 ` [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
  2017-11-20 17:01 ` Jan Beulich
  16 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:41 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, Jan Beulich, Ian Jackson

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
Could someone take this one over as well?

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Tim Deegan <tim@xen.org>
---
 SUPPORT.md | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index e72f9f3892..d11e05fc2a 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -64,6 +64,53 @@ for the definitions of the support status levels etc.
 
 Extension to the GICv3 interrupt controller to support MSI.
 
+## Limits/Host
+
+### CPUs
+
+    Limit, x86: 4095
+    Limit, ARM32: 8
+    Limit, ARM64: 128
+
+Note that for x86, a very large number of CPUs may not work or boot,
+but we will still provide security support.
+
+### x86/RAM
+
+    Limit, x86: 123TiB
+    Limit, ARM32: 16GiB
+    Limit, ARM64: 5TiB
+
+## Limits/Guest
+
+### Virtual CPUs
+
+    Limit, x86 PV: 8192
+    Limit-security, x86 PV: 32
+    Limit, x86 HVM: 128
+    Limit-security, x86 HVM: 32
+    Limit, ARM32: 8
+    Limit, ARM64: 128
+
+### Virtual RAM
+
+    Limit-security, x86 PV: 2047GiB
+    Limit-security, x86 HVM: 1.5TiB
+    Limit, ARM32: 16GiB
+    Limit, ARM64: 1TiB
+
+Note that there are no theoretical limits to PV or HVM guest sizes
+other than those determined by the processor architecture.
+
+### Event Channel 2-level ABI
+
+    Limit, 32-bit: 1024
+    Limit, 64-bit: 4096
+
+### Event Channel FIFO ABI
+
+    Limit: 131072
+
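A quick sanity check of these figures (illustrative only): the 2-level ABI uses a word of words as its bitmap, so its limit is the word size squared, while the FIFO ABI has a fixed 2^17 channels:

```python
# 2-level ABI: one selector word, each bit of which points at a word of
# event bits, so the limit is (bits per word) squared.
def two_level_limit(bits_per_word):
    return bits_per_word * bits_per_word

assert two_level_limit(32) == 1024    # 32-bit guests
assert two_level_limit(64) == 4096    # 64-bit guests

# FIFO ABI: a fixed number of channels, 2**17.
assert 2 ** 17 == 131072
print("limits check out")
```
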
 ## Guest Type
 
 ### x86/PV
@@ -685,6 +732,20 @@ If support differs based on implementation
 (for instance, x86 / ARM, Linux / QEMU / FreeBSD),
 one line for each set of implementations will be listed.
 
+### Limit-security
+
+For size limits.
+This figure shows the largest configuration which will receive
+security support.
+It is generally determined by the maximum amount that is regularly tested.
+This limit will only be listed explicitly
+if it is different from the theoretical limit.
+
+### Limit
+
+This figure shows a theoretical size limit.
+This does not mean that such a large configuration will actually work.
+
 ## Definition of Status labels
 
 Each Status value corresponds to levels of security support,
-- 
2.15.0



* Re: [PATCH 01/16] Introduce skeleton SUPPORT.md
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (14 preceding siblings ...)
  2017-11-13 15:41 ` [PATCH 16/16] SUPPORT.md: Add limits RFC George Dunlap
@ 2017-11-13 15:43 ` George Dunlap
  2017-11-20 17:01 ` Jan Beulich
  16 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-13 15:43 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Dario Faggioli, Tim Deegan, Julien Grall, Paul Durrant,
	Jan Beulich, Tamas K Lengyel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

On 11/13/2017 03:41 PM, George Dunlap wrote:
> Add a machine-readable file to describe what features are in what
> state of being 'supported', as well as information about how long this
> release will be supported, and so on.
> 
> The document should be formatted using "semantic newlines" [1], to make
> changes easier.
> 
> Begin with the basic framework.
> 
> Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Sending this series out slightly unfinished, as I've gotten diverted
with some security issues.

I think patches 1-14 should be mostly ready.  Patches 15 and 16 both
need some work; if anyone could pick them up I'd appreciate it.

Thanks,
 -George


* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
@ 2017-11-14 13:25   ` Marek Marczykowski-Górecki
  2017-11-22 17:18     ` George Dunlap
  2017-11-16 15:43   ` Julien Grall
  2017-11-21  8:59   ` Jan Beulich
  2 siblings, 1 reply; 90+ messages in thread
From: Marek Marczykowski-Górecki @ 2017-11-14 13:25 UTC (permalink / raw)
  To: George Dunlap
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark, Rich Persaud,
	Jan Beulich, Ian Jackson, xen-devel



On Mon, Nov 13, 2017 at 03:41:24PM +0000, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Rich Persaud <persaur@gmail.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> CC: Christopher Clark <christopher.w.clark@gmail.com>
> CC: James McKenzie <james.mckenzie@bromium.com>
> ---
>  SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
>  1 file changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 3e352198ce..a8388f3dc5 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md

(...)

> @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>  Disabled by default (enable with hypervisor command line option).
>  This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
>  
> +### x86/PCI Device Passthrough
> +
> +    Status: Supported, with caveats
> +
> +Only systems using IOMMUs will be supported.

s/will be/are/ ?

> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or memory paging.
> +
> +Because of hardware limitations
> +(affecting any operating system or hypervisor),
> +it is generally not safe to use this feature 
> +to expose a physical device to completely untrusted guests.
> +However, this feature can still confer significant security benefit 
> +when used to remove drivers and backends from domain 0
> +(i.e., Driver Domains).
> +See docs/PCI-IOMMU-bugs.txt for more information.
> +
>  ### ARM/Non-PCI device passthrough
>  
>      Status: Supported

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-13 15:41 ` [PATCH 06/16] SUPPORT.md: Add scalability features George Dunlap
@ 2017-11-16 15:19   ` Julien Grall
  2017-11-16 15:30     ` George Dunlap
  2017-11-21 16:43     ` George Dunlap
  2017-11-21  8:16   ` Jan Beulich
  1 sibling, 2 replies; 90+ messages in thread
From: Julien Grall @ 2017-11-16 15:19 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

Hi George,

On 13/11/17 15:41, George Dunlap wrote:
> Superpage support and PVHVM.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> ---
>   SUPPORT.md | 21 +++++++++++++++++++++
>   1 file changed, 21 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index c884fac7f5..a8c56d13dd 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -195,6 +195,27 @@ on embedded platforms.
>   
>   Enables NUMA aware scheduling in Xen
>   
> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +    Status, x86 HVM/PVH: : Supported
> +    Status, ARM: Supported
> +
> +NB that this refers to the ability of guests
> +to have higher-level page table entries point directly to memory,
> +improving TLB performance.
> +This is independent of the ARM "page granularity" feature (see below).

I am not entirely sure about this paragraph for Arm. I understood this
section as support for stage-2 page tables (aka EPT on x86), but the
paragraph led me to believe it is for guests.

The size of superpages in guests will depend on the page granularity
used by the guest itself and the format of its page tables (e.g. LPAE
vs. short descriptor). We have no control over that.

What we do have control over is the size of the mappings used for the
stage-2 page tables.

> +
> +### x86/PVHVM
> +
> +    Status: Supported
> +
> +This is a useful label for a set of hypervisor features
> +which add paravirtualized functionality to HVM guests
> +for improved performance and scalability.
> +This includes exposing event channels to HVM guests.
> +
>   # Format and definitions
>   
>   This file contains prose, and machine-readable fragments.
> 

Cheers,

-- 
Julien Grall


* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-16 15:19   ` Julien Grall
@ 2017-11-16 15:30     ` George Dunlap
  2017-11-21 16:43     ` George Dunlap
  1 sibling, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-16 15:30 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

On 11/16/2017 03:19 PM, Julien Grall wrote:
> Hi George,
> 
> On 13/11/17 15:41, George Dunlap wrote:
>> Superpage support and PVHVM.
>>
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> ---
>>   SUPPORT.md | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index c884fac7f5..a8c56d13dd 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -195,6 +195,27 @@ on embedded platforms.
>>     Enables NUMA aware scheduling in Xen
>>   +## Scalability
>> +
>> +### 1GB/2MB super page support
>> +
>> +    Status, x86 HVM/PVH: : Supported
>> +    Status, ARM: Supported
>> +
>> +NB that this refers to the ability of guests
>> +to have higher-level page table entries point directly to memory,
>> +improving TLB performance.
>> +This is independent of the ARM "page granularity" feature (see below).
> 
> I am not entirely sure about this paragraph for Arm. I understood this
> section as support for stage-2 page-table (aka EPT on x86) but the
> paragraph lead me to believe to it is for guest.

Hmm, yes likely there was some confusion when this was listed.  We
probably should make separate entries for HAP / stage 2 superpage
support and guest PT superpage support.

 -George


* Re: [PATCH 09/16] SUPPORT.md: Add ARM-specific virtual hardware
  2017-11-13 15:41 ` [PATCH 09/16] SUPPORT.md: Add ARM-specific " George Dunlap
@ 2017-11-16 15:41   ` Julien Grall
  2017-11-22 16:32     ` George Dunlap
  2017-11-16 15:41   ` Julien Grall
  1 sibling, 1 reply; 90+ messages in thread
From: Julien Grall @ 2017-11-16 15:41 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

Hi George,

On 13/11/17 15:41, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Do we need to add anything more here?
> 
> And do we need to include ARM ACPI for guests?
> 
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> ---
>   SUPPORT.md | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index b95ee0ebe7..8235336c41 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -412,6 +412,16 @@ Virtual Performance Management Unit for HVM guests
>   Disabled by default (enable with hypervisor command line option).
>   This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
>   
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported

Sorry I didn't notice that until now. I am not comfortable saying
"Supported" without any caveats.

As with PCI device passthrough, you at least need an IOMMU present on
the platform. Sadly, that does not mean all DMA-capable devices on the
platform will be protected by the IOMMU. This is also assuming the
IOMMU does sane things.

There are potentially other problems coming up with MSI support. But I
haven't yet fully thought about them.

> +
> +### ARM: 16K and 64K page granularity in guests
> +
> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.
> +
>   ## Virtual Hardware, QEMU
>   
>   These are devices available in HVM mode using a qemu devicemodel (the default).
> 

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 90+ messages in thread
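
The `Status:` stanzas being debated throughout this thread follow a simple convention — a `###` feature heading followed by indented `Status[, qualifier]: value` lines — which is what makes SUPPORT.md machine-readable. A minimal parsing sketch (illustrative only; the exact grammar was still under discussion at this point in the thread):

```python
import re

def parse_support_md(text):
    """Map '### Feature' headings to {qualifier: status} from SUPPORT.md-style text."""
    features = {}
    current = None
    for line in text.splitlines():
        heading = re.match(r'^###\s+(.+)', line)
        if heading:
            current = heading.group(1).strip()
            features[current] = {}
            continue
        # Four-space-indented "Status[, qualifier]: value" lines belong
        # to the most recent feature heading.
        status = re.match(r'^\s{4}Status(?:,\s*([^:]+))?\s*:\s*(.+)', line)
        if status and current:
            qualifier = (status.group(1) or 'all').strip()
            features[current][qualifier] = status.group(2).strip()
    return features

doc = """### ARM: 16K and 64K page granularity in guests

    Status: Supported, with caveats

### Live Patching

    Status, x86: Supported
    Status, ARM: Experimental
"""
parsed = parse_support_md(doc)
print(parsed["Live Patching"])  # {'x86': 'Supported', 'ARM': 'Experimental'}
```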

* Re: [PATCH 09/16] SUPPORT.md: Add ARM-specific virtual hardware
  2017-11-13 15:41 ` [PATCH 09/16] SUPPORT.md: Add ARM-specific " George Dunlap
  2017-11-16 15:41   ` Julien Grall
@ 2017-11-16 15:41   ` Julien Grall
  1 sibling, 0 replies; 90+ messages in thread
From: Julien Grall @ 2017-11-16 15:41 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

Hi George,

On 13/11/17 15:41, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> Do we need to add anything more here?
> 
> And do we need to include ARM ACPI for guests?

I don't have any opinion here. However, if we decide to include it, then 
we should also include Device-Tree.

> 
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> ---
>   SUPPORT.md | 10 ++++++++++
>   1 file changed, 10 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index b95ee0ebe7..8235336c41 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -412,6 +412,16 @@ Virtual Performance Management Unit for HVM guests
>   Disabled by default (enable with hypervisor command line option).
>   This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
>   
> +### ARM/Non-PCI device passthrough
> +
> +    Status: Supported
> +
> +### ARM: 16K and 64K page granularity in guests
> +
> +    Status: Supported, with caveats
> +
> +No support for QEMU backends in a 16K or 64K domain.
> +
>   ## Virtual Hardware, QEMU
>   
>   These are devices available in HVM mode using a qemu devicemodel (the default).
> 

-- 
Julien Grall


* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
  2017-11-14 13:25   ` Marek Marczykowski-Górecki
@ 2017-11-16 15:43   ` Julien Grall
  2017-11-22 18:58     ` George Dunlap
  2017-11-21  8:59   ` Jan Beulich
  2 siblings, 1 reply; 90+ messages in thread
From: Julien Grall @ 2017-11-16 15:43 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark,
	Marek Marczykowski-Górecki, Rich Persaud, Jan Beulich,
	Ian Jackson

Hi George,

On 13/11/17 15:41, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Rich Persaud <persaur@gmail.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> CC: Christopher Clark <christopher.w.clark@gmail.com>
> CC: James McKenzie <james.mckenzie@bromium.com>
> ---
>   SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
>   1 file changed, 32 insertions(+), 1 deletion(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 3e352198ce..a8388f3dc5 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -454,9 +454,23 @@ there is currently no xl support.
>   
>   ## Security
>   
> +### Driver Domains
> +
> +    Status: Supported, with caveats
> +
> +"Driver domains" means allowing non-Domain 0 domains
> +with access to physical devices to act as back-ends.
> +
> +See the appropriate "Device Passthrough" section
> +for more information about security support.
> +
>   ### Device Model Stub Domains
>   
> -    Status: Supported
> +    Status: Supported, with caveats
> +
> +Vulnerabilities of a device model stub domain
> +to a hostile driver domain (either compromised or untrusted)
> +are excluded from security support.
>   
>   ### KCONFIG Expert
>   
> @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>   Disabled by default (enable with hypervisor command line option).
>   This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
>   
> +### x86/PCI Device Passthrough
> +
> +    Status: Supported, with caveats
> +
> +Only systems using IOMMUs will be supported.
> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or memory paging.
> +
> +Because of hardware limitations
> +(affecting any operating system or hypervisor),
> +it is generally not safe to use this feature
> +to expose a physical device to completely untrusted guests.
> +However, this feature can still confer significant security benefit
> +when used to remove drivers and backends from domain 0
> +(i.e., Driver Domains).
> +See docs/PCI-IOMMU-bugs.txt for more information.

Where can I find this file? Is it in staging?

Cheers,

-- 
Julien Grall
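
The PCI passthrough statement quoted above maps to a single xl domain-config line; an illustrative fragment (the BDF `01:00.0` is a placeholder, and the device must first be made assignable to guests, e.g. via xen-pciback):

```
# xl domain config fragment: pass host PCI device 0000:01:00.0 to the guest
pci = [ '01:00.0' ]
```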


* Re: [PATCH 12/16] SUPPORT.md: Add Security-releated features
  2017-11-13 15:41 ` [PATCH 12/16] SUPPORT.md: Add Security-releated features George Dunlap
@ 2017-11-16 16:23   ` Konrad Rzeszutek Wilk
  2017-11-21  8:52   ` Jan Beulich
  1 sibling, 0 replies; 90+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-11-16 16:23 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Andrew Cooper,
	Tim Deegan, Rich Persaud, Jan Beulich, Ian Jackson, xen-devel

On Mon, Nov 13, 2017 at 03:41:22PM +0000, George Dunlap wrote:
> With the exception of driver domains, which depend on PCI passthrough,
> and will be introduced later.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>

Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[the livepatching part]

> CC: Tim Deegan <tim@xen.org>
> CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
> CC: Rich Persaud <persaur@gmail.com>
> ---
>  SUPPORT.md | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 722a29fec5..0f7426593e 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -421,6 +421,40 @@ there is currently no xl support.
>  
>      Status: Supported
>  
> +## Security
> +
> +### Device Model Stub Domains
> +
> +    Status: Supported
> +
> +### KCONFIG Expert
> +
> +    Status: Experimental
> +
> +### Live Patching
> +
> +    Status, x86: Supported
> +    Status, ARM: Experimental
> +
> +Compile time disabled for ARM
> +
> +### Virtual Machine Introspection
> +
> +    Status, x86: Supported, not security supported
> +
> +### XSM & FLASK
> +
> +    Status: Experimental
> +
> +Compile time disabled
> +
> +### FLASK default policy
> +
> +    Status: Experimental
> +    
> +The default policy includes FLASK labels and roles for a "typical" Xen-based system
> +with dom0, driver domains, stub domains, domUs, and so on.
> +
>  ## Virtual Hardware, Hypervisor
>  
>  ### x86/Nested PV
> -- 
> 2.15.0
> 


* Re: [PATCH 01/16] Introduce skeleton SUPPORT.md
  2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
                   ` (15 preceding siblings ...)
  2017-11-13 15:43 ` [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
@ 2017-11-20 17:01 ` Jan Beulich
  16 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-20 17:01 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Dario Faggioli, Tim Deegan, Julien Grall,
	Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> Add a machine-readable file to describe what features are in what
> state of being 'supported', as well as information about how long this
> release will be supported, and so on.
> 
> The document should be formatted using "semantic newlines" [1], to make
> changes easier.
> 
> Begin with the basic framework.
> 
> Signed-off-by: Ian Jackson <ian.jackson@citrix.com>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
despite ...

> +We also provide security support for Xen-related code in Linux,
> +which is an external project but doesn't have its own security process.

... not fully agreeing with this part. But at least this way the state
of things is properly spelled out in a sufficiently official place.

Jan



* Re: [PATCH 02/16] SUPPORT.md: Add core functionality
  2017-11-13 15:41 ` [PATCH 02/16] SUPPORT.md: Add core functionality George Dunlap
@ 2017-11-21  8:03   ` Jan Beulich
  2017-11-21 10:36     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:03 UTC (permalink / raw)
  To: George Dunlap
  Cc: Nathan Studer, Wei Liu, Andrew Cooper, Dario Faggioli,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>  
>  # Feature Support
>  
> +## Memory Management
> +
> +### Memory Ballooning
> +
> +    Status: Supported

Is this a proper feature in the context we're talking about? To me
it's meaningful in guest OS context only. I also wouldn't really
consider it "core", but placement within the series clearly is a minor
aspect.

I'd prefer this to be dropped altogether as a feature, but
Acked-by: Jan Beulich <jbeulich@suse.com>
is independent of that.

> +### Credit2 Scheduler
> +
> +    Status: Supported

Sort of unrelated, but with this having been the case since 4.8 as it
looks, is there a reason it still isn't the default scheduler?

Jan



* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-13 15:41 ` [PATCH 03/16] SUPPORT.md: Add some x86 features George Dunlap
@ 2017-11-21  8:09   ` Jan Beulich
  2017-11-21 10:42     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:09 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel, Roger Pau Monne

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### Host ACPI (via Domain 0)
> +
> +    Status, x86 PV: Supported
> +    Status, x86 PVH: Tech preview

Are we this far already? Preview implies functional completeness,
but I'm not sure about all ACPI-related parts actually having been
implemented (and see also below). But perhaps things like P and C
state handling come as individual features later on.

> +### x86/PVH guest
> +
> +    Status: Supported
> +
> +PVH is a next-generation paravirtualized mode 
> +designed to take advantage of hardware virtualization support when possible.
> +During development this was sometimes called HVMLite or PVHv2.
> +
> +Requires hardware virtualisation support (Intel VMX / AMD SVM)

I think it needs to be said that only DomU is considered supported.
Dom0 is perhaps not even experimental at this point, considering
the panic() in dom0_construct_pvh().

Jan



* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-13 15:41 ` [PATCH 04/16] SUPPORT.md: Add core ARM features George Dunlap
@ 2017-11-21  8:11   ` Jan Beulich
  2017-11-21 10:45     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:11 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### ARM/SMMUv1
> +
> +    Status: Supported
> +
> +### ARM/SMMUv2
> +
> +    Status: Supported

Do these belong here, when IOMMU isn't part of the corresponding
x86 patch?

Jan



* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-13 15:41 ` [PATCH 06/16] SUPPORT.md: Add scalability features George Dunlap
  2017-11-16 15:19   ` Julien Grall
@ 2017-11-21  8:16   ` Jan Beulich
  1 sibling, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:16 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -195,6 +195,27 @@ on embedded platforms.
>  
>  Enables NUMA aware scheduling in Xen
>  
> +## Scalability
> +
> +### 1GB/2MB super page support
> +
> +    Status, x86 HVM/PVH: : Supported

On top of what you and Julien have worked out here already: Don't
we need to clarify here that this is for HAP mode, while shadow mode
doesn't support 1Gb guest pages (and doesn't use 2Mb host pages)?

Jan



* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-13 15:41 ` [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86 George Dunlap
@ 2017-11-21  8:29   ` Jan Beulich
  2017-11-21  9:19     ` Paul Durrant
                       ` (2 more replies)
  0 siblings, 3 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:29 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### PV USB support for xl
> +
> +    Status: Supported
> +
> +### PV 9pfs support for xl
> +
> +    Status: Tech Preview

Why are these two being called out, but xl support for other device
types isn't?

> +### QEMU backend hotplugging for xl
> +
> +    Status: Supported

Wouldn't this more appropriately be

### QEMU backend hotplugging

    Status, xl: Supported

?

> +## Virtual driver support, guest side
> +
> +### Blkfront
> +
> +    Status, Linux: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, NetBSD: Supported, Security support external
> +    Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +    Status, Linux: Supported
> +    States, Windows: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, NetBSD: Supported, Security support external
> +    Status, OpenBSD: Supported, Security support external

Seeing the difference in OSes between the two (with the variance
increasing in entries further down) - what does the absence of an
OS on one list, but its presence on another mean? While not
impossible, I would find it surprising if e.g. OpenBSD had netfront
but not even a basic blkfront.

> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### PV Framebuffer (frontend)
> +
> +    Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +### PV Console (frontend)
> +
> +    Status, Linux (hvc_xen): Supported
> +    Status, Windows: Supported
> +    Status, FreeBSD: Supported, Security support external
> +    Status, NetBSD: Supported, Security support external
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +### PV keyboard (frontend)
> +
> +    Status, Linux (xen-kbdfront): Supported
> +    Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV keyboard protocol

Are these three active/usable in guests regardless of whether the
guest is being run PV, PVH, or HVM? If not, wouldn't this need
spelling out?

> +## Virtual device support, host side
> +
> +### Blkback
> +
> +    Status, Linux (blkback): Supported

Strictly speaking, if the driver name is to be spelled out here in
the first place, it's xen-blkback here and ...

> +    Status, FreeBSD (blkback): Supported, Security support external
> +    Status, NetBSD (xbdback): Supported, security support external
> +    Status, QEMU (xen_disk): Supported
> +    Status, Blktap2: Deprecated
> +
> +Host-side implementations of the Xen PV block protocol
> +
> +### Netback
> +
> +    Status, Linux (netback): Supported

... xen-netback here for the upstream kernels.

> +### PV USB (backend)
> +
> +    Status, Linux: Experimental

What existing/upstream code does this refer to?

Jan



* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-13 15:41 ` [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware George Dunlap
@ 2017-11-21  8:39   ` Jan Beulich
  2017-11-21 18:02     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:39 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### x86/Nested PV
> +
> +    Status, x86 HVM: Tech Preview
> +
> +This means running a Xen hypervisor inside an HVM domain,
> +with support for PV L2 guests only
> +(i.e., hardware virtualization extensions not provided
> +to the guest).
> +
> +This works, but has performance limitations
> +because the L1 dom0 can only access emulated L1 devices.

So does this explicitly mean Xen-on-Xen? Xen-on-KVM, for example,
could be considered "nested PV", too. IOW I think it needs to be
spelled out whether this means the host side of things here, the
guest one, or both.

> +### x86/Nested HVM
> +
> +    Status, x86 HVM: Experimental
> +
> +This means running a Xen hypervisor inside an HVM domain,
> +with support for running both PV and HVM L2 guests
> +(i.e., hardware virtualization extensions provided
> +to the guest).

"Nested HVM" generally means more than using Xen as the L1
hypervisor. If this is really to mean just L1 Xen, I think the title
should already say so, not just the description.

> +### x86/Advanced Vector eXtension
> +
> +    Status: Supported

As indicated before, I think this either needs to be dropped or
be extended by an entry for virtually every CPUID bit exposed
to guests. Furthermore, in this isolated fashion it is not clear
what derived features (e.g. FMA, FMA4, AVX2, or even AVX-512)
it is meant to imply. If any of them are implied, "with caveats"
would need to be added as long as the instruction emulator isn't
capable of handling the instructions, yet.

> +### x86/HVM EFI
> +
> +    Status: Supported
> +
> +Booting a guest via guest EFI firmware

Shouldn't this say OVMF, to avoid covering possible other
implementations?

Jan



* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-13 15:41 ` [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem George Dunlap
@ 2017-11-21  8:48   ` Jan Beulich
  2017-11-21 18:19     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:48 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
>  
>      Status: Supported, Security support external
>  
> +## Debugging, analysis, and crash post-mortem
> +
> +### gdbsx
> +
> +    Status, x86: Supported
> +
> +Debugger to debug ELF guests
> +
> +### Soft-reset for PV guests
> +
> +    Status: Supported
> +    
> +Soft-reset allows a new kernel to start 'from scratch' with a fresh VM state, 
> +but with all the memory from the previous state of the VM intact.
> +This is primarily designed to allow "crash kernels", 
> +which can do core dumps of memory to help with debugging in the event of a crash.
> +
> +### xentrace
> +
> +    Status, x86: Supported
> +
> +Tool to capture Xen trace buffer data
> +
> +### gcov
> +
> +    Status: Supported, Not security supported

I agree with excluding security support here, but why wouldn't the
same be the case for gdbsx and xentrace?

Jan



* Re: [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features
  2017-11-13 15:41 ` [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features George Dunlap
@ 2017-11-21  8:49   ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:49 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### x86/vMCE
> +
> +    Status: Supported
> +
> +Forward Machine Check Exceptions to Appropriate guests

Acked-by: Jan Beulich <jbeulich@suse.com>
perhaps with the A converted to lower case.

Jan



* Re: [PATCH 12/16] SUPPORT.md: Add Security-releated features
  2017-11-13 15:41 ` [PATCH 12/16] SUPPORT.md: Add Security-releated features George Dunlap
  2017-11-16 16:23   ` Konrad Rzeszutek Wilk
@ 2017-11-21  8:52   ` Jan Beulich
  2017-11-22 17:13     ` George Dunlap
  1 sibling, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:52 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Rich Persaud, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> With the exception of driver domains, which depend on PCI passthrough,
> and will be introduced later.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Shouldn't we also explicitly exclude tool stack disaggregation here,
with reference to XSA-77?

Jan



* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-13 15:41 ` [PATCH 13/16] SUPPORT.md: Add secondary memory management features George Dunlap
@ 2017-11-21  8:54   ` Jan Beulich
  2017-11-21 19:55   ` Andrew Cooper
  1 sibling, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:54 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Wouldn't PoD belong here too? With that added as supported on x86
HVM
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
  2017-11-14 13:25   ` Marek Marczykowski-Górecki
  2017-11-16 15:43   ` Julien Grall
@ 2017-11-21  8:59   ` Jan Beulich
  2017-11-22 17:20     ` George Dunlap
  2 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  8:59 UTC (permalink / raw)
  To: George Dunlap
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark,
	Marek Marczykowski-Górecki, Rich Persaud, xen-devel,
	Ian Jackson

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### x86/PCI Device Passthrough
> +
> +    Status: Supported, with caveats

I think this wants to be

### PCI Device Passthrough

    Status, x86 HVM: Supported, with caveats
    Status, x86 PV: Supported, with caveats

to (a) allow later extending for ARM and (b) exclude PVH (assuming
that its absence means non-existing code).

> +Only systems using IOMMUs will be supported.
> +
> +Not compatible with migration, altp2m, introspection, memory sharing, or memory paging.

And PoD, iirc.

With these adjustments (or substantially similar ones)
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21  8:29   ` Jan Beulich
@ 2017-11-21  9:19     ` Paul Durrant
  2017-11-21 10:56     ` George Dunlap
  2017-11-21 17:35     ` George Dunlap
  2 siblings, 0 replies; 90+ messages in thread
From: Paul Durrant @ 2017-11-21  9:19 UTC (permalink / raw)
  To: 'Jan Beulich', George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim (Xen.org),
	Julien Grall, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

> -----Original Message-----
[snip]
> > +### PV keyboard (frontend)
> > +
> > +    Status, Linux (xen-kbdfront): Supported
> > +    Status, Windows: Supported
> > +
> > +Guest-side driver capable of speaking the Xen PV keyboard protocol
> 
> Are these three active/usable in guests regardless of whether the
> guest is being run PV, PVH, or HVM? If not, wouldn't this need
> spelling out?
> 

I believe the necessary patches to make the PV vkbd protocol usable independently of vfb are at least queued for upstream QEMU.

Stefano, am I correct?

Cheers,

  Paul


* Re: [PATCH 16/16] SUPPORT.md: Add limits RFC
  2017-11-13 15:41 ` [PATCH 16/16] SUPPORT.md: Add limits RFC George Dunlap
@ 2017-11-21  9:26   ` Jan Beulich
  2017-11-22 18:01     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21  9:26 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
> +### Virtual CPUs
> +
> +    Limit, x86 PV: 8192
> +    Limit-security, x86 PV: 32
> +    Limit, x86 HVM: 128
> +    Limit-security, x86 HVM: 32

Personally I consider the "Limit-security" numbers too low here, but
I have no proof that higher numbers will work _in all cases_.

> +### Virtual RAM
> +
> +    Limit-security, x86 PV: 2047GiB

I think this needs splitting for 64- and 32-bit (the latter can go up
to 168Gb only on hosts with no memory past the 168Gb boundary,
and up to 128Gb only on larger ones, without this being a processor
architecture limitation).

> +### Event Channel FIFO ABI
> +
> +    Limit: 131072

Are we certain this is a security supportable limit? There is at least
one loop (in get_free_port()) which can potentially have this number
of iterations.
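
The 131072 figure is the FIFO ABI's architectural maximum number of event channels; a quick derivation sketch (the constants are assumed from the FIFO design: one 32-bit event word per channel, 4 KiB pages, up to 128 event-array pages):

```python
# FIFO event-channel ABI limit: one 32-bit word per channel,
# packed into up to 128 event-array pages of 4 KiB each.
PAGE_SIZE = 4096
EVENT_WORD_SIZE = 4                                  # 32-bit event word
EVENTS_PER_PAGE = PAGE_SIZE // EVENT_WORD_SIZE       # 1024 channels per page
MAX_EVENT_ARRAY_PAGES = 128                          # assumed from the FIFO design
FIFO_LIMIT = EVENTS_PER_PAGE * MAX_EVENT_ARRAY_PAGES
print(FIFO_LIMIT)  # 131072
```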

That's already leaving aside the one in the 'e' key handler. Speaking
of which - I think we should state somewhere that there's no security
support if any key whatsoever was sent to Xen via the console or
the sysctl interface.

And more generally - surely there are items that aren't present in
the series and no-one can realistically spot right away. What do we
mean to imply for functionality not covered in the doc? One thing
coming to mind here are certain command line options, an example
being "sync_console" - the description states "not suitable for
production environments", but I think this should be tightened to
exclude security support.

Jan



* Re: [PATCH 02/16] SUPPORT.md: Add core functionality
  2017-11-21  8:03   ` Jan Beulich
@ 2017-11-21 10:36     ` George Dunlap
  2017-11-21 11:34       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 10:36 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Nathan Studer, Wei Liu, Andrew Cooper, Dario Faggioli,
	Tim Deegan, Ian Jackson, xen-devel

On 11/21/2017 08:03 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>  
>>  # Feature Support
>>  
>> +## Memory Management
>> +
>> +### Memory Ballooning
>> +
>> +    Status: Supported
> 
> Is this a proper feature in the context we're talking about? To me
> it's meaningful in guest OS context only. I also wouldn't really
> consider it "core", but placement within the series clearly is a minor
> aspect.
> 
> I'd prefer this to be dropped altogether as a feature, but

This doesn't make any sense to me.  Allowing a guest to modify its own
memory requires a *lot* of support, spread throughout the hypervisor;
and there are a huge number of recent security holes that would have
been much more difficult to exploit if guests didn't have the ability to
balloon up or down.

If what you mean is *specifically* the technique of making a "memory
balloon" to trick the guest OS into handing back memory without knowing
it, then it's just a matter of semantics.  We could call this "dynamic
memory control" or something like that if you prefer (although we'd have
to mention ballooning in the description to make sure people can find it).

> Acked-by: Jan Beulich <jbeulich@suse.com>
> is independent of that.
> 
>> +### Credit2 Scheduler
>> +
>> +    Status: Supported
> 
> Sort of unrelated, but with this having been the case since 4.8 as it
> looks, is there a reason it still isn't the default scheduler?

Well, first of all, it was missing some features which credit1 had:
namely, soft affinity (required for host NUMA awareness) and caps.
These were checked in during this release cycle; but we also wanted to
switch the default at the beginning of a development cycle to get the
highest chance of shaking out any weird bugs.

So according to those criteria, we could switch to credit2 being the
default scheduler as soon as the 4.10 development window opens.

At some point recently Dario said there was still some unusual behavior
he wanted to dig into; but with him no longer working for Citrix, I
think it's doubtful we'll have the resources to take that up; the best
option might be to just pull the lever and see what happens.
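For reference, credit2 can already be selected explicitly today with the
`sched=credit2` Xen command-line option; a sketch of a bootloader entry
(illustrative only -- paths and syntax vary by distro and GRUB version):

    multiboot /boot/xen.gz sched=credit2
    module /boot/vmlinuz root=/dev/sda1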

 -George


* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-21  8:09   ` Jan Beulich
@ 2017-11-21 10:42     ` George Dunlap
  2017-11-21 11:35       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 10:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel, Roger Pau Monne

On 11/21/2017 08:09 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### x86/PVH guest
>> +
>> +    Status: Supported
>> +
>> +PVH is a next-generation paravirtualized mode 
>> +designed to take advantage of hardware virtualization support when possible.
>> +During development this was sometimes called HVMLite or PVHv2.
>> +
>> +Requires hardware virtualisation support (Intel VMX / AMD SVM)
> 
> I think it needs to be said that only DomU is considered supported.
> Dom0 is perhaps not even experimental at this point, considering
> the panic() in dom0_construct_pvh().

Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
guest', and is in the 'Guest Type' section.  We generally don't say,
"Oh, and we don't have this feature at all".

If you think it's important we could add a sentence here explicitly
stating that dom0 PVH isn't supported, but I sort of feel like it isn't
necessary.

>> +### Host ACPI (via Domain 0)
>> +
>> +    Status, x86 PV: Supported
>> +    Status, x86 PVH: Tech preview
>
> Are we this far already? Preview implies functional completeness,
> but I'm not sure about all ACPI related parts actually having been
> implemented (and see also below). But perhaps things like P and C
> state handling come as individual features later on.

Hmm, yeah, it doesn't make much sense to say that we have "Tech preview"
status for a feature with a PVH dom0, when PVH dom0 itself isn't even
'experimental' yet.  I'll remove this (unless Roger or Wei want to object).

 -George



* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-21  8:11   ` Jan Beulich
@ 2017-11-21 10:45     ` George Dunlap
  2017-11-21 10:59       ` Julien Grall
  2017-11-21 11:37       ` Jan Beulich
  0 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-21 10:45 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Ian Jackson, xen-devel

On 11/21/2017 08:11 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### ARM/SMMUv1
>> +
>> +    Status: Supported
>> +
>> +### ARM/SMMUv2
>> +
>> +    Status: Supported
> 
> Do these belong here, when IOMMU isn't part of the corresponding
> x86 patch?

Since there was recently a time when these weren't supported, I think
it's useful to have them in here.  (Julien, let me know if you think
otherwise.)

Do you think it would be useful to include an IOMMU line for x86?

 -George


* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21  8:29   ` Jan Beulich
  2017-11-21  9:19     ` Paul Durrant
@ 2017-11-21 10:56     ` George Dunlap
  2017-11-21 11:41       ` Jan Beulich
  2017-11-21 17:35     ` George Dunlap
  2 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 10:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### PV USB support for xl
>> +
>> +    Status: Supported
>> +
>> +### PV 9pfs support for xl
>> +
>> +    Status: Tech Preview
> 
> Why are these two being called out, but xl support for other device
> types isn't?

Do you see how big this document is? :-)  If you think something else
needs to be covered, don't ask why I didn't mention it, just say what
you think I missed.

> 
>> +### QEMU backend hotplugging for xl
>> +
>> +    Status: Supported
> 
> Wouldn't this more appropriately be
> 
> ### QEMU backend hotplugging
> 
>     Status, xl: Supported

Maybe -- let me think about it.

> 
> ?
> 
>> +## Virtual driver support, guest side
>> +
>> +### Blkfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, NetBSD: Supported, Security support external
>> +    Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +    Status, Linux: Supported
>> +    States, Windows: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, NetBSD: Supported, Security support external
>> +    Status, OpenBSD: Supported, Security support external
> 
> Seeing the difference in OSes between the two (with the variance
> increasing in entries further down) - what does the absence of an
> OS on one list, but its presence on another mean? While not
> impossible, I would find it surprising if e.g. OpenBSD had netfront
> but not even a basic blkfront.

Good catch.  Roger suggested that I add the OpenBSD Netfront; he's away
so I'll have to see if I can figure out if they have blkfront support or
not.

>> +Guest-side driver capable of speaking the Xen PV networking protocol
>> +
>> +### PV Framebuffer (frontend)
>> +
>> +    Status, Linux (xen-fbfront): Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
>> +
>> +### PV Console (frontend)
>> +
>> +    Status, Linux (hvc_xen): Supported
>> +    Status, Windows: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, NetBSD: Supported, Security support external
>> +
>> +Guest-side driver capable of speaking the Xen PV console protocol
>> +
>> +### PV keyboard (frontend)
>> +
>> +    Status, Linux (xen-kbdfront): Supported
>> +    Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> 
> Are these three active/usable in guests regardless of whether the
> guest is being run PV, PVH, or HVM? If not, wouldn't this need
> spelling out?

In theory I think they could be used; I suspect it's just that they
aren't used.  Let me see if I can think of a way to concisely express that.

>> +## Virtual device support, host side
>> +
>> +### Blkback
>> +
>> +    Status, Linux (blkback): Supported
> 
> Strictly speaking, if the driver name is to be spelled out here in
> the first place, it's xen-blkback here and ...
> 
>> +    Status, FreeBSD (blkback): Supported, Security support external
>> +    Status, NetBSD (xbdback): Supported, security support external
>> +    Status, QEMU (xen_disk): Supported
>> +    Status, Blktap2: Deprecated
>> +
>> +Host-side implementations of the Xen PV block protocol
>> +
>> +### Netback
>> +
>> +    Status, Linux (netback): Supported
> 
> ... xen-netback here for the upstream kernels.

Ack.


>> +### PV USB (backend)
>> +
>> +    Status, Linux: Experimental
> 
> What existing/upstream code does this refer to?

I guess a bunch of patches posted to a mailing list?  Yeah, that's
probably something we should take out.

 -George


* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-21 10:45     ` George Dunlap
@ 2017-11-21 10:59       ` Julien Grall
  2017-11-21 11:37       ` Jan Beulich
  1 sibling, 0 replies; 90+ messages in thread
From: Julien Grall @ 2017-11-21 10:59 UTC (permalink / raw)
  To: George Dunlap, Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, xen-devel, Ian Jackson

Hi George,

On 11/21/2017 10:45 AM, George Dunlap wrote:
> On 11/21/2017 08:11 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### ARM/SMMUv1
>>> +
>>> +    Status: Supported
>>> +
>>> +### ARM/SMMUv2
>>> +
>>> +    Status: Supported
>>
>> Do these belong here, when IOMMU isn't part of the corresponding
>> x86 patch?
> 
> Since there was recently a time when these weren't supported, I think
> it's useful to have them in here.  (Julien, let me know if you think
> otherwise.)

I think it is useful to keep them. There are other IOMMUs existing on 
Arm (e.g. SMMUv3, IPMMU-VMSA) that we don't yet support in Xen.

Cheers,

-- 
Julien Grall


* Re: [PATCH 02/16] SUPPORT.md: Add core functionality
  2017-11-21 10:36     ` George Dunlap
@ 2017-11-21 11:34       ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 11:34 UTC (permalink / raw)
  To: George Dunlap
  Cc: Nathan Studer, Wei Liu, Andrew Cooper, Dario Faggioli,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 21.11.17 at 11:36, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:03 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -16,6 +16,65 @@ for the definitions of the support status levels etc.
>>>  
>>>  # Feature Support
>>>  
>>> +## Memory Management
>>> +
>>> +### Memory Ballooning
>>> +
>>> +    Status: Supported
>> 
>> Is this a proper feature in the context we're talking about? To me
>> it's meaningful in guest OS context only. I also wouldn't really
>> consider it "core", but placement within the series clearly is a minor
>> aspect.
>> 
>> I'd prefer this to be dropped altogether as a feature, but
> 
> This doesn't make any sense to me.  Allowing a guest to modify its own
> memory requires a *lot* of support, spread throughout the hypervisor;
> and there are a huge number of recent security holes that would have
> been much more difficult to exploit if guests didn't have the ability to
> balloon up or down.
> 
> If what you mean is *specifically* the technique of making a "memory
> balloon" to trick the guest OS into handing back memory without knowing
> it, then it's just a matter of semantics.  We could call this "dynamic
> memory control" or something like that if you prefer (although we'd have
> to mention ballooning in the description to make sure people can find it).

Indeed I'd prefer the alternative naming: Outside of p2m-pod.c there's
no mention of the term "balloon" in any of the hypervisor source files.
Furthermore this "dynamic memory control" can be used for things other
than ballooning, all of which I think is (to be) supported.

Jan



* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-21 10:42     ` George Dunlap
@ 2017-11-21 11:35       ` Jan Beulich
  2017-11-21 12:24         ` George Dunlap
  2017-11-21 12:32         ` Ian Jackson
  0 siblings, 2 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 11:35 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel, Roger Pau Monne

>>> On 21.11.17 at 11:42, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:09 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### x86/PVH guest
>>> +
>>> +    Status: Supported
>>> +
>>> +PVH is a next-generation paravirtualized mode 
>>> +designed to take advantage of hardware virtualization support when possible.
>>> +During development this was sometimes called HVMLite or PVHv2.
>>> +
>>> +Requires hardware virtualisation support (Intel VMX / AMD SVM)
>> 
>> I think it needs to be said that only DomU is considered supported.
>> Dom0 is perhaps not even experimental at this point, considering
>> the panic() in dom0_construct_pvh().
> 
> Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
> guest', and is in the 'Guest Type' section.  We generally don't say,
> "Oh, and we don't have this feature at all".
> 
> If you think it's important we could add a sentence here explicitly
> stating that dom0 PVH isn't supported, but I sort of feel like it isn't
> necessary.

Much depends on whether you think "guest" == "DomU". To me
Dom0 is a guest, too.

Jan



* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-21 10:45     ` George Dunlap
  2017-11-21 10:59       ` Julien Grall
@ 2017-11-21 11:37       ` Jan Beulich
  2017-11-21 12:39         ` George Dunlap
  1 sibling, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 11:37 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Ian Jackson, xen-devel

>>> On 21.11.17 at 11:45, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:11 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### ARM/SMMUv1
>>> +
>>> +    Status: Supported
>>> +
>>> +### ARM/SMMUv2
>>> +
>>> +    Status: Supported
>> 
>> Do these belong here, when IOMMU isn't part of the corresponding
>> x86 patch?
> 
> Since there was recently a time when these weren't supported, I think
> it's useful to have them in here.  (Julien, let me know if you think
> otherwise.)
> 
> Do you think it would be useful to include an IOMMU line for x86?

At this point of the series I would surely have said "yes". The
later PCI passthrough additions state this implicitly at least (by
requiring an IOMMU for passthrough to be supported at all).
But even then saying so explicitly may be better.

Jan



* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21 10:56     ` George Dunlap
@ 2017-11-21 11:41       ` Jan Beulich
  2017-11-21 17:20         ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 11:41 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

>>> On 21.11.17 at 11:56, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### PV USB support for xl
>>> +
>>> +    Status: Supported
>>> +
>>> +### PV 9pfs support for xl
>>> +
>>> +    Status: Tech Preview
>> 
>> Why are these two being called out, but xl support for other device
>> types isn't?
> 
> Do you see how big this document is? :-)  If you think something else
> needs to be covered, don't ask why I didn't mention it, just say what
> you think I missed.

Well, (not very) implicitly here: The same for all other PV protocols.

Jan



* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-21 11:35       ` Jan Beulich
@ 2017-11-21 12:24         ` George Dunlap
  2017-11-21 13:00           ` Jan Beulich
  2017-11-21 12:32         ` Ian Jackson
  1 sibling, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 12:24 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim (Xen.org), Ian Jackson, xen-devel, Roger Pau Monne


On Nov 21, 2017, at 11:35 AM, Jan Beulich <JBeulich@suse.com> wrote:

> On 21.11.17 at 11:42, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 08:09 AM, Jan Beulich wrote:
>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>> +### x86/PVH guest
>>>> +
>>>> +    Status: Supported
>>>> +
>>>> +PVH is a next-generation paravirtualized mode
>>>> +designed to take advantage of hardware virtualization support when possible.
>>>> +During development this was sometimes called HVMLite or PVHv2.
>>>> +
>>>> +Requires hardware virtualisation support (Intel VMX / AMD SVM)
>>>
>>> I think it needs to be said that only DomU is considered supported.
>>> Dom0 is perhaps not even experimental at this point, considering
>>> the panic() in dom0_construct_pvh().
>>
>> Indeed, that's why dom0 PVH isn't in the list, and why this says 'PVH
>> guest', and is in the 'Guest Type' section.  We generally don't say,
>> "Oh, and we don't have this feature at all".
>>
>> If you think it's important we could add a sentence here explicitly
>> stating that dom0 PVH isn't supported, but I sort of feel like it isn't
>> necessary.
>
> Much depends on whether you think "guest" == "DomU". To me
> Dom0 is a guest, too.

That’s not how I’ve ever understood those terms.

A guest at a hotel is someone who is served, and who does not have (legal) access to the internals of the system.  The maids who clean the room and the janitors who sweep the floors are hosts, because they have (to various degrees) extra access designed to help them serve the guests.

A “guest” is a virtual machine that does not have access to the internals of the system; that is the “target” of virtualization.  As such, the dom0 kernel and all the toolstack / emulation code running in domain 0 are part of the “host”.

Domain 0 is a domain and a VM, but only domUs are guests.

Any other opinions on this?  Do we need to add these to the terms defined at the bottom?

 -George



* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-21 11:35       ` Jan Beulich
  2017-11-21 12:24         ` George Dunlap
@ 2017-11-21 12:32         ` Ian Jackson
  1 sibling, 0 replies; 90+ messages in thread
From: Ian Jackson @ 2017-11-21 12:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, George Dunlap, xen-devel, Roger Pau Monne

Jan Beulich writes ("Re: [PATCH 03/16] SUPPORT.md: Add some x86 features"):
> Much depends on whether you think "guest" == "DomU". To me
> Dom0 is a guest, too.

Not to me.  I'm with George.  (As far as I can make out his message,
which I think was sent with HTML-style quoting which some Citrix thing
has stripped out, so I can't see who said what.)

But I don't think this is important and I would like to see this
document go in.

Ian.


* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-21 11:37       ` Jan Beulich
@ 2017-11-21 12:39         ` George Dunlap
  2017-11-21 13:01           ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 12:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim (Xen.org), Julien Grall, Ian Jackson, xen-devel



On Nov 21, 2017, at 11:37 AM, Jan Beulich <JBeulich@suse.com> wrote:

> On 21.11.17 at 11:45, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 08:11 AM, Jan Beulich wrote:
>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>> +### ARM/SMMUv1
>>>> +
>>>> +    Status: Supported
>>>> +
>>>> +### ARM/SMMUv2
>>>> +
>>>> +    Status: Supported
>>>
>>> Do these belong here, when IOMMU isn't part of the corresponding
>>> x86 patch?
>>
>> Since there was recently a time when these weren't supported, I think
>> it's useful to have them in here.  (Julien, let me know if you think
>> otherwise.)
>>
>> Do you think it would be useful to include an IOMMU line for x86?
>
> At this point of the series I would surely have said "yes". The
> later PCI passthrough additions state this implicitly at least (by
> requiring an IOMMU for passthrough to be supported at all).
> But even then saying so explicitly may be better.
How much do we specifically need to break down?  AMD / Intel?

What about something like this?

### IOMMU

    Status, AMD IOMMU: Supported
    Status, Intel VT-d: Supported
    Status, ARM SMMUv1: Supported
    Status, ARM SMMUv2: Supported

 -George



* Re: [PATCH 03/16] SUPPORT.md: Add some x86 features
  2017-11-21 12:24         ` George Dunlap
@ 2017-11-21 13:00           ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 13:00 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim (Xen.org), Ian Jackson, xen-devel, Roger Pau Monne

>>> On 21.11.17 at 13:24, <George.Dunlap@citrix.com> wrote:
>> On Nov 21, 2017, at 11:35 AM, Jan Beulich 
>> Much depends on whether you think "guest" == "DomU". To me
>> Dom0 is a guest, too.
> 
> That’s not how I’ve ever understood those terms.
> 
> A guest at a hotel is someone who is served, and who does not have (legal) 
> access to the internals of the system.  The maids who clean the room and the 
> janitors who sweep the floors are hosts, because they have (to various 
> degrees) extra access designed to help them serve the guests.
> 
> A “guest” is a virtual machine that does not have access to the internals of 
> the system; that is the “target” of virtualization.  As such, the dom0 kernel 
> and all the toolstack / emulation code running in domain 0 are part of the 
> “host”.
> 
> Domain 0 is a domain and a VM, but only domUs are guests.

Okay then; just FTR I've always been considering "domain" ==
"guest" == "VM".

Jan


* Re: [PATCH 04/16] SUPPORT.md: Add core ARM features
  2017-11-21 12:39         ` George Dunlap
@ 2017-11-21 13:01           ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-21 13:01 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim (Xen.org), Julien Grall, Ian Jackson, xen-devel

>>> On 21.11.17 at 13:39, <George.Dunlap@citrix.com> wrote:
> What about something like this?
> 
> ### IOMMU
> 
>     Status, AMD IOMMU: Supported
>     Status, Intel VT-d: Supported
>     Status, ARM SMMUv1: Supported
>     Status, ARM SMMUv2: Supported

Fine with me, as it makes things explicit.

Jan



* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-16 15:19   ` Julien Grall
  2017-11-16 15:30     ` George Dunlap
@ 2017-11-21 16:43     ` George Dunlap
  2017-11-21 17:31       ` Julien Grall
  1 sibling, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 16:43 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

On 11/16/2017 03:19 PM, Julien Grall wrote:
> Hi George,
> 
> On 13/11/17 15:41, George Dunlap wrote:
>> Superpage support and PVHVM.
>>
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> ---
>>   SUPPORT.md | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index c884fac7f5..a8c56d13dd 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -195,6 +195,27 @@ on embedded platforms.
>>     Enables NUMA aware scheduling in Xen
>>   +## Scalability
>> +
>> +### 1GB/2MB super page support
>> +
>> +    Status, x86 HVM/PVH: : Supported
>> +    Status, ARM: Supported
>> +
>> +NB that this refers to the ability of guests
>> +to have higher-level page table entries point directly to memory,
>> +improving TLB performance.
>> +This is independent of the ARM "page granularity" feature (see below).
> 
> I am not entirely sure about this paragraph for Arm. I understood this
> section as support for stage-2 page-tables (aka EPT on x86) but the
> paragraph led me to believe it is for guests.
> 
> The size of super pages of guests will depend on the page granularity
> used by the guest itself and the format of the page-table (e.g. LPAE vs short
> descriptor). We have no control over that.
> 
> What we have control is the size of mapping used for stage-2 page-table.

Stepping back from the document for a minute: would it make sense to use
"hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
NPT), and ARM stage-2 pagetables?  HAP was already a general term used
to describe the two x86 technologies; and I think the description makes
sense, because if we didn't have hardware-assisted stage 2 pagetables
we'd need Xen-provided shadow pagetables.

Back to the question at hand, there are four different things:

1. Whether Xen itself uses superpage mappings for its virtual address
space.  (Not sure if Xen does this or not.)

2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
when hardware support is available -- I take it Xen does this on ARM as well?

3. Whether Xen provides the *interface* for a guest to use L2 or L3
superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
own pagetables.  I *think* HAP on x86 provides the interface whenever
the underlying hardware does.  I assume it's the same for ARM?  In the
case of shadow mode, we only provide the interface for 2MiB superpages.

4. Whether a guest using L2 or L3 superpages will actually have
superpages, or whether it's "only emulated".  As Jan said, for shadow
pagetables on x86, the underlying pagetables still only have 4k pages,
so the guest will get no benefit from using L2 superpages in its
pagetables (either in terms of reduced memory reads on a tlb miss, or in
terms of larger effectiveness of each TLB entry).
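To put rough numbers on that last point, here is a back-of-the-envelope
sketch (the 64-entry TLB size is a hypothetical figure, purely for
illustration):

```python
# Illustrative only: the address-space "reach" of a hypothetical
# 64-entry TLB with 4 KiB pages vs 2 MiB and 1 GiB superpages.
TLB_ENTRIES = 64  # hypothetical TLB size, for illustration
KiB, MiB, GiB = 2**10, 2**20, 2**30

print(TLB_ENTRIES * 4 * KiB // KiB, "KiB reach with 4 KiB pages")       # 256
print(TLB_ENTRIES * 2 * MiB // MiB, "MiB reach with 2 MiB superpages")  # 128
print(TLB_ENTRIES * 1 * GiB // GiB, "GiB reach with 1 GiB superpages")  # 64
```

With real superpages each TLB entry covers 512x (2 MiB) or 262144x
(1 GiB) more address space than a 4 KiB entry, which is exactly the
benefit a shadow-mode guest loses when its L2 superpages are backed by
4 KiB mappings.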

#3 and #4 are probably the most pertinent to users, with #2 being next
on the list, and #1 being least.

Does that make sense?

 -George


* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21 11:41       ` Jan Beulich
@ 2017-11-21 17:20         ` George Dunlap
  2017-11-22 11:05           ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 17:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

On 11/21/2017 11:41 AM, Jan Beulich wrote:
>>>> On 21.11.17 at 11:56, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>> +### PV USB support for xl
>>>> +
>>>> +    Status: Supported
>>>> +
>>>> +### PV 9pfs support for xl
>>>> +
>>>> +    Status: Tech Preview
>>>
>>> Why are these two being called out, but xl support for other device
>>> types isn't?
>>
>> Do you see how big this document is? :-)  If you think something else
>> needs to be covered, don't ask why I didn't mention it, just say what
>> you think I missed.
> 
> Well, (not very) implicitly here: The same for all other PV protocols.

Oh, I see -- you didn't read my comment below the `---` pointing this
out.  :-)

Yes, I wasn't quite sure what to do here.  We already list all the PV
protocols in at least 2 places (frontend and backend support); it seemed
a bit redundant to list them all again in xl and/or libxl support.

Except, of course, that there are a number of protocols *not* plumbed
through the toolstack yet -- PVSCSI being one example.

Any suggestions would be welcome.

 -George


* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-21 16:43     ` George Dunlap
@ 2017-11-21 17:31       ` Julien Grall
  2017-11-21 17:51         ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Julien Grall @ 2017-11-21 17:31 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

Hi George,

On 11/21/2017 04:43 PM, George Dunlap wrote:
> On 11/16/2017 03:19 PM, Julien Grall wrote:
>> On 13/11/17 15:41, George Dunlap wrote:
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>> CC: Wei Liu <wei.liu2@citrix.com>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>>> CC: Tim Deegan <tim@xen.org>
>>> CC: Julien Grall <julien.grall@arm.com>
>>> ---
>>>    SUPPORT.md | 21 +++++++++++++++++++++
>>>    1 file changed, 21 insertions(+)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index c884fac7f5..a8c56d13dd 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -195,6 +195,27 @@ on embedded platforms.
>>>      Enables NUMA aware scheduling in Xen
>>>    +## Scalability
>>> +
>>> +### 1GB/2MB super page support
>>> +
>>> +    Status, x86 HVM/PVH: : Supported
>>> +    Status, ARM: Supported
>>> +
>>> +NB that this refers to the ability of guests
>>> +to have higher-level page table entries point directly to memory,
>>> +improving TLB performance.
>>> +This is independent of the ARM "page granularity" feature (see below).
>>
>> I am not entirely sure about this paragraph for Arm. I understood this
>> section as support for stage-2 page-table (aka EPT on x86) but the
>> paragraph led me to believe it is for guests.
>>
>> The size of superpages available to a guest will depend on the page
>> granularity used by the guest itself and the format of its page-tables
>> (e.g. LPAE vs short-descriptor). We have no control over that.
>>
>> What we do control is the size of the mappings used for the stage-2 page-tables.
> 
> Stepping back from the document for a minute: would it make sense to use
> "hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
> NPT), and ARM stage-2 pagetables?  HAP was already a general term used
> to describe the two x86 technologies; and I think the description makes
> sense, because if we didn't have hardware-assisted stage 2 pagetables
> we'd need Xen-provided shadow pagetables.

I think using the term "hardware assisted paging" should be fine to
refer to all three technologies.

> 
> Back to the question at hand, there are four different things:
> 
> 1. Whether Xen itself uses superpage mappings for its virtual address
> space.  (Not sure if Xen does this or not.)

Xen tries to use superpage mappings for itself whenever possible.

> 
> 2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
> when hardware support is available -- I take it Xen does this on ARM as well?

The size of superpages supported will depend on the page-table format 
(short-descriptor vs LPAE) and the granularity used.

Supersections (16MB) are optional for short-descriptor, but become
mandatory when the processor supports LPAE. LPAE itself is mandatory with
the virtualization extensions, so all superpage sizes are supported.

Note that stage-2 page-tables can only use the LPAE format.

I would also rather avoid mentioning any specific superpage sizes for Arm
in SUPPORT.md, as there are a lot of them.

Short-descriptor always uses 4KB granularity and supports 16MB, 1MB, and
64KB superpages.

LPAE supports 4KB, 16KB, and 64KB granularities, each of which has
different superpage sizes.
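(As an illustrative aside -- not Xen code, and using a hypothetical helper
name -- those per-granularity sizes follow directly from the LPAE table
geometry: each table holds granule/8 eight-byte descriptors, so a block
entry one level above the leaf maps granule * (granule/8) bytes:)

```python
# Illustrative sketch (not Xen code): size mapped by an LPAE "block"
# (superpage) entry one level above the leaf, for each granularity.
def block_size(granule):
    entries_per_table = granule // 8   # each descriptor is 8 bytes
    return granule * entries_per_table

for g in (4 << 10, 16 << 10, 64 << 10):
    print(f"{g >> 10}KB granule -> {block_size(g) >> 20}MiB superpage")
# 4KB -> 2MiB, 16KB -> 32MiB, 64KB -> 512MiB
```

(With 4KB granularity, a block entry two levels above the leaf additionally
gives the familiar 1GiB mapping.)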

> 
> 3. Whether Xen provides the *interface* for a guest to use L2 or L3
> superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
> own pagetables.  I *think* HAP on x86 provides the interface whenever
> the underlying hardware does.  I assume it's the same for ARM?  In the
> case of shadow mode, we only provide the interface for 2MiB superpages.

See above. We have no way to control that in the guest.

> 
> 4. Whether a guest using L2 or L3 superpages will actually have
> superpages, or whether it's "only emulated".  As Jan said, for shadow
> pagetables on x86, the underlying pagetables still only have 4k pages,
> so the guest will get no benefit from using L2 superpages in its
> pagetables (either in terms of reduced memory reads on a tlb miss, or in
> terms of larger effectiveness of each TLB entry).
> 
> #3 and #4 are probably the most pertinent to users, with #2 being next
> on the list, and #1 being least.
> 
> Does that make sense?

Cheers,

-- 
Julien Grall


* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21  8:29   ` Jan Beulich
  2017-11-21  9:19     ` Paul Durrant
  2017-11-21 10:56     ` George Dunlap
@ 2017-11-21 17:35     ` George Dunlap
  2017-11-22 11:07       ` Jan Beulich
  2 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 17:35 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

On 11/21/2017 08:29 AM, Jan Beulich wrote:
>> +### QEMU backend hotplugging for xl
>> +
>> +    Status: Supported
> 
> Wouldn't this more appropriately be
> 
> ### QEMU backend hotplugging
> 
>     Status, xl: Supported

You mean, for this whole section (i.e., everything here that says 'for
xl')?  If not, why this one in particular?

>> +## Virtual driver support, guest side
>> +
>> +### Blkfront
>> +
>> +    Status, Linux: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, NetBSD: Supported, Security support external
>> +    Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +    Status, Linux: Supported
>> +    States, Windows: Supported
>> +    Status, FreeBSD: Supported, Security support external
>> +    Status, NetBSD: Supported, Security support external
>> +    Status, OpenBSD: Supported, Security support external
> 
> Seeing the difference in OSes between the two (with the variance
> increasing in entries further down) - what does the absence of an
> OS on one list, but its presence on another mean? While not
> impossible, I would find it surprising if e.g. OpenBSD had netfront
> but not even a basic blkfront.

Actually -- at least according to the paper presenting PV frontends for
OpenBSD in 2016 [1], they implemented xenstore and netfront frontends,
but not (at least at that point) a disk frontend.

However, blkfront does appear as a feature in OpenBSD 6.1, released in
April [2]; so I'll add that one in.  (Perhaps Roger hadn't heard that it
had been implemented.)

[1] https://www.openbsd.org/papers/asiabsdcon2016-xen-paper.pdf

[2] https://www.openbsd.org/61.html

 -George


* Re: [PATCH 06/16] SUPPORT.md: Add scalability features
  2017-11-21 17:31       ` Julien Grall
@ 2017-11-21 17:51         ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-21 17:51 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

On 11/21/2017 05:31 PM, Julien Grall wrote:
> Hi George,
> 
> On 11/21/2017 04:43 PM, George Dunlap wrote:
>> On 11/16/2017 03:19 PM, Julien Grall wrote:
>>> On 13/11/17 15:41, George Dunlap wrote:
>>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>>> ---
>>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>>> CC: Wei Liu <wei.liu2@citrix.com>
>>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> CC: Jan Beulich <jbeulich@suse.com>
>>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>>>> CC: Tim Deegan <tim@xen.org>
>>>> CC: Julien Grall <julien.grall@arm.com>
>>>> ---
>>>>    SUPPORT.md | 21 +++++++++++++++++++++
>>>>    1 file changed, 21 insertions(+)
>>>>
>>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>>> index c884fac7f5..a8c56d13dd 100644
>>>> --- a/SUPPORT.md
>>>> +++ b/SUPPORT.md
>>>> @@ -195,6 +195,27 @@ on embedded platforms.
>>>>      Enables NUMA aware scheduling in Xen
>>>>    +## Scalability
>>>> +
>>>> +### 1GB/2MB super page support
>>>> +
>>>> +    Status, x86 HVM/PVH: : Supported
>>>> +    Status, ARM: Supported
>>>> +
>>>> +NB that this refers to the ability of guests
>>>> +to have higher-level page table entries point directly to memory,
>>>> +improving TLB performance.
>>>> +This is independent of the ARM "page granularity" feature (see below).
>>>
>>> I am not entirely sure about this paragraph for Arm. I understood this
>>> section as support for stage-2 page-table (aka EPT on x86) but the
>>> paragraph led me to believe it is for guests.
>>>
>>> The size of superpages available to a guest will depend on the page
>>> granularity used by the guest itself and the format of its page-tables
>>> (e.g. LPAE vs short-descriptor). We have no control over that.
>>>
>>> What we do control is the size of the mappings used for the stage-2 page-tables.
>>
>> Stepping back from the document for a minute: would it make sense to use
>> "hardware assisted paging" (HAP) for Intel EPT, AMD RVI (previously
>> NPT), and ARM stage-2 pagetables?  HAP was already a general term used
>> to describe the two x86 technologies; and I think the description makes
>> sense, because if we didn't have hardware-assisted stage 2 pagetables
>> we'd need Xen-provided shadow pagetables.
> 
> I think using the term "hardware assisted paging" should be fine to
> refer to all three technologies.

OK, great.

[snip]

> Short-descriptor always uses 4KB granularity and supports 16MB, 1MB, and
> 64KB superpages.
> 
> LPAE supports 4KB, 16KB, and 64KB granularities, each of which has
> different superpage sizes.

Yes, that's why I started saying "L2 and L3 superpages", to mean
"Superpage entries in L2 or L3 pagetables", instead of 2MiB or 1GiB.
(Let me know if you can think of a better way to describe that.)

>> 3. Whether Xen provides the *interface* for a guest to use L2 or L3
>> superpages (for 4k page granularity, 2MiB or 1GiB respectively) in its
>> own pagetables.  I *think* HAP on x86 provides the interface whenever
>> the underlying hardware does.  I assume it's the same for ARM?  In the
>> case of shadow mode, we only provide the interface for 2MiB superpages.
> 
> See above. We have no way to control that in the guest.

We don't control whether the guest uses *any* features.  Should we not
mention PV disks or SMMUv2 or whatever because we don't know if the
guest will use them?

Of course not.  This document describes whether the guest *has the
features available to use*, either provided by the hardware or emulated
by Xen.

It sounds like you may not have ever thought about whether an ARM guest
has L2 or L3 superpages available, because it's always had all of them;
but it's different on x86.

[snip]

>> 2. Whether Xen uses superpage mappings for HAP.  Xen uses this on x86
>> when hardware support is available -- I take it Xen does this on ARM as well?
>
> The size of superpages supported will depend on the page-table format
> (short-descriptor vs LPAE) and the granularity used.
>
> Supersections (16MB) are optional for short-descriptor, but become
> mandatory when the processor supports LPAE. LPAE itself is mandatory with
> the virtualization extensions, so all superpage sizes are supported.
>
> Note that stage-2 page-tables can only use the LPAE format.
>
> I would also rather avoid mentioning any specific superpage sizes for Arm
> in SUPPORT.md, as there are a lot of them.

So it sounds like basically everything supported on native was supported
in virtualization (and under Xen) from the start, so it's probably less
important to mention.  But since we *will* need to do that for x86, we
probably need to say *something* in case people want to know.

Let me see what I can come up with.

 -George


* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-21  8:39   ` Jan Beulich
@ 2017-11-21 18:02     ` George Dunlap
  2017-11-22 11:11       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-21 18:02 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

On 11/21/2017 08:39 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### x86/Nested PV
>> +
>> +    Status, x86 HVM: Tech Preview
>> +
>> +This means running a Xen hypervisor inside an HVM domain,
>> +with support for PV L2 guests only
>> +(i.e., hardware virtualization extensions not provided
>> +to the guest).
>> +
>> +This works, but has performance limitations
>> +because the L1 dom0 can only access emulated L1 devices.
> 
> So is this explicitly meaning Xen-on-Xen? Xen-on-KVM, for example,
> could be considered "nested PV", too. IOW I think it needs to be
> spelled out whether this means the host side of things here, the
> guest one, or both.

Yes, that's true.  But I forget: Can a Xen dom0 use virtio guest
drivers?  I'm pretty sure Stefano tried it at some point but I don't
remember what the result was.

>> +### x86/Nested HVM
>> +
>> +    Status, x86 HVM: Experimental
>> +
>> +This means running a Xen hypervisor inside an HVM domain,
>> +with support for running both PV and HVM L2 guests
>> +(i.e., hardware virtualization extensions provided
>> +to the guest).
> 
> "Nested HVM" generally means more than using Xen as the L1
> hypervisor. If this is really to mean just L1 Xen, I think the title
> should already say so, not just the description.

Yes, I mean any sort of nested guest support here.

>> +### x86/Advanced Vector eXtension
>> +
>> +    Status: Supported
> 
> As indicated before, I think this either needs to be dropped or
> be extended by an entry for virtually every CPUID bit exposed
> to guests. Furthermore, in this isolated fashion it is not clear
> what derived features (e.g. FMA, FMA4, AVX2, or even AVX-512)
> it is meant to imply. If any of them are implied, "with caveats"
> would need to be added as long as the instruction emulator isn't
> capable of handling the instructions, yet.

Adding a section for CPUID bits supported (and to what level) sounds
like a useful thing to do, perhaps in the next release.
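For concreteness, such a section might look something like this (entirely
hypothetical entries and caveat wording, just following the Status-line
convention SUPPORT.md already uses elsewhere):

```
## x86 CPUID feature exposure

### x86/AVX

    Status: Supported, with caveats

Caveat: the instruction emulator cannot yet handle all AVX instructions.

### x86/AVX2

    Status: Experimental
```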

>> +### x86/HVM EFI
>> +
>> +    Status: Supported
>> +
>> +Booting a guest via guest EFI firmware
> 
> Shouldn't this say OVMF, to avoid covering possible other
> implementations?

I don't expect that we'll ever need more than one EFI implementation in
the tree.  If a time comes when it makes sense to have two, we can
adjust the entry accordingly.

 -George



* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-21  8:48   ` Jan Beulich
@ 2017-11-21 18:19     ` George Dunlap
  2017-11-21 19:05       ` Ian Jackson
  2017-11-22 11:15       ` Jan Beulich
  0 siblings, 2 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-21 18:19 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

On 11/21/2017 08:48 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -152,6 +152,35 @@ Output of information in machine-parseable JSON format
>>  
>>      Status: Supported, Security support external
>>  
>> +## Debugging, analysis, and crash post-mortem
>> +
>> +### gdbsx
>> +
>> +    Status, x86: Supported
>> +
>> +Debugger to debug ELF guests
>> +
>> +### Soft-reset for PV guests
>> +
>> +    Status: Supported
>> +    
>> +Soft-reset allows a new kernel to start 'from scratch' with a fresh VM state, 
>> +but with all the memory from the previous state of the VM intact.
>> +This is primarily designed to allow "crash kernels", 
>> +which can do core dumps of memory to help with debugging in the event of a crash.
>> +
>> +### xentrace
>> +
>> +    Status, x86: Supported
>> +
>> +Tool to capture Xen trace buffer data
>> +
>> +### gcov
>> +
>> +    Status: Supported, Not security supported
> 
> I agree with excluding security support here, but why wouldn't the
> same be the case for gdbsx and xentrace?

From my initial post:

---

gdbsx security support: Someone may want to debug an untrusted guest,
so I think we should say 'yes' here.

xentrace: Users may want to trace guests in production environments,
so I think we should say 'yes'.

gcov: No good reason to run a gcov hypervisor in a production
environment.  May be ways for a rogue guest to DoS.

---

xentrace I would argue for security support; I've asked customers to
send me xentrace data as part of analysis before.  I also know enough
about it that I'm reasonably confident the risk of an attack vector is
pretty low.
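For what it's worth, the trace format itself is quite simple, which helps
keep the attack surface small: each record begins with a 32-bit header
packing the event id, the number of extra data words, and a TSC-present
flag. A rough decoder sketch (field layout as I recall it from
xen/include/public/trace.h -- double-check against the tree):

```python
def decode_trace_header(word):
    """Decode a 32-bit xentrace record header word.

    Layout (per xen/include/public/trace.h, from memory):
      bits  0-27: event id
      bits 28-30: number of extra uint32 data words
      bit     31: TSC value included in the record
    """
    event = word & 0x0FFFFFFF
    n_data = (word >> 28) & 0x7
    has_tsc = bool(word >> 31)
    return event, n_data, has_tsc

# Synthetic header: event 0x1f003, 2 data words, TSC present.
word = (1 << 31) | (2 << 28) | 0x1F003
print(decode_trace_header(word))  # -> (126979, 2, True)
```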

I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
you think we need to exclude it from security support I'm happy with
that as well.

 -George


* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-21 18:19     ` George Dunlap
@ 2017-11-21 19:05       ` Ian Jackson
  2017-11-21 19:21         ` Andrew Cooper
  2017-11-22 11:15       ` Jan Beulich
  1 sibling, 1 reply; 90+ messages in thread
From: Ian Jackson @ 2017-11-21 19:05 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Jan Beulich, xen-devel

George Dunlap writes ("Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem"):
> gdbsx security support: Someone may want to debug an untrusted guest,
> so I think we should say 'yes' here.

I think running gdb on a potentially hostile program is foolish.

> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
> you think we need to exclude it from security support I'm happy with
> that as well.

gdbsx itself is probably simple enough to be fine but I would rather
not call it security supported because that might encourage people to
use it with gdb.

If someone wants to use gdbsx with something that's not gdb then they
might want to ask us to revisit that.

Ian.


* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-21 19:05       ` Ian Jackson
@ 2017-11-21 19:21         ` Andrew Cooper
  2017-11-22 10:51           ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Andrew Cooper @ 2017-11-21 19:21 UTC (permalink / raw)
  To: Ian Jackson, George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tim Deegan, Jan Beulich,
	xen-devel

On 21/11/17 19:05, Ian Jackson wrote:
> George Dunlap writes ("Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem"):
>> gdbsx security support: Someone may want to debug an untrusted guest,
>> so I think we should say 'yes' here.
> I think running gdb on a potentially hostile program is foolish.
>
>> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
>> you think we need to exclude it from security support I'm happy with
>> that as well.
> gdbsx itself is probably simple enough to be fine but I would rather
> not call it security supported because that might encourage people to
> use it with gdb.
>
> If someone wants to use gdbsx with something that's not gdb then they
> might want to ask us to revisit that.

If gdbsx chooses (or gets tricked into using) DOMID_XEN, then it gets
arbitrary read/write access over hypervisor virtual address space, due
to the behaviour of the hypercalls it uses.

As a tool, it mostly functions (there are some rather sharp corners
which I've not gotten time to fix so far), but it is definitely not
something I would trust in a hostile environment.

~Andrew


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-13 15:41 ` [PATCH 13/16] SUPPORT.md: Add secondary memory management features George Dunlap
  2017-11-21  8:54   ` Jan Beulich
@ 2017-11-21 19:55   ` Andrew Cooper
  2017-11-22 17:15     ` George Dunlap
  1 sibling, 1 reply; 90+ messages in thread
From: Andrew Cooper @ 2017-11-21 19:55 UTC (permalink / raw)
  To: George Dunlap, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, Jan Beulich, Ian Jackson

On 13/11/17 15:41, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Konrad Wilk <konrad.wilk@oracle.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
> ---
>  SUPPORT.md | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
>
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 0f7426593e..3e352198ce 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis by gcov or lcov.
>  
>      Status: Supported
>  
> +### Memory Sharing
> +
> +    Status, x86 HVM: Tech Preview
> +    Status, ARM: Tech Preview
> +
> +Allow sharing of identical pages between guests

"Tech Preview" should imply there is any kind of `xl dedup-these-domains
$X $Y` functionality.

The only thing we appear to have is an example wrapper around the libxc
interface, which requires the user to nominate individual frames, and
this doesn't qualify as "functionally complete" IMO.

There also doesn't appear to be any ARM support in the slightest. 
mem_sharing_{memop,domctl}() are only implemented for x86.

~Andrew


* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-21 19:21         ` Andrew Cooper
@ 2017-11-22 10:51           ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 10:51 UTC (permalink / raw)
  To: Andrew Cooper, Ian Jackson
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tim Deegan, Jan Beulich,
	xen-devel

On 11/21/2017 07:21 PM, Andrew Cooper wrote:
> On 21/11/17 19:05, Ian Jackson wrote:
>> George Dunlap writes ("Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem"):
>>> gdbsx security support: Someone may want to debug an untrusted guest,
>>> so I think we should say 'yes' here.
>> I think running gdb on a potentially hostile program is foolish.
>>
>>> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
>>> you think we need to exclude it from security support I'm happy with
>>> that as well.
>> gdbsx itself is probably simple enough to be fine but I would rather
>> not call it security supported because that might encourage people to
>> use it with gdb.
>>
>> If someone wants to use gdbsx with something that's not gdb then they
>> might want to ask us to revisit that.
> 
> If gdbsx chooses (or gets tricked into using) DOMID_XEN, then it gets
> arbitrary read/write access over hypervisor virtual address space, due
> to the behaviour of the hypercalls it uses.
> 
> As a tool, it mostly functions (there are some rather sharp corners
> which I've not gotten time to fix so far), but it is definitely not
> something I would trust in a hostile environment.

Right -- "not security supported" it is. :-)

 -George


* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21 17:20         ` George Dunlap
@ 2017-11-22 11:05           ` Jan Beulich
  2017-11-22 16:16             ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-22 11:05 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

>>> On 21.11.17 at 18:20, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 11:41 AM, Jan Beulich wrote:
>>>>> On 21.11.17 at 11:56, <george.dunlap@citrix.com> wrote:
>>> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>>> +### PV USB support for xl
>>>>> +
>>>>> +    Status: Supported
>>>>> +
>>>>> +### PV 9pfs support for xl
>>>>> +
>>>>> +    Status: Tech Preview
>>>>
>>>> Why are these two being called out, but xl support for other device
>>>> types isn't?
>>>
>>> Do you see how big this document is? :-)  If you think something else
>>> needs to be covered, don't ask why I didn't mention it, just say what
>>> you think I missed.
>> 
>> Well, (not very) implicitly here: The same for all other PV protocols.
> 
> Oh, I see -- you didn't read my comment below the `---` pointing this
> out.  :-)

Oops, sorry.

> Yes, I wasn't quite sure what to do here.  We already list all the PV
> protocols in at least 2 places (frontend and backend support); it seemed
> a bit redundant to list them all again in xl and/or libxl support.
> 
> Except, of course, that there are a number of protocols *not* plumbed
> through the toolstack yet -- PVSCSI being one example.
> 
> Any suggestions would be welcome.

How about putting that as a note to the respective frontend /
backend entries? And then, wouldn't lack of xl support anyway
mean "experimental" at best?

Jan



* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-21 17:35     ` George Dunlap
@ 2017-11-22 11:07       ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-22 11:07 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

>>> On 21.11.17 at 18:35, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>> +### QEMU backend hotplugging for xl
>>> +
>>> +    Status: Supported
>> 
>> Wouldn't this more appropriately be
>> 
>> ### QEMU backend hotplugging
>> 
>>     Status, xl: Supported
> 
> You mean, for this whole section (i.e., everything here that says 'for
> xl')?  If not, why this one in particular?

Well, I had commented on the other two entries separately, and
from my other reply just sent it would follow that I'd rather see
those other two entries go away (information moved elsewhere).

Jan



* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-21 18:02     ` George Dunlap
@ 2017-11-22 11:11       ` Jan Beulich
  2017-11-22 11:21         ` George Dunlap
                           ` (2 more replies)
  0 siblings, 3 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-22 11:11 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

>>> On 21.11.17 at 19:02, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:39 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### x86/Nested PV
>>> +
>>> +    Status, x86 HVM: Tech Preview
>>> +
>>> +This means running a Xen hypervisor inside an HVM domain,
>>> +with support for PV L2 guests only
>>> +(i.e., hardware virtualization extensions not provided
>>> +to the guest).
>>> +
>>> +This works, but has performance limitations
>>> +because the L1 dom0 can only access emulated L1 devices.
>> 
>> So is this explicitly meaning Xen-on-Xen? Xen-on-KVM, for example,
>> could be considered "nested PV", too. IOW I think it needs to be
>> spelled out whether this means the host side of things here, the
>> guest one, or both.
> 
> Yes, that's true.  But I forget: Can a Xen dom0 use virtio guest
> drivers?  I'm pretty sure Stefano tried it at some point but I don't
> remember what the result was.

I have no idea at all.

>>> +### x86/Nested HVM
>>> +
>>> +    Status, x86 HVM: Experimental
>>> +
>>> +This means running a Xen hypervisor inside an HVM domain,
>>> +with support for running both PV and HVM L2 guests
>>> +(i.e., hardware virtualization extensions provided
>>> +to the guest).
>> 
>> "Nested HVM" generally means more than using Xen as the L1
>> hypervisor. If this is really to mean just L1 Xen, I think the title
>> should already say so, not just the description.
> 
> Yes, I mean any sort of nested guest support here.

In which case would you mind inserting "for example"?

>>> +### x86/Advanced Vector eXtension
>>> +
>>> +    Status: Supported
>> 
>> As indicated before, I think this either needs to be dropped or
>> be extended by an entry for virtually every CPUID bit exposed
>> to guests. Furthermore, in this isolated fashion it is not clear
>> what derived features (e.g. FMA, FMA4, AVX2, or even AVX-512)
>> it is meant to imply. If any of them are implied, "with caveats"
>> would need to be added as long as the instruction emulator isn't
>> capable of handling the instructions, yet.
> 
> Adding a section for CPUID bits supported (and to what level) sounds
> like a useful thing to do, perhaps in the next release.

May I suggest that until then the section above be dropped?

>>> +### x86/HVM EFI
>>> +
>>> +    Status: Supported
>>> +
>>> +Booting a guest via guest EFI firmware
>> 
>> Shouldn't this say OVMF, to avoid covering possible other
>> implementations?
> 
> I don't expect that we'll ever need more than one EFI implementation in
> the tree.  If a time comes when it makes sense to have two, we can
> adjust the entry accordingly.

But that's part of my point - you say "in the tree", but this is a
separate tree, and there could be any number of separate ones.

Jan



* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem
  2017-11-21 18:19     ` George Dunlap
  2017-11-21 19:05       ` Ian Jackson
@ 2017-11-22 11:15       ` Jan Beulich
  2017-11-22 17:06         ` George Dunlap
  1 sibling, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-22 11:15 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 21.11.17 at 19:19, <george.dunlap@citrix.com> wrote:
> xentrace I would argue for security support; I've asked customers to
> send me xentrace data as part of analysis before.  I also know enough
> about it that I'm reasonably confident the risk of an attack vector is
> pretty low.

Knowing pretty little about xentrace I will trust you here. What I
was afraid of is that generally anything adding overhead can have
unintended side effects, all the more so with the - aiui - huge amounts
of data this may produce.

> I don't have a strong opinion on gdbsx; I'd call it 'supported', but if
> you think we need to exclude it from security support I'm happy with
> that as well.

Looks like on another sub-thread it was meanwhile already agreed
to mark it not security supported.

Jan



* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-22 11:11       ` Jan Beulich
@ 2017-11-22 11:21         ` George Dunlap
  2017-11-22 11:45         ` George Dunlap
  2017-11-22 16:30         ` George Dunlap
  2 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 11:21 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

On 11/22/2017 11:11 AM, Jan Beulich wrote:
>>>> On 21.11.17 at 19:02, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 08:39 AM, Jan Beulich wrote:
>>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>> +### x86/Nested PV
>>>> +
>>>> +    Status, x86 HVM: Tech Preview
>>>> +
>>>> +This means running a Xen hypervisor inside an HVM domain,
>>>> +with support for PV L2 guests only
>>>> +(i.e., hardware virtualization extensions not provided
>>>> +to the guest).
>>>> +
>>>> +This works, but has performance limitations
>>>> +because the L1 dom0 can only access emulated L1 devices.
>>>
>>> So is this explicitly meaning Xen-on-Xen? Xen-on-KVM, for example,
>>> could be considered "nested PV", too. IOW I think it needs to be
>>> spelled out whether this means the host side of things here, the
>>> guest one, or both.
>>
>> Yes, that's true.  But I forget: Can a Xen dom0 use virtio guest
>> drivers?  I'm pretty sure Stefano tried it at some point but I don't
>> remember what the result was.
> 
> I have no idea at all.
> 
>>>> +### x86/Nested HVM
>>>> +
>>>> +    Status, x86 HVM: Experimental
>>>> +
>>>> +This means running a Xen hypervisor inside an HVM domain,
>>>> +with support for running both PV and HVM L2 guests
>>>> +(i.e., hardware virtualization extensions provided
>>>> +to the guest).
>>>
>>> "Nested HVM" generally means more than using Xen as the L1
>>> hypervisor. If this is really to mean just L1 Xen, I think the title
>>> should already say so, not just the description.
>>
>> Yes, I mean any sort of nested guest support here.
> 
> In which case would you mind inserting "for example"?

Yes, I was planning on doing that.  Sorry for not making my intention clear.

> 
>>>> +### x86/Advanced Vector eXtension
>>>> +
>>>> +    Status: Supported
>>>
>>> As indicated before, I think this either needs to be dropped or
>>> be extended by an entry for virtually every CPUID bit exposed
>>> to guests. Furthermore, in this isolated fashion it is not clear
>>> what derived features (e.g. FMA, FMA4, AVX2, or even AVX-512)
>>> it is meant to imply. If any of them are implied, "with caveats"
>>> would need to be added as long as the instruction emulator isn't
>>> capable of handling the instructions, yet.
>>
>> Adding a section for CPUID bits supported (and to what level) sounds
>> like a useful thing to do, perhaps in the next release.
> 
> May I suggest then that until then the section above be dropped?

Ditto.

>>>> +### x86/HVM EFI
>>>> +
>>>> +    Status: Supported
>>>> +
>>>> +Booting a guest via guest EFI firmware
>>>
>>> Shouldn't this say OVMF, to avoid covering possible other
>>> implementations?
>>
>> I don't expect that we'll ever need more than one EFI implementation in
>> the tree.  If a time comes when it makes sense to have two, we can
>> adjust the entry accordingly.
> 
> But that's part of my point - you say "in the tree", but this is a
> separate tree, and there could be any number of separate ones.

But not ones wired into xl or libxl.

On the other hand, it looks like the actual value you put in the xl
config file is 'ovmf', so that probably makes more sense.

 -George


* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-22 11:11       ` Jan Beulich
  2017-11-22 11:21         ` George Dunlap
@ 2017-11-22 11:45         ` George Dunlap
  2017-11-22 16:30         ` George Dunlap
  2 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 11:45 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

On 11/22/2017 11:11 AM, Jan Beulich wrote:
>>>> +### x86/HVM EFI
>>>> +
>>>> +    Status: Supported
>>>> +
>>>> +Booting a guest via guest EFI firmware
>>>
>>> Shouldn't this say OVMF, to avoid covering possible other
>>> implementations?
>>
>> I don't expect that we'll ever need more than one EFI implementation in
>> the tree.  If a time comes when it makes sense to have two, we can
>> adjust the entry accordingly.
> 
> But that's part of my point - you say "in the tree", but this is a
> separate tree, and there could be any number of separate ones.

I've put the following:

---
### x86/HVM OVMF

    Status: Supported

OVMF firmware implements the UEFI boot protocol.
---

 -George


* Re: [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86
  2017-11-22 11:05           ` Jan Beulich
@ 2017-11-22 16:16             ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 16:16 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Paul Durrant, xen-devel, Anthony Perard,
	Ian Jackson, Roger Pau Monne

On 11/22/2017 11:05 AM, Jan Beulich wrote:
>>>> On 21.11.17 at 18:20, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 11:41 AM, Jan Beulich wrote:
>>>>>> On 21.11.17 at 11:56, <george.dunlap@citrix.com> wrote:
>>>> On 11/21/2017 08:29 AM, Jan Beulich wrote:
>>>>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>>>> +### PV USB support for xl
>>>>>> +
>>>>>> +    Status: Supported
>>>>>> +
>>>>>> +### PV 9pfs support for xl
>>>>>> +
>>>>>> +    Status: Tech Preview
>>>>>
>>>>> Why are these two being called out, but xl support for other device
>>>>> types isn't?
>>>>
>>>> Do you see how big this document is? :-)  If you think something else
>>>> needs to be covered, don't ask why I didn't mention it, just say what
>>>> you think I missed.
>>>
>>> Well, (not very) implicitly here: The same for all other PV protocols.
>>
>> Oh, I see -- you didn't read my comment below the `---` pointing this
>> out.  :-)
> 
> Oops, sorry.
> 
>> Yes, I wasn't quite sure what to do here.  We already list all the PV
>> protocols in at least 2 places (frontend and backend support); it seemed
>> a bit redundant to list them all again in xl and/or libxl support.
>>
>> Except, of course, that there are a number of protocols *not* plumbed
>> through the toolstack yet -- PVSCSI being one example.
>>
>> Any suggestions would be welcome.
> 
> How about putting that as a note to the respective frontend /
> backend entries? And then, wouldn't lack of xl support anyway
> mean "experimental" at best?

Yes.

Since the toolstack mainly sets up the backend, I added a note in the
'backend' section saying that unless otherwise noted, "Tech preview" and
"Supported" imply xl support for creating backends.

We might want to add in libvirt support enumeration at some point.

 -George


* Re: [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware
  2017-11-22 11:11       ` Jan Beulich
  2017-11-22 11:21         ` George Dunlap
  2017-11-22 11:45         ` George Dunlap
@ 2017-11-22 16:30         ` George Dunlap
  2 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 16:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Paul Durrant, xen-devel, Anthony Perard, Ian Jackson,
	Roger Pau Monne

On 11/22/2017 11:11 AM, Jan Beulich wrote:
>>>> On 21.11.17 at 19:02, <george.dunlap@citrix.com> wrote:
>> On 11/21/2017 08:39 AM, Jan Beulich wrote:
>>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>>> +### x86/Nested PV
>>>> +
>>>> +    Status, x86 HVM: Tech Preview
>>>> +
>>>> +This means running a Xen hypervisor inside an HVM domain,
>>>> +with support for PV L2 guests only
>>>> +(i.e., hardware virtualization extensions not provided
>>>> +to the guest).
>>>> +
>>>> +This works, but has performance limitations
>>>> +because the L1 dom0 can only access emulated L1 devices.
>>>
>>> So is this explicitly meaning Xen-on-Xen? Xen-on-KVM, for example,
>>> could be considered "nested PV", too. IOW I think it needs to be
>>> spelled out whether this means the host side of things here, the
>>> guest one, or both.
>>
>> Yes, that's true.  But I forget: Can a Xen dom0 use virtio guest
>> drivers?  I'm pretty sure Stefano tried it at some point but I don't
>> remember what the result was.
> 
> I have no idea at all.

I've changed this to "Status, x86 Xen HVM: Tech Preview", and noted
that it may work for other hypervisors but we haven't received
any concrete reports.

 -George


* Re: [PATCH 09/16] SUPPORT.md: Add ARM-specific virtual hardware
  2017-11-16 15:41   ` Julien Grall
@ 2017-11-22 16:32     ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 16:32 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Julien Grall, Jan Beulich, Ian Jackson

On 11/16/2017 03:41 PM, Julien Grall wrote:
> Hi George,
> 
> On 13/11/17 15:41, George Dunlap wrote:
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> Do we need to add anything more here?
>>
>> And do we need to include ARM ACPI for guests?
>>
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Julien Grall <julien.grall@arm.com>
>> ---
>>   SUPPORT.md | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index b95ee0ebe7..8235336c41 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -412,6 +412,16 @@ Virtual Performance Management Unit for HVM guests
>>   Disabled by default (enable with hypervisor command line option).
>>   This feature is not security supported: see
>> http://xenbits.xen.org/xsa/advisory-163.html
>>   +### ARM/Non-PCI device passthrough
>> +
>> +    Status: Supported
> 
> Sorry I didn't notice that until now. I am not comfortable saying
> "Supported" without any caveats.
> 
> As with PCI device passthrough, you at least need an IOMMU present on
> the platform. Sadly, it does not mean all DMA-capable devices on that
> platform will be protected by the IOMMU. This is also assuming the
> IOMMU does sane things.
> 
> There are potentially other problems coming up with MSI support. But I
> haven't yet fully thought about it.

Shall we make this simply, 'Not security supported' for now?

I'll also mention needing an SMMU and other caveats.

 -George


* Re: [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-mortem
  2017-11-22 11:15       ` Jan Beulich
@ 2017-11-22 17:06         ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 17:06 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

On 11/22/2017 11:15 AM, Jan Beulich wrote:
>>>> On 21.11.17 at 19:19, <george.dunlap@citrix.com> wrote:
>> xentrace I would argue for security support; I've asked customers to
>> send me xentrace data as part of analysis before.  I also know enough
>> about it that I'm reasonably confident the risk of an attack vector is
>> pretty low.
> 
> Knowing pretty little about xentrace, I will trust you here. What I
> was afraid of is that, generally, anything adding overhead can have
> unintended side effects, all the more so with the - aiui - huge
> amounts of data this may produce.

The data is fundamentally limited by the size of the in-hypervisor
buffers.  Once those are full, the trace overhead shouldn't be
significantly different than having tracing disabled.  And regardless of
how big they are, the total amount of trace data will be limited by the
throughput of the dom0-based xentrace process writing to disk.  If the
throughput of that process is (say) 50MB/s, then the "steady state" of
trace creation will be the same (one way or another).  Or, at very most,
at the rate a single processor can copy data out of the in-hypervisor
buffers.
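The steady-state bound described above can be sketched numerically; all of the figures below are illustrative assumptions for the sake of the arithmetic, not measurements:

```python
# Steady-state trace volume is bounded by the dom0 writer's throughput,
# not by the rate at which the hypervisor generates events.
# All numbers are illustrative assumptions, not measurements.
buffer_mb = 64        # assumed total in-hypervisor trace buffer size
disk_mb_per_s = 50    # assumed throughput of the dom0 xentrace writer
gen_mb_per_s = 400    # hypothetical raw event generation rate

# Once the buffers are full, drained data per second cannot exceed
# what the writer can push to disk.
steady_state = min(gen_mb_per_s, disk_mb_per_s)

# Time until the buffers first fill, at the excess generation rate:
fill_seconds = buffer_mb / (gen_mb_per_s - disk_mb_per_s)

print(steady_state)            # 50
print(round(fill_seconds, 2))  # 0.18
```

So even with a generation rate well above disk throughput, the buffers saturate quickly and the system settles at the writer's rate.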

Back when I was using xentrace heavily, I regularly hit this limit, and
never had any stability issues.

I suppose with faster disks (SSDs?  SAN on a 40Gb/s NIC?) this limit will
be higher, but I still have trouble thinking that it would be
significantly more dangerous than, say, any other kind of domain 0 logging.

I mean, there may be something I'm missing; but I've just spent 10
minutes or so trying to brainstorm ways that an attacker could cause
problems on the system, and other than "fill the buffers with junk so
that the admin can't find what she's looking for", I came up with
nothing.  Any other flaws should be no more likely than from any
other feature we expose to guests.

 -George


* Re: [PATCH 12/16] SUPPORT.md: Add Security-related features
  2017-11-21  8:52   ` Jan Beulich
@ 2017-11-22 17:13     ` George Dunlap
  2017-11-23 10:13       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-22 17:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Rich Persaud, Ian Jackson, xen-devel

On 11/21/2017 08:52 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> With the exception of driver domains, which depend on PCI passthrough,
>> and will be introduced later.
>>
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> 
> Shouldn't we also explicitly exclude tool stack disaggregation here,
> with reference to XSA-77?

Well in this document, we already consider XSM "experimental"; that
would seem to subsume the specific exclusions listed in XSA-77.

I've modified the "XSM & FLASK" as below; let me know what you think.

The other option would be to make separate entries for specific uses of
XSM (i.e., "for simple domain restriction" vs "for domain disaggregation").

 -George


### XSM & FLASK

    Status: Experimental

Compile time disabled.

Also note that using XSM
to delegate various domain control hypercalls
to particular other domains, rather than only permitting use by dom0,
is specifically excluded from security support for many hypercalls.
Please see XSA-77 for more details.



* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-21 19:55   ` Andrew Cooper
@ 2017-11-22 17:15     ` George Dunlap
  2017-11-23 10:35       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-22 17:15 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, Jan Beulich, Ian Jackson

On 11/21/2017 07:55 PM, Andrew Cooper wrote:
> On 13/11/17 15:41, George Dunlap wrote:
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
>> ---
>>  SUPPORT.md | 31 +++++++++++++++++++++++++++++++
>>  1 file changed, 31 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index 0f7426593e..3e352198ce 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis by gcov or lcov.
>>  
>>      Status: Supported
>>  
>> +### Memory Sharing
>> +
>> +    Status, x86 HVM: Tech Preview
>> +    Status, ARM: Tech Preview
>> +
>> +Allow sharing of identical pages between guests
> 
> "Tech Preview" should imply there is any kind of `xl dedup-these-domains
> $X $Y` functionality.
> 
> The only thing we appear to have is an example wrapper around the libxc
> interface, which requires the user to nominate individual frames, and
> this doesn't qualify as "functionally complete" IMO.

Right, I was getting confused with paging, which does have at least some
code in the tools/ directory.  (But perhaps should also be considered
experimental?  When was the last time anyone tried to use it?)

> There also doesn't appear to be any ARM support in the slightest. 
> mem_sharing_{memop,domctl}() are only implemented for x86.

Ack.

 -George


* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-14 13:25   ` Marek Marczykowski-Górecki
@ 2017-11-22 17:18     ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 17:18 UTC (permalink / raw)
  To: Marek Marczykowski-Górecki
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark, Rich Persaud,
	Jan Beulich, Ian Jackson, xen-devel

On 11/14/2017 01:25 PM, Marek Marczykowski-Górecki wrote:
> On Mon, Nov 13, 2017 at 03:41:24PM +0000, George Dunlap wrote:
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Rich Persaud <persaur@gmail.com>
>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> CC: Christopher Clark <christopher.w.clark@gmail.com>
>> CC: James McKenzie <james.mckenzie@bromium.com>
>> ---
>>  SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
>>  1 file changed, 32 insertions(+), 1 deletion(-)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index 3e352198ce..a8388f3dc5 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
> 
> (...)
> 
>> @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>>  Disabled by default (enable with hypervisor command line option).
>>  This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
>>  
>> +### x86/PCI Device Passthrough
>> +
>> +    Status: Supported, with caveats
>> +
>> +Only systems using IOMMUs will be supported.
> 
> s/will be/are/ ?

Ack

 -George


* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-21  8:59   ` Jan Beulich
@ 2017-11-22 17:20     ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-22 17:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark,
	Marek Marczykowski-Górecki, Rich Persaud, xen-devel,
	Ian Jackson

On 11/21/2017 08:59 AM, Jan Beulich wrote:
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### x86/PCI Device Passthrough
>> +
>> +    Status: Supported, with caveats
> 
> I think this wants to be
> 
> ### PCI Device Passthrough
> 
>     Status, x86 HVM: Supported, with caveats
>     Status, x86 PV: Supported, with caveats
> 
> to (a) allow later extending for ARM and (b) exclude PVH (assuming
> that its absence means non-existing code).

Good call.

> 
>> +Only systems using IOMMUs will be supported.
>> +
>> +Not compatible with migration, altp2m, introspection, memory sharing, or memory paging.
> 
> And PoD, iirc.

Ack

> 
> With these adjustments (or substantially similar ones)
> Acked-by: Jan Beulich <jbeulich@suse.com>

Great, thanks.


* Re: [PATCH 16/16] SUPPORT.md: Add limits RFC
  2017-11-21  9:26   ` Jan Beulich
@ 2017-11-22 18:01     ` George Dunlap
  2017-11-23 10:33       ` Jan Beulich
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-22 18:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel



> On Nov 21, 2017, at 9:26 AM, Jan Beulich <JBeulich@suse.com> wrote:
>
>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>> +### Virtual CPUs
>> +
>> +    Limit, x86 PV: 8192
>> +    Limit-security, x86 PV: 32
>> +    Limit, x86 HVM: 128
>> +    Limit-security, x86 HVM: 32
>
> Personally I consider the "Limit-security" numbers too low here, but
> I have no proof that higher numbers will work _in all cases_.

You don’t have to have conclusive proof that the numbers work in all
cases; we only need to have reasonable evidence that higher numbers are
generally reliable.  To use US legal terminology, it’s “preponderance of
evidence” (usually used in civil trials) rather than “beyond a
reasonable doubt” (used in criminal trials).

In this case, there are credible claims that using more vcpus opens
users up to a host DoS, and no evidence (or arguments) to the contrary.
 I think it would be irresponsible, under those circumstances, to tell
people that they should provide more vcpus to untrusted guests.

It wouldn’t be too hard to gather further evidence.  If someone
competent spent a few days trying to crash a larger guest and failed,
then that would be reason to think that perhaps larger numbers were safe.

>
>> +### Virtual RAM
>> +
>> +    Limit-security, x86 PV: 2047GiB
>
> I think this needs splitting for 64- and 32-bit (the latter can go up
> to 168Gb only on hosts with no memory past the 168Gb boundary,
> and up to 128Gb only on larger ones, without this being a processor
> architecture limitation).

OK.  Below is an updated section.  It might be good to specify how large
is "larger".

---
### Virtual RAM

    Limit-security, x86 PV 64-bit: 2047GiB
    Limit-security, x86 PV 32-bit: 168GiB (see below)
    Limit-security, x86 HVM: 1.5TiB
    Limit, ARM32: 16GiB
    Limit, ARM64: 1TiB

Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
other than those determined by the processor architecture.

All 32-bit PV guest memory must be under 168GiB;
this means the total memory for all 32-bit PV guests cannot exceed 168GiB.
On larger hosts, this limit is 128GiB.
---

>> +### Event Channel FIFO ABI
>> +
>> +    Limit: 131072
>
> Are we certain this is a security supportable limit? There is at least
> one loop (in get_free_port()) which can potentially have this number
> of iterations.

I have no idea.  Do you have another limit you’d like to propose instead?

> That's already leaving aside the one in the 'e' key handler. Speaking
> of which - I think we should state somewhere that there's no security
> support if any key whatsoever was sent to Xen via the console or
> the sysctl interface.

That's a good starting point.  I've added the following:

---
### Hypervisor synchronous console output (sync_console)

    Status: Supported, not security supported

Xen command-line flag to force synchronous console output.
Useful for debugging, but not suitable for production environments
due to incurred overhead.
---
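For concreteness, here is a sketch of how the flag would be passed on the Xen command line via a GRUB2 menu entry; the file names, paths, and dom0 kernel arguments below are illustrative assumptions for a typical install, not taken from the tree:

```shell
# Illustrative grub.cfg menu entry (paths and arguments assumed)
menuentry 'Xen (sync_console)' {
    # sync_console forces synchronous console output in the hypervisor
    multiboot2 /boot/xen.gz dom0_mem=4096M sync_console
    module2 /boot/vmlinuz root=/dev/sda1 console=hvc0
    module2 /boot/initrd.img
}
```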
> And more generally - surely there are items that aren't present in
> the series and no-one can realistically spot right away. What do we
> mean to imply for functionality not covered in the doc? One thing
> coming to mind here are certain command line options, an example
> being "sync_console" - the description states "not suitable for
> production environments", but I think this should be tightened to
> exclude security support.

Well specifically for sync_console, I would think given our definition
of "Supported", "not suitable for production environments" would imply
"not security supported"; but it wouldn't hurt to add an entry for it
under "Debugging, analysis, and post-mortem", so I've written one up:

---
### Hypervisor 'debug keys'

    Status: Supported, not security supported
   
These are functions triggered either from the host serial console,
or via the xl 'debug-keys' command,
which cause Xen to dump various hypervisor state to the console.
---

In general, if a feature is explicitly listed *but* some configuration
is not listed (e.g., 'x86 PV' and 'x86 HVM' are listed but not 'x86
PVH'), then that feature is not implemented for that configuration.

If a feature is not listed at all, then this document isn't saying
anything one way or another (which is no worse than you were before).

Also, I realized that I somehow failed to send out the 17th patch (!),
which primarily had XXX entries for qemu-upstream/qemu-traditional, and
host serial console support.

Shall I try to make a list of supported serial cards from
/build/hg/xen.git/xen/drivers/char/Kconfig?
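If so, one mechanical way to pull such a list would be something like the following; the sed pattern and the sample Kconfig entries are assumptions about the file's layout, not verified against the tree:

```shell
# Hypothetical excerpt of xen/drivers/char/Kconfig (symbol names assumed)
cat > /tmp/Kconfig.sample <<'EOF'
config HAS_NS16550
	bool "NS16550 UART driver"
	default y

config HAS_CADENCE_UART
	bool "Xilinx Cadence UART driver"
EOF

# Extract just the driver config symbols
sed -n 's/^config \(HAS_[A-Z0-9_]*\)$/\1/p' /tmp/Kconfig.sample
```

Run against the real Kconfig, that would print one config symbol per line as a starting point for the list.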

 -George



* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-16 15:43   ` Julien Grall
@ 2017-11-22 18:58     ` George Dunlap
  2017-11-22 19:05       ` Rich Persaud
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-22 18:58 UTC (permalink / raw)
  To: Julien Grall, xen-devel
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Christopher Clark,
	Marek Marczykowski-Górecki, Rich Persaud, Jan Beulich,
	Ian Jackson

On 11/16/2017 03:43 PM, Julien Grall wrote:
> Hi George,
> 
> On 13/11/17 15:41, George Dunlap wrote:
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>> CC: Tim Deegan <tim@xen.org>
>> CC: Rich Persaud <persaur@gmail.com>
>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> CC: Christopher Clark <christopher.w.clark@gmail.com>
>> CC: James McKenzie <james.mckenzie@bromium.com>
>> ---
>>   SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
>>   1 file changed, 32 insertions(+), 1 deletion(-)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index 3e352198ce..a8388f3dc5 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -454,9 +454,23 @@ there is currently no xl support.
>>     ## Security
>>   +### Driver Domains
>> +
>> +    Status: Supported, with caveats
>> +
>> +"Driver domains" means allowing non-Domain 0 domains
>> +with access to physical devices to act as back-ends.
>> +
>> +See the appropriate "Device Passthrough" section
>> +for more information about security support.
>> +
>>   ### Device Model Stub Domains
>>   -    Status: Supported
>> +    Status: Supported, with caveats
>> +
>> +Vulnerabilities of a device model stub domain
>> +to a hostile driver domain (either compromised or untrusted)
>> +are excluded from security support.
>>     ### KCONFIG Expert
>>   @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>>   Disabled by default (enable with hypervisor command line option).
>>   This feature is not security supported: see
>> http://xenbits.xen.org/xsa/advisory-163.html
>>   +### x86/PCI Device Passthrough
>> +
>> +    Status: Supported, with caveats
>> +
>> +Only systems using IOMMUs will be supported.
>> +
>> +Not compatible with migration, altp2m, introspection, memory sharing,
>> or memory paging.
>> +
>> +Because of hardware limitations
>> +(affecting any operating system or hypervisor),
>> +it is generally not safe to use this feature
>> +to expose a physical device to completely untrusted guests.
>> +However, this feature can still confer significant security benefit
>> +when used to remove drivers and backends from domain 0
>> +(i.e., Driver Domains).
>> +See docs/PCI-IOMMU-bugs.txt for more information.
> 
> Where can I find this file? Is it in staging?

No, I took this from a recommendation made to me, without checking.

Rich, are you going to send a patch adding this file, or did you mean to
point to a different file?

 -George


* Re: [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough
  2017-11-22 18:58     ` George Dunlap
@ 2017-11-22 19:05       ` Rich Persaud
  0 siblings, 0 replies; 90+ messages in thread
From: Rich Persaud @ 2017-11-22 19:05 UTC (permalink / raw)
  To: George Dunlap
  Cc: James McKenzie, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Julien Grall, Tim Deegan, Christopher Clark,
	Marek Marczykowski-Górecki, Jan Beulich, Ian Jackson,
	xen-devel

On Nov 22, 2017, at 13:58, George Dunlap <george.dunlap@citrix.com> wrote:
> 
>> On 11/16/2017 03:43 PM, Julien Grall wrote:
>> Hi George,
>> 
>>> On 13/11/17 15:41, George Dunlap wrote:
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>> CC: Wei Liu <wei.liu2@citrix.com>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>>> CC: Tim Deegan <tim@xen.org>
>>> CC: Rich Persaud <persaur@gmail.com>
>>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>> CC: Christopher Clark <christopher.w.clark@gmail.com>
>>> CC: James McKenzie <james.mckenzie@bromium.com>
>>> ---
>>>   SUPPORT.md | 33 ++++++++++++++++++++++++++++++++-
>>>   1 file changed, 32 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index 3e352198ce..a8388f3dc5 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -454,9 +454,23 @@ there is currently no xl support.
>>>     ## Security
>>>   +### Driver Domains
>>> +
>>> +    Status: Supported, with caveats
>>> +
>>> +"Driver domains" means allowing non-Domain 0 domains
>>> +with access to physical devices to act as back-ends.
>>> +
>>> +See the appropriate "Device Passthrough" section
>>> +for more information about security support.
>>> +
>>>   ### Device Model Stub Domains
>>>   -    Status: Supported
>>> +    Status: Supported, with caveats
>>> +
>>> +Vulnerabilities of a device model stub domain
>>> +to a hostile driver domain (either compromised or untrusted)
>>> +are excluded from security support.
>>>     ### KCONFIG Expert
>>>   @@ -522,6 +536,23 @@ Virtual Performance Management Unit for HVM guests
>>>   Disabled by default (enable with hypervisor command line option).
>>>   This feature is not security supported: see
>>> http://xenbits.xen.org/xsa/advisory-163.html
>>>   +### x86/PCI Device Passthrough
>>> +
>>> +    Status: Supported, with caveats
>>> +
>>> +Only systems using IOMMUs will be supported.
>>> +
>>> +Not compatible with migration, altp2m, introspection, memory sharing,
>>> or memory paging.
>>> +
>>> +Because of hardware limitations
>>> +(affecting any operating system or hypervisor),
>>> +it is generally not safe to use this feature
>>> +to expose a physical device to completely untrusted guests.
>>> +However, this feature can still confer significant security benefit
>>> +when used to remove drivers and backends from domain 0
>>> +(i.e., Driver Domains).
>>> +See docs/PCI-IOMMU-bugs.txt for more information.
>> 
>> Where can I find this file? Is it in staging?
> 
> No, I took this from a recommendation made to me, without checking.
> 
> Rich, are you going to send a patch adding this file, or did you mean to
> point to a different file?

Yes, I’ll send a patch to add this file.

Rich

* Re: [PATCH 12/16] SUPPORT.md: Add Security-releated features
  2017-11-22 17:13     ` George Dunlap
@ 2017-11-23 10:13       ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-23 10:13 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Rich Persaud, Ian Jackson, xen-devel

>>> On 22.11.17 at 18:13, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 08:52 AM, Jan Beulich wrote:
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> With the exception of driver domains, which depend on PCI passthrough,
>>> and will be introduced later.
>>>
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> 
>> Shouldn't we also explicitly exclude tool stack disaggregation here,
>> with reference to XSA-77?
> 
> Well in this document, we already consider XSM "experimental"; that
> would seem to subsume the specific exclusions listed in XSA-77.
> 
> I've modified the "XSM & FLASK" as below; let me know what you think.
> 
> The other option would be to make separate entries for specific uses of
> XSM (i.e., "for simple domain restriction" vs "for domain disaggregation").
> 
>  -George
> 
> 
> ### XSM & FLASK
> 
>     Status: Experimental
> 
> Compile time disabled.
> 
> Also note that using XSM
> to delegate various domain control hypercalls
> to particular other domains, rather than only permitting use by dom0,
> is specifically excluded from security support for many hypercalls.
> Please see XSA-77 for more details.

That's fine with me.

Jan



* Re: [PATCH 16/16] SUPPORT.md: Add limits RFC
  2017-11-22 18:01     ` George Dunlap
@ 2017-11-23 10:33       ` Jan Beulich
  0 siblings, 0 replies; 90+ messages in thread
From: Jan Beulich @ 2017-11-23 10:33 UTC (permalink / raw)
  To: George Dunlap
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Andrew Cooper,
	Tim Deegan, Ian Jackson, xen-devel

>>> On 22.11.17 at 19:01, <george.dunlap@citrix.com> wrote:

> 
>> On Nov 21, 2017, at 9:26 AM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>>>> On 13.11.17 at 16:41, <george.dunlap@citrix.com> wrote:
>>> +### Virtual CPUs
>>> +
>>> +    Limit, x86 PV: 8192
>>> +    Limit-security, x86 PV: 32
>>> +    Limit, x86 HVM: 128
>>> +    Limit-security, x86 HVM: 32
>>
>> Personally I consider the "Limit-security" numbers too low here, but
>> I have no proof that higher numbers will work _in all cases_.
> 
> You don’t have to have conclusive proof that the numbers work in all
> cases; we only need to have reasonable evidence that higher numbers are
> generally reliable.  To use US legal terminology, it’s “preponderance of
> evidence” (usually used in civil trials) rather than “beyond a
> reasonable doubt” (used in criminal trials).
> 
> In this case, there are credible claims that using more vcpus opens
> users up to a host DoS, and no evidence (or arguments) to the contrary.
>  I think it would be irresponsible, under those circumstances, to tell
> people that they should provide more vcpus to untrusted guests.
> 
> It wouldn’t be too hard to gather further evidence.  If someone
> competent spent a few days trying to crash a larger guest and failed,
> then that would be reason to think that perhaps larger numbers were safe.
> 
>>
>>> +### Virtual RAM
>>> +
>>> +    Limit-security, x86 PV: 2047GiB
>>
>> I think this needs splitting for 64- and 32-bit (the latter can go up
>> to 168Gb only on hosts with no memory past the 168Gb boundary,
>> and up to 128Gb only on larger ones, without this being a processor
>> architecture limitation).
> 
> OK.  Below is an updated section.  It might be good to specify how large
> is "larger".

Well, simply anything with memory extending beyond the 168Gb
boundary, i.e. ...

> ---
> ### Virtual RAM
> 
>     Limit-security, x86 PV 64-bit: 2047GiB
>     Limit-security, x86 PV 32-bit: 168GiB (see below)
>     Limit-security, x86 HVM: 1.5TiB
>     Limit, ARM32: 16GiB
>     Limit, ARM64: 1TiB
> 
> Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
> other than those determined by the processor architecture.
> 
> All 32-bit PV guest memory must be under 168GiB;
> this means the total memory for all 32-bit PV guests cannot exceed 168GiB.
> On larger hosts, this limit is 128GiB.

... "On hosts with memory extending beyond 168GiB, this limit is
128GiB."
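With that correction, the 32-bit PV rule can be restated as a tiny helper (purely illustrative; the function name, argument, and GiB units are mine, not anything in SUPPORT.md):

```python
def pv32_ram_limit_gib(host_memory_top_gib):
    """Security-supported RAM limit (in GiB) for all 32-bit PV guests
    combined, per the wording above: 168GiB, unless host memory
    extends beyond the 168GiB boundary, in which case 128GiB."""
    return 168 if host_memory_top_gib <= 168 else 128
```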

>>> +### Event Channel FIFO ABI
>>> +
>>> +    Limit: 131072
>>
>> Are we certain this is a security supportable limit? There is at least
>> one loop (in get_free_port()) which can potentially have this number
>> of iterations.
> 
> I have no idea.  Do you have another limit you’d like to propose instead?

Since I can't prove the given limit might be a problem, it's also
hard to suggest an alternative. Probably the limit is fine as is,
despite the number looking pretty big: In x86 PV page table
handling we're fine processing a single L2 in one go, which
involves twice as many iterations (otoh I'm struggling to find a
call tree where {alloc,free}_l2_table() would actually be called
with "preemptible" set to false).
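For reference, the loop in question is conceptually a linear scan over the port table; a minimal sketch (plain Python, not Xen's actual get_free_port()) of why the ABI limit bounds per-hypercall work:

```python
EVTCHN_FIFO_LIMIT = 131072  # FIFO ABI: event channel ports per domain

def get_free_port(port_table):
    """Find the first unused port by linear scan.  In the worst case
    this touches every slot, so the limit above bounds the number of
    iterations a single hypercall may perform."""
    for port, in_use in enumerate(port_table):
        if not in_use:
            return port
    return -1  # table full: no free port
```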

> Also, I realized that I somehow failed to send out the 17th patch (!),
> which primarily had XXX entries for qemu-upstream/qemu-traditional, and
> host serial console support.
> 
> Shall I try to make a list of supported serial cards from
> /build/hg/xen.git/xen/drivers/char/Kconfig?

Hmm, interesting question. For the moment I'm having a hard time
seeing how, for someone using an arbitrary serial card, problems with
it could be caused by guest behavior. Other functionality problems
(read: bugs or missing code for unknown cards/quirks) aren't
security-support relevant afaict.

Jan


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-22 17:15     ` George Dunlap
@ 2017-11-23 10:35       ` Jan Beulich
  2017-11-23 10:42         ` Olaf Hering
  0 siblings, 1 reply; 90+ messages in thread
From: Jan Beulich @ 2017-11-23 10:35 UTC (permalink / raw)
  To: Olaf Hering, George Dunlap
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Ian Jackson, xen-devel

>>> On 22.11.17 at 18:15, <george.dunlap@citrix.com> wrote:
> On 11/21/2017 07:55 PM, Andrew Cooper wrote:
>> On 13/11/17 15:41, George Dunlap wrote:
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>> CC: Wei Liu <wei.liu2@citrix.com>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Konrad Wilk <konrad.wilk@oracle.com>
>>> CC: Tim Deegan <tim@xen.org>
>>> CC: Tamas K Lengyel <tamas.lengyel@zentific.com>
>>> ---
>>>  SUPPORT.md | 31 +++++++++++++++++++++++++++++++
>>>  1 file changed, 31 insertions(+)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index 0f7426593e..3e352198ce 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis 
> by gcov or lcov.
>>>  
>>>      Status: Supported
>>>  
>>> +### Memory Sharing
>>> +
>>> +    Status, x86 HVM: Tech Preview
>>> +    Status, ARM: Tech Preview
>>> +
>>> +Allow sharing of identical pages between guests
>> 
>> "Tech Preview" should imply there is some kind of `xl dedup-these-domains
>> $X $Y` functionality.
>> 
>> The only thing we appear to have is an example wrapper around the libxc
>> interface, which requires the user to nominate individual frames, and
>> this doesn't qualify as "functionally complete" IMO.
> 
> Right, I was getting confused with paging, which does have at least some
> code in the tools/ directory.  (But perhaps should also be considered
> experimental?  When was the last time anyone tried to use it?)

Olaf, are you still playing with it every now and then?

Jan



* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 10:35       ` Jan Beulich
@ 2017-11-23 10:42         ` Olaf Hering
  2017-11-23 11:55           ` Olaf Hering
  0 siblings, 1 reply; 90+ messages in thread
From: Olaf Hering @ 2017-11-23 10:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, George Dunlap, Ian Jackson, xen-devel



On Thu, Nov 23, Jan Beulich wrote:

> Olaf, are you still playing with it every now and then?

No, I have not tried it since I last touched it.
The last thing I know was that integrating it into libxl was difficult
because it was not straightforward to describe "memory usage" properly.


Olaf


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 10:42         ` Olaf Hering
@ 2017-11-23 11:55           ` Olaf Hering
  2017-11-23 12:00             ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Olaf Hering @ 2017-11-23 11:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, George Dunlap, Ian Jackson, xen-devel



On Thu, Nov 23, Olaf Hering wrote:

> On Thu, Nov 23, Jan Beulich wrote:
> > Olaf, are you still playing with it every now and then?
> No, I have not tried it since I last touched it.

I just tried it, and it failed:

root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
xc: detail: xenpaging init
xc: detail: watching '/local/domain/7/memory/target-tot_pages'
xc: detail: Failed allocation for dom 7: 1 extents of order 0
xc: error: Failed to populate ring gfn
 (16 = Device or resource busy): Internal error


Olaf


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 11:55           ` Olaf Hering
@ 2017-11-23 12:00             ` George Dunlap
  2017-11-23 12:17               ` Andrew Cooper
  0 siblings, 1 reply; 90+ messages in thread
From: George Dunlap @ 2017-11-23 12:00 UTC (permalink / raw)
  To: Olaf Hering, Jan Beulich
  Cc: Tamas K Lengyel, Stefano Stabellini, Wei Liu, Konrad Wilk,
	Andrew Cooper, Tim Deegan, Ian Jackson, xen-devel

On 11/23/2017 11:55 AM, Olaf Hering wrote:
> On Thu, Nov 23, Olaf Hering wrote:
> 
>> On Thu, Nov 23, Jan Beulich wrote:
>>> Olaf, are you still playing with it every now and then?
>> No, I have not tried it since I last touched it.
> 
> I just tried it, and it failed:
> 
> root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
> xc: detail: xenpaging init
> xc: detail: watching '/local/domain/7/memory/target-tot_pages'
> xc: detail: Failed allocation for dom 7: 1 extents of order 0
> xc: error: Failed to populate ring gfn
>  (16 = Device or resource busy): Internal error

That looks like just a memory allocation.  Do you use autoballooning
dom0?  Maybe try ballooning dom0 down first?

 -George


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 12:00             ` George Dunlap
@ 2017-11-23 12:17               ` Andrew Cooper
  2017-11-23 12:45                 ` Olaf Hering
  0 siblings, 1 reply; 90+ messages in thread
From: Andrew Cooper @ 2017-11-23 12:17 UTC (permalink / raw)
  To: George Dunlap, Olaf Hering, Jan Beulich
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, xen-devel, Ian Jackson

On 23/11/17 12:00, George Dunlap wrote:
> On 11/23/2017 11:55 AM, Olaf Hering wrote:
>> On Thu, Nov 23, Olaf Hering wrote:
>>
>>> On Thu, Nov 23, Jan Beulich wrote:
>>>> Olaf, are you still playing with it every now and then?
>>> No, I have not tried it since I last touched it.
>> I just tried it, and it failed:
>>
>> root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
>> xc: detail: xenpaging init
>> xc: detail: watching '/local/domain/7/memory/target-tot_pages'
>> xc: detail: Failed allocation for dom 7: 1 extents of order 0
>> xc: error: Failed to populate ring gfn
>>  (16 = Device or resource busy): Internal error
> That looks like just a memory allocation.  Do you use autoballooning
> dom0?  Maybe try ballooning dom0 down first?

It's not that.  This failure comes from the ring living inside the p2m,
and has already been found with introspection.

When a domain has ballooned exactly to its allocation, it is not
possible to attach a vmevent/sharing/paging ring, because attaching the
ring requires an add_to_physmap.  In principle, the toolstack could bump
the allocation by one frame, but that's racy with the guest trying to
claim the frame itself.

Pauls work to allow access to pages not in the p2m is the precursor to
fixing this problem, after which the rings move out of the guest
(reduction in attack surface), and there is nothing the guest can do to
inhibit toolstack/privileged operations like this.
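The race described above can be sketched roughly as follows (all names here are invented for illustration; this is not Xen code):

```python
def try_attach_ring(allocation, max_pages, guest_grabs_extra_frame):
    """Rough sketch of the failure mode: attaching a vm_event/paging
    ring requires an add_to_physmap, i.e. one unused frame of
    allocation headroom below the domain's limit."""
    if allocation >= max_pages:
        # Guest ballooned exactly to its limit: no headroom, attach fails.
        return "EBUSY"
    if guest_grabs_extra_frame:
        # Toolstack bumped max_pages by one frame, but the guest raced
        # it and claimed the extra frame for itself first.
        return "EBUSY"
    return "attached"
```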

~Andrew


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 12:17               ` Andrew Cooper
@ 2017-11-23 12:45                 ` Olaf Hering
  2017-11-23 12:58                   ` Andrew Cooper
  0 siblings, 1 reply; 90+ messages in thread
From: Olaf Hering @ 2017-11-23 12:45 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, George Dunlap, Jan Beulich, xen-devel, Ian Jackson



On Thu, Nov 23, Andrew Cooper wrote:

> Its not that.  This failure comes from the ring living inside the p2m,
> and has already been found with introspection.

In my case it was just a wrong domid. Now I use 'xl domid domU' and
xenpaging does something. It seems paging out and in works still to some
degree.  But it still/again needs lots of testing and fixing.

I get errors like this, and xl dmesg has also errors:

...
xc: detail: populate_page < gfn 10100 pageslot 127
xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
xc: detail: Got event from evtchn
xc: detail: populate_page < gfn 10101 pageslot 128
xenforeignmemory: error: mmap failedxc: : Invalid argument
detail: populate_page < gfn 10102 pageslot 129
xc: detail: populate_page < gfn 10103 pageslot 130
xc: detail: populate_page < gfn 10104 pageslot 131
...

...
(XEN) vm_event.c:289:d0v0 d2v0 was not paused.
(XEN) vm_event.c:289:d0v0 d2v0 was not paused.
(XEN) vm_event.c:289:d0v2 d2v2 was not paused.
...


Olaf


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 12:45                 ` Olaf Hering
@ 2017-11-23 12:58                   ` Andrew Cooper
  2017-11-23 17:58                     ` George Dunlap
  0 siblings, 1 reply; 90+ messages in thread
From: Andrew Cooper @ 2017-11-23 12:58 UTC (permalink / raw)
  To: Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, George Dunlap, Jan Beulich, xen-devel, Ian Jackson

On 23/11/17 12:45, Olaf Hering wrote:
> On Thu, Nov 23, Andrew Cooper wrote:
>
>> Its not that.  This failure comes from the ring living inside the p2m,
>> and has already been found with introspection.
> In my case it was just a wrong domid. Now I use 'xl domid domU' and
> xenpaging does something. It seems paging out and in works still to some
> degree.  But it still/again needs lots of testing and fixing.
>
> I get errors like this, and xl dmesg has also errors:
>
> ...
> xc: detail: populate_page < gfn 10100 pageslot 127
> xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
> xc: detail: Got event from evtchn
> xc: detail: populate_page < gfn 10101 pageslot 128
> xenforeignmemory: error: mmap failedxc: : Invalid argument
> detail: populate_page < gfn 10102 pageslot 129
> xc: detail: populate_page < gfn 10103 pageslot 130
> xc: detail: populate_page < gfn 10104 pageslot 131
> ...
>
> ...
> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
> (XEN) vm_event.c:289:d0v2 d2v2 was not paused.
> ...

Hmm ok.  Either way, I think this demonstrates that the feature is not
of "Tech Preview" quality.

~Andrew


* Re: [PATCH 13/16] SUPPORT.md: Add secondary memory management features
  2017-11-23 12:58                   ` Andrew Cooper
@ 2017-11-23 17:58                     ` George Dunlap
  0 siblings, 0 replies; 90+ messages in thread
From: George Dunlap @ 2017-11-23 17:58 UTC (permalink / raw)
  To: Andrew Cooper, Olaf Hering
  Cc: Stefano Stabellini, Wei Liu, Konrad Wilk, Tamas K Lengyel,
	Tim Deegan, Jan Beulich, xen-devel, Ian Jackson

On 11/23/2017 12:58 PM, Andrew Cooper wrote:
> On 23/11/17 12:45, Olaf Hering wrote:
>> On Thu, Nov 23, Andrew Cooper wrote:
>>
>>> Its not that.  This failure comes from the ring living inside the p2m,
>>> and has already been found with introspection.
>> In my case it was just a wrong domid. Now I use 'xl domid domU' and
>> xenpaging does something. It seems paging out and in works still to some
>> degree.  But it still/again needs lots of testing and fixing.
>>
>> I get errors like this, and xl dmesg has also errors:
>>
>> ...
>> xc: detail: populate_page < gfn 10100 pageslot 127
>> xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
>> xc: detail: Got event from evtchn
>> xc: detail: populate_page < gfn 10101 pageslot 128
>> xenforeignmemory: error: mmap failedxc: : Invalid argument
>> detail: populate_page < gfn 10102 pageslot 129
>> xc: detail: populate_page < gfn 10103 pageslot 130
>> xc: detail: populate_page < gfn 10104 pageslot 131
>> ...
>>
>> ...
>> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
>> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
>> (XEN) vm_event.c:289:d0v2 d2v2 was not paused.
>> ...
> 
> Hmm ok.  Either way, I think this demonstrates that the feature is not
> of "Tech Preview" quality.

Indeed; I've changed it back to "Experimental".

Thanks all,
 -George


end of thread, other threads:[~2017-11-23 17:58 UTC | newest]

Thread overview: 90+ messages
2017-11-13 15:41 [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
2017-11-13 15:41 ` [PATCH 02/16] SUPPORT.md: Add core functionality George Dunlap
2017-11-21  8:03   ` Jan Beulich
2017-11-21 10:36     ` George Dunlap
2017-11-21 11:34       ` Jan Beulich
2017-11-13 15:41 ` [PATCH 03/16] SUPPORT.md: Add some x86 features George Dunlap
2017-11-21  8:09   ` Jan Beulich
2017-11-21 10:42     ` George Dunlap
2017-11-21 11:35       ` Jan Beulich
2017-11-21 12:24         ` George Dunlap
2017-11-21 13:00           ` Jan Beulich
2017-11-21 12:32         ` Ian Jackson
2017-11-13 15:41 ` [PATCH 04/16] SUPPORT.md: Add core ARM features George Dunlap
2017-11-21  8:11   ` Jan Beulich
2017-11-21 10:45     ` George Dunlap
2017-11-21 10:59       ` Julien Grall
2017-11-21 11:37       ` Jan Beulich
2017-11-21 12:39         ` George Dunlap
2017-11-21 13:01           ` Jan Beulich
2017-11-13 15:41 ` [PATCH 05/16] SUPPORT.md: Toolstack core George Dunlap
2017-11-13 15:41 ` [PATCH 06/16] SUPPORT.md: Add scalability features George Dunlap
2017-11-16 15:19   ` Julien Grall
2017-11-16 15:30     ` George Dunlap
2017-11-21 16:43     ` George Dunlap
2017-11-21 17:31       ` Julien Grall
2017-11-21 17:51         ` George Dunlap
2017-11-21  8:16   ` Jan Beulich
2017-11-13 15:41 ` [PATCH 07/16] SUPPORT.md: Add virtual devices common to ARM and x86 George Dunlap
2017-11-21  8:29   ` Jan Beulich
2017-11-21  9:19     ` Paul Durrant
2017-11-21 10:56     ` George Dunlap
2017-11-21 11:41       ` Jan Beulich
2017-11-21 17:20         ` George Dunlap
2017-11-22 11:05           ` Jan Beulich
2017-11-22 16:16             ` George Dunlap
2017-11-21 17:35     ` George Dunlap
2017-11-22 11:07       ` Jan Beulich
2017-11-13 15:41 ` [PATCH 08/16] SUPPORT.md: Add x86-specific virtual hardware George Dunlap
2017-11-21  8:39   ` Jan Beulich
2017-11-21 18:02     ` George Dunlap
2017-11-22 11:11       ` Jan Beulich
2017-11-22 11:21         ` George Dunlap
2017-11-22 11:45         ` George Dunlap
2017-11-22 16:30         ` George Dunlap
2017-11-13 15:41 ` [PATCH 09/16] SUPPORT.md: Add ARM-specific " George Dunlap
2017-11-16 15:41   ` Julien Grall
2017-11-22 16:32     ` George Dunlap
2017-11-16 15:41   ` Julien Grall
2017-11-13 15:41 ` [PATCH 10/16] SUPPORT.md: Add Debugging, analysis, crash post-portem George Dunlap
2017-11-21  8:48   ` Jan Beulich
2017-11-21 18:19     ` George Dunlap
2017-11-21 19:05       ` Ian Jackson
2017-11-21 19:21         ` Andrew Cooper
2017-11-22 10:51           ` George Dunlap
2017-11-22 11:15       ` Jan Beulich
2017-11-22 17:06         ` George Dunlap
2017-11-13 15:41 ` [PATCH 11/16] SUPPORT.md: Add 'easy' HA / FT features George Dunlap
2017-11-21  8:49   ` Jan Beulich
2017-11-13 15:41 ` [PATCH 12/16] SUPPORT.md: Add Security-releated features George Dunlap
2017-11-16 16:23   ` Konrad Rzeszutek Wilk
2017-11-21  8:52   ` Jan Beulich
2017-11-22 17:13     ` George Dunlap
2017-11-23 10:13       ` Jan Beulich
2017-11-13 15:41 ` [PATCH 13/16] SUPPORT.md: Add secondary memory management features George Dunlap
2017-11-21  8:54   ` Jan Beulich
2017-11-21 19:55   ` Andrew Cooper
2017-11-22 17:15     ` George Dunlap
2017-11-23 10:35       ` Jan Beulich
2017-11-23 10:42         ` Olaf Hering
2017-11-23 11:55           ` Olaf Hering
2017-11-23 12:00             ` George Dunlap
2017-11-23 12:17               ` Andrew Cooper
2017-11-23 12:45                 ` Olaf Hering
2017-11-23 12:58                   ` Andrew Cooper
2017-11-23 17:58                     ` George Dunlap
2017-11-13 15:41 ` [PATCH 14/16] SUPPORT.md: Add statement on PCI passthrough George Dunlap
2017-11-14 13:25   ` Marek Marczykowski-Górecki
2017-11-22 17:18     ` George Dunlap
2017-11-16 15:43   ` Julien Grall
2017-11-22 18:58     ` George Dunlap
2017-11-22 19:05       ` Rich Persaud
2017-11-21  8:59   ` Jan Beulich
2017-11-22 17:20     ` George Dunlap
2017-11-13 15:41 ` [PATCH 15/16] SUPPORT.md: Add statement on migration RFC George Dunlap
2017-11-13 15:41 ` [PATCH 16/16] SUPPORT.md: Add limits RFC George Dunlap
2017-11-21  9:26   ` Jan Beulich
2017-11-22 18:01     ` George Dunlap
2017-11-23 10:33       ` Jan Beulich
2017-11-13 15:43 ` [PATCH 01/16] Introduce skeleton SUPPORT.md George Dunlap
2017-11-20 17:01 ` Jan Beulich
