[DOCSDAY PATCH] xen: rework the comments for struct xen_domctl_vnuma.
From: Dario Faggioli @ 2015-02-25 15:33 UTC
To: Xen-devel
Cc: Elena Ufimtseva, Andrew Cooper, Keir Fraser, Wei Liu, Jan Beulich
In fact: vnode_to_pnode is an array, not a mask; there was a
typo in the comment about vmemrange; and there was no
indication of the data directions.
Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
---
xen/include/public/domctl.h | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index b3413a2..ad76853 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -958,27 +958,37 @@ typedef struct xen_domctl_vcpu_msrs xen_domctl_vcpu_msrs_t;
DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpu_msrs_t);
#endif
-/*
- * Use in XEN_DOMCTL_setvnumainfo to set
- * vNUMA domain topology.
- */
+/* XEN_DOMCTL_setvnumainfo: specifies a virtual NUMA topology for the guest */
struct xen_domctl_vnuma {
+ /* IN: number of vNUMA nodes to set up. Shall be greater than 0 */
uint32_t nr_vnodes;
+ /* IN: number of memory ranges to set up */
uint32_t nr_vmemranges;
+ /*
+ * IN: number of vCPUs of the domain (used as the size of the vcpu_to_vnode
+ * array declared below). Shall be equal to the domain's max_vcpus.
+ */
uint32_t nr_vcpus;
- uint32_t pad;
+ uint32_t pad; /* must be zero */
+
+ /*
+ * IN: array for specifying the distances between the vNUMA
+ * nodes. Shall have nr_vnodes*nr_vnodes elements.
+ */
XEN_GUEST_HANDLE_64(uint) vdistance;
+ /*
+ * IN: array for specifying to which vNUMA node each vCPU
+ * belongs. Shall have nr_vcpus elements.
+ */
XEN_GUEST_HANDLE_64(uint) vcpu_to_vnode;
-
/*
- * vnodes to physical NUMA nodes mask.
- * This kept on per-domain basis for
- * interested consumers, such as numa aware ballooning.
+ * IN: array for specifying on which physical NUMA node
+ * each vNUMA node is placed. Shall have nr_vnodes elements.
*/
XEN_GUEST_HANDLE_64(uint) vnode_to_pnode;
-
/*
- * memory rages for each vNUMA node
+ * IN: array for specifying the memory ranges.
+ * Shall have nr_vmemranges elements.
*/
XEN_GUEST_HANDLE_64(xen_vmemrange_t) vmemrange;
};
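To make the expected layout concrete, here is a minimal, purely
illustrative sketch of a caller building a two-node topology. The
xen_domctl_vnuma field names and sizing rules are the ones documented
above; everything else (the do_domctl() wrapper, the xen_vmemrange_t
field names, and passing plain arrays where real toolstack code would
bounce them through hypercall-safe buffers) is assumed for illustration:

    /*
     * Sketch only: describe a guest with 2 vNUMA nodes, 2 vCPUs and
     * one 1GiB memory range per vNUMA node.
     */
    static int set_vnuma_sketch(domid_t domid)
    {
        /* Row-major NxN matrix: vdistance[i * nr_vnodes + j]. */
        static unsigned int vdistance[2 * 2] = {
            10, 20,
            20, 10,
        };
        static unsigned int vcpu_to_vnode[2] = { 0, 1 };  /* vCPU -> vnode */
        static unsigned int vnode_to_pnode[2] = { 0, 1 }; /* vnode -> pnode */
        static xen_vmemrange_t vmemrange[2] = {
            { .start = 0,          .end = 1ULL << 30, .nid = 0 }, /* [0G, 1G) */
            { .start = 1ULL << 30, .end = 2ULL << 30, .nid = 1 }, /* [1G, 2G) */
        };
        struct xen_domctl domctl = {
            .cmd    = XEN_DOMCTL_setvnumainfo,
            .domain = domid,
        };

        domctl.u.vnuma.nr_vnodes     = 2; /* shall be greater than 0 */
        domctl.u.vnuma.nr_vmemranges = 2;
        domctl.u.vnuma.nr_vcpus      = 2; /* equal to the domain's max_vcpus */
        domctl.u.vnuma.pad           = 0; /* must be zero */

        set_xen_guest_handle(domctl.u.vnuma.vdistance,      vdistance);
        set_xen_guest_handle(domctl.u.vnuma.vcpu_to_vnode,  vcpu_to_vnode);
        set_xen_guest_handle(domctl.u.vnuma.vnode_to_pnode, vnode_to_pnode);
        set_xen_guest_handle(domctl.u.vnuma.vmemrange,      vmemrange);

        return do_domctl(&domctl); /* placeholder for the hypercall plumbing */
    }

The vdistance values follow the usual SLIT convention (10 for local,
larger for remote); what matters to this interface is only that the
matrix has nr_vnodes*nr_vnodes entries.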
Re: [DOCSDAY PATCH] xen: rework the comments for struct xen_domctl_vnuma.
From: Wei Liu @ 2015-02-26 12:15 UTC
To: Dario Faggioli
Cc: Elena Ufimtseva, Keir Fraser, Andrew Cooper, Xen-devel,
Jan Beulich, Wei Liu
On Wed, Feb 25, 2015 at 04:33:17PM +0100, Dario Faggioli wrote:
> In fact: vnode_to_pnode is an array, not a mask; there was a
> typo in the comment about vmemrange; and there was no
> indication of the data directions.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
> ---
> xen/include/public/domctl.h | 32 +++++++++++++++++++++-----------
> 1 file changed, 21 insertions(+), 11 deletions(-)
>
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index b3413a2..ad76853 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -958,27 +958,37 @@ typedef struct xen_domctl_vcpu_msrs xen_domctl_vcpu_msrs_t;
> DEFINE_XEN_GUEST_HANDLE(xen_domctl_vcpu_msrs_t);
> #endif
>
> -/*
> - * Use in XEN_DOMCTL_setvnumainfo to set
> - * vNUMA domain topology.
> - */
> +/* XEN_DOMCTL_setvnumainfo: specifies a virtual NUMA topology for the guest */
> struct xen_domctl_vnuma {
> + /* IN: number of vNUMA nodes to set up. Shall be greater than 0 */
> uint32_t nr_vnodes;
> + /* IN: number of memory ranges to set up */
> uint32_t nr_vmemranges;
> + /*
> + * IN: number of vCPUs of the domain (used as the size of the vcpu_to_vnode
> + * array declared below). Shall be equal to the domain's max_vcpus.
> + */
> uint32_t nr_vcpus;
> - uint32_t pad;
> + uint32_t pad; /* must be zero */
> +
> + /*
> + * IN: array for specifying the distances between the vNUMA
> + * nodes. Shall have nr_vnodes*nr_vnodes elements.
> + */
> XEN_GUEST_HANDLE_64(uint) vdistance;
> + /*
> + * IN: array for specifying to which vNUMA node each vCPU
> + * belongs. Shall have nr_vcpus elements.
> + */
> XEN_GUEST_HANDLE_64(uint) vcpu_to_vnode;
> -
> /*
> - * vnodes to physical NUMA nodes mask.
> - * This kept on per-domain basis for
> - * interested consumers, such as numa aware ballooning.
> + * IN: array for specifying on which physical NUMA node
> + * each vNUMA node is placed. Shall have nr_vnodes elements.
> */
At the very least this hunk should be backported to 4.5?
> XEN_GUEST_HANDLE_64(uint) vnode_to_pnode;
> -
> /*
> - * memory rages for each vNUMA node
> + * IN: array for specifying the memory ranges.
> + * Shall have nr_vmemranges elements.
Inconsistent fill-column setup? This line wraps at around 55
characters while the others appear to wrap at around 80. I think it
might be better to wrap at 72.
Wei.
> */
> XEN_GUEST_HANDLE_64(xen_vmemrange_t) vmemrange;
> };