* [PATCH] libxl: create PVH guests with max memory assigned
@ 2014-07-17 11:02 Roger Pau Monne
  2014-07-18 16:49 ` Konrad Rzeszutek Wilk
                   ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Roger Pau Monne @ 2014-07-17 11:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Ian Campbell, Roger Pau Monne

Since PVH guests are very similar to HVM guests in terms of memory
management, start the guest with the maximum memory assigned and let
it balloon down.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/libxl/libxl_dom.c |   13 ++++++++++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 661999c..eada87d 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -233,6 +233,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_build_info *const info = &d_config->b_info;
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *xs_domid, *con_domid;
+    unsigned long mem;
     int rc;
 
     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
@@ -263,8 +264,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
 
-    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
-        LIBXL_MAXMEM_CONSTANT) < 0) {
+    if (info->type == LIBXL_DOMAIN_TYPE_PV)
+        mem = libxl_defbool_val(d_config->c_info.pvh) ? info->max_memkb :
+                                                        info->target_memkb;
+    else
+        mem = info->target_memkb;
+    if (xc_domain_setmaxmem(ctx->xch, domid, mem + LIBXL_MAXMEM_CONSTANT) < 0) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
         return ERROR_FAIL;
     }
@@ -370,6 +375,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     struct xc_dom_image *dom;
+    unsigned long mem;
     int ret;
     int flags = 0;
 
@@ -440,7 +446,8 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
         LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
         goto out;
     }
-    if ( (ret = xc_dom_mem_init(dom, info->target_memkb / 1024)) != 0 ) {
+    mem = state->pvh_enabled ? info->max_memkb : info->target_memkb;
+    if ( (ret = xc_dom_mem_init(dom, mem / 1024)) != 0 ) {
         LOGE(ERROR, "xc_dom_mem_init failed");
         goto out;
     }
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-17 11:02 [PATCH] libxl: create PVH guests with max memory assigned Roger Pau Monne
@ 2014-07-18 16:49 ` Konrad Rzeszutek Wilk
  2014-07-18 17:00   ` Roger Pau Monné
  2014-07-18 17:19   ` Olaf Hering
  2014-08-01 15:34 ` Roger Pau Monné
  2014-08-05  8:55 ` Ian Campbell
  2 siblings, 2 replies; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-07-18 16:49 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Ian Jackson, Ian Campbell

On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
> Since PVH guests are very similar to HVM guests in terms of memory
> management, start the guest with the maximum memory assigned and let
> it balloon down.

There is something odd about your email. When I look at it in
mutt I see the patch, but if I save it and try to do git am
it complains.

Looking at the file I see:

     U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
     b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
     bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
     Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
     bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
     aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot

.. and so.
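The blob is plain base64 and decodes straight back to the commit message. A quick standalone check in Python (illustration only, not part of the patch), using the exact lines quoted above:

```python
import base64

# The base64 lines quoted above, concatenated with whitespace removed.
blob = (
    "U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg"
    "b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l"
    "bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv"
    "Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh"
    "bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj"
    "aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot"
)

# Defensive padding in case the quoted excerpt was truncated mid-group.
blob += "=" * (-len(blob) % 4)

text = base64.b64decode(blob).decode("utf-8")
print(text)
```

The decoded text is the commit message and Signed-off-by/Cc block of the patch at the top of the thread.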

Have you changed something recently in your git send-email setup?

Thanks.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  tools/libxl/libxl_dom.c |   13 ++++++++++---
>  1 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 661999c..eada87d 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -233,6 +233,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_domain_build_info *const info = &d_config->b_info;
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      char *xs_domid, *con_domid;
> +    unsigned long mem;
>      int rc;
>  
>      if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
> @@ -263,8 +264,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
>  
> -    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> -        LIBXL_MAXMEM_CONSTANT) < 0) {
> +    if (info->type == LIBXL_DOMAIN_TYPE_PV)
> +        mem = libxl_defbool_val(d_config->c_info.pvh) ? info->max_memkb :
> +                                                        info->target_memkb;
> +    else
> +        mem = info->target_memkb;
> +    if (xc_domain_setmaxmem(ctx->xch, domid, mem + LIBXL_MAXMEM_CONSTANT) < 0) {
>          LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
>          return ERROR_FAIL;
>      }
> @@ -370,6 +375,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
>  {
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      struct xc_dom_image *dom;
> +    unsigned long mem;
>      int ret;
>      int flags = 0;
>  
> @@ -440,7 +446,8 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
>          LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
>          goto out;
>      }
> -    if ( (ret = xc_dom_mem_init(dom, info->target_memkb / 1024)) != 0 ) {
> +    mem = state->pvh_enabled ? info->max_memkb : info->target_memkb;
> +    if ( (ret = xc_dom_mem_init(dom, mem / 1024)) != 0 ) {
>          LOGE(ERROR, "xc_dom_mem_init failed");
>          goto out;
>      }
> -- 
> 1.7.7.5 (Apple Git-26)
> 
> 


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 16:49 ` Konrad Rzeszutek Wilk
@ 2014-07-18 17:00   ` Roger Pau Monné
  2014-07-18 17:11     ` Andrew Cooper
  2014-07-18 20:53     ` Konrad Rzeszutek Wilk
  2014-07-18 17:19   ` Olaf Hering
  1 sibling, 2 replies; 29+ messages in thread
From: Roger Pau Monné @ 2014-07-18 17:00 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel, Ian Jackson, Ian Campbell

[-- Attachment #1: Type: text/plain, Size: 1231 bytes --]

On 18/07/14 18:49, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
>> Since PVH guests are very similar to HVM guests in terms of memory
>> management, start the guest with the maximum memory assigned and let
>> it balloon down.
> 
> There is something odd about your email. When I look at in
> mutt I see the patch, but if I save it and try do git am
> it complains.
> 
> Looking at the file I see:
> 
>      U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
>      b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
>      bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
>      Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
>      bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
>      aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot
> 
> .. and so.
> 
> Have you changed something recently in your git sendmail setup?

No, I haven't touched my git config in more than two years, probably.
Maybe the Citrix SMTP server is mangling it?

Anyway, I'm attaching a copy; I hope you will be able to fetch it.

Roger.


[-- Attachment #2: 0001-libxl-create-PVH-guests-with-max-memory-assigned.patch --]
[-- Type: text/plain, Size: 2733 bytes --]

>From 59227b7a8fd3b891259a8a2b3154e17c3b5651d7 Mon Sep 17 00:00:00 2001
From: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 8 Jul 2014 10:35:20 +0200
Subject: [PATCH] libxl: create PVH guests with max memory assigned
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Since PVH guests are very similar to HVM guests in terms of memory
management, start the guest with the maximum memory assigned and let
it balloon down.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 tools/libxl/libxl_dom.c |   13 ++++++++++---
 1 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 661999c..eada87d 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -233,6 +233,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_build_info *const info = &d_config->b_info;
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *xs_domid, *con_domid;
+    unsigned long mem;
     int rc;

     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
@@ -263,8 +264,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
     libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);

-    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
-        LIBXL_MAXMEM_CONSTANT) < 0) {
+    if (info->type == LIBXL_DOMAIN_TYPE_PV)
+        mem = libxl_defbool_val(d_config->c_info.pvh) ? info->max_memkb :
+                                                        info->target_memkb;
+    else
+        mem = info->target_memkb;
+    if (xc_domain_setmaxmem(ctx->xch, domid, mem + LIBXL_MAXMEM_CONSTANT) < 0) {
         LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
         return ERROR_FAIL;
     }
@@ -370,6 +375,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     struct xc_dom_image *dom;
+    unsigned long mem;
     int ret;
     int flags = 0;

@@ -440,7 +446,8 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
         LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
         goto out;
     }
-    if ( (ret = xc_dom_mem_init(dom, info->target_memkb / 1024)) != 0 ) {
+    mem = state->pvh_enabled ? info->max_memkb : info->target_memkb;
+    if ( (ret = xc_dom_mem_init(dom, mem / 1024)) != 0 ) {
         LOGE(ERROR, "xc_dom_mem_init failed");
         goto out;
     }
--
1.7.7.5 (Apple Git-26)




* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 17:00   ` Roger Pau Monné
@ 2014-07-18 17:11     ` Andrew Cooper
  2014-07-21 10:16       ` Ian Campbell
  2014-07-18 20:53     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 29+ messages in thread
From: Andrew Cooper @ 2014-07-18 17:11 UTC (permalink / raw)
  To: Roger Pau Monné, Konrad Rzeszutek Wilk
  Cc: xen-devel, Ian Jackson, Ian Campbell

On 18/07/14 18:00, Roger Pau Monné wrote:
> On 18/07/14 18:49, Konrad Rzeszutek Wilk wrote:
>> On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
>>> Since PVH guests are very similar to HVM guests in terms of memory
>>> management, start the guest with the maximum memory assigned and let
>>> it balloon down.
>> There is something odd about your email. When I look at in
>> mutt I see the patch, but if I save it and try do git am
>> it complains.
>>
>> Looking at the file I see:
>>
>>      U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
>>      b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
>>      bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
>>      Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
>>      bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
>>      aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot
>>
>> .. and so.
>>
>> Have you changed something recently in your git sendmail setup?
> No, I haven't touched my git config since more than two years probably.
> Maybe Citrix smtp server is mangling it?
>
> Anyway, I'm attaching a copy, I hope you will be able to fetch it.
>
> Roger.

Message-ID: <1405594942-20760-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

Something decided to base64 encode it...
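Since the headers declare `Content-Transfer-Encoding: base64`, the body can also be recovered programmatically: Python's stdlib email module honours that header when asked to decode the payload. A minimal sketch using a made-up miniature message (not Roger's actual mail):

```python
import email

# A made-up miniature of the problematic mail: base64 body declared in the
# headers, exactly the combination noted above.
raw = """\
From: someone@example.com
Subject: [PATCH] example
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

U2luY2UgUFZIIGd1ZXN0cw==
"""

msg = email.message_from_string(raw)
# get_payload(decode=True) applies the Content-Transfer-Encoding and
# returns the raw bytes of the body.
body = msg.get_payload(decode=True).decode(msg.get_content_charset() or "utf-8")
print(body)
```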

~Andrew


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 16:49 ` Konrad Rzeszutek Wilk
  2014-07-18 17:00   ` Roger Pau Monné
@ 2014-07-18 17:19   ` Olaf Hering
  2014-07-18 19:33     ` Konrad Rzeszutek Wilk
  1 sibling, 1 reply; 29+ messages in thread
From: Olaf Hering @ 2014-07-18 17:19 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: xen-devel, Ian Jackson, Ian Campbell, Roger Pau Monne

On Fri, Jul 18, Konrad Rzeszutek Wilk wrote:

> There is something odd about your email. When I look at in
> mutt I see the patch, but if I save it and try do git am
> it complains.

Try "ESC s" (decode-save, or something like that).

Olaf


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 17:19   ` Olaf Hering
@ 2014-07-18 19:33     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-07-18 19:33 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel, Ian Jackson, Ian Campbell, Roger Pau Monne

On Fri, Jul 18, 2014 at 07:19:22PM +0200, Olaf Hering wrote:
> On Fri, Jul 18, Konrad Rzeszutek Wilk wrote:
> 
> > There is something odd about your email. When I look at in
> > mutt I see the patch, but if I save it and try do git am
> > it complains.
> 
> Try "ESC s" for decoded-safe (or something like that).

That is awesome. Thanks!
> 
> Olaf


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 17:00   ` Roger Pau Monné
  2014-07-18 17:11     ` Andrew Cooper
@ 2014-07-18 20:53     ` Konrad Rzeszutek Wilk
  2014-08-05  8:57       ` Ian Campbell
  1 sibling, 1 reply; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-07-18 20:53 UTC (permalink / raw)
  To: Roger Pau Monné, boris.ostrovsky
  Cc: xen-devel, Ian Jackson, Ian Campbell

On Fri, Jul 18, 2014 at 07:00:42PM +0200, Roger Pau Monné wrote:
> On 18/07/14 18:49, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
> >> Since PVH guests are very similar to HVM guests in terms of memory
> >> management, start the guest with the maximum memory assigned and let
> >> it balloon down.
> > 
> > There is something odd about your email. When I look at in
> > mutt I see the patch, but if I save it and try do git am
> > it complains.
> > 
> > Looking at the file I see:
> > 
> >      U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
> >      b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
> >      bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
> >      Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
> >      bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
> >      aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot
> > 
> > .. and so.
> > 
> > Have you changed something recently in your git sendmail setup?
> 
> No, I haven't touched my git config since more than two years probably.
> Maybe Citrix smtp server is mangling it?
> 
> Anyway, I'm attaching a copy, I hope you will be able to fetch it.

And I can do better:

Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

It makes PVH guests with maxmem != memory boot.

> 
> Roger.
> 

> >From 59227b7a8fd3b891259a8a2b3154e17c3b5651d7 Mon Sep 17 00:00:00 2001
> From: Roger Pau Monne <roger.pau@citrix.com>
> Date: Tue, 8 Jul 2014 10:35:20 +0200
> Subject: [PATCH] libxl: create PVH guests with max memory assigned
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
> 
> Since PVH guests are very similar to HVM guests in terms of memory
> management, start the guest with the maximum memory assigned and let
> it balloon down.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  tools/libxl/libxl_dom.c |   13 ++++++++++---
>  1 files changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index 661999c..eada87d 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -233,6 +233,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_domain_build_info *const info = &d_config->b_info;
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      char *xs_domid, *con_domid;
> +    unsigned long mem;
>      int rc;
>  
>      if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
> @@ -263,8 +264,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
>      libxl_domain_set_nodeaffinity(ctx, domid, &info->nodemap);
>      libxl_set_vcpuaffinity_all(ctx, domid, info->max_vcpus, &info->cpumap);
>  
> -    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb +
> -        LIBXL_MAXMEM_CONSTANT) < 0) {
> +    if (info->type == LIBXL_DOMAIN_TYPE_PV)
> +        mem = libxl_defbool_val(d_config->c_info.pvh) ? info->max_memkb :
> +                                                        info->target_memkb;
> +    else
> +        mem = info->target_memkb;
> +    if (xc_domain_setmaxmem(ctx->xch, domid, mem + LIBXL_MAXMEM_CONSTANT) < 0) {
>          LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "Couldn't set max memory");
>          return ERROR_FAIL;
>      }
> @@ -370,6 +375,7 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
>  {
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      struct xc_dom_image *dom;
> +    unsigned long mem;
>      int ret;
>      int flags = 0;
>  
> @@ -440,7 +446,8 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
>          LOGE(ERROR, "libxl__arch_domain_init_hw_description failed");
>          goto out;
>      }
> -    if ( (ret = xc_dom_mem_init(dom, info->target_memkb / 1024)) != 0 ) {
> +    mem = state->pvh_enabled ? info->max_memkb : info->target_memkb;
> +    if ( (ret = xc_dom_mem_init(dom, mem / 1024)) != 0 ) {
>          LOGE(ERROR, "xc_dom_mem_init failed");
>          goto out;
>      }
> -- 
> 1.7.7.5 (Apple Git-26)
> 


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 17:11     ` Andrew Cooper
@ 2014-07-21 10:16       ` Ian Campbell
  0 siblings, 0 replies; 29+ messages in thread
From: Ian Campbell @ 2014-07-21 10:16 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Ian Jackson, Roger Pau Monné

On Fri, 2014-07-18 at 18:11 +0100, Andrew Cooper wrote:
> On 18/07/14 18:00, Roger Pau Monné wrote:
> > On 18/07/14 18:49, Konrad Rzeszutek Wilk wrote:
> >> On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
> >>> Since PVH guests are very similar to HVM guests in terms of memory
> >>> management, start the guest with the maximum memory assigned and let
> >>> it balloon down.
> >> There is something odd about your email. When I look at in
> >> mutt I see the patch, but if I save it and try do git am
> >> it complains.
> >>
> >> Looking at the file I see:
> >>
> >>      U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
> >>      b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
> >>      bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
> >>      Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
> >>      bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
> >>      aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot
> >>
> >> .. and so.
> >>
> >> Have you changed something recently in your git sendmail setup?
> > No, I haven't touched my git config since more than two years probably.
> > Maybe Citrix smtp server is mangling it?
> >
> > Anyway, I'm attaching a copy, I hope you will be able to fetch it.
> >
> > Roger.
> 
> Message-ID: <1405594942-20760-1-git-send-email-roger.pau@citrix.com>
> X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
> Content-Type: text/plain; charset="utf-8"
> Content-Transfer-Encoding: base64
> 
> Something decided to base64 encode it...

Exchange is prone to doing this sort of thing IME...

Ian.




* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-17 11:02 [PATCH] libxl: create PVH guests with max memory assigned Roger Pau Monne
  2014-07-18 16:49 ` Konrad Rzeszutek Wilk
@ 2014-08-01 15:34 ` Roger Pau Monné
  2014-08-04 18:44   ` Konrad Rzeszutek Wilk
  2014-08-05  8:55 ` Ian Campbell
  2 siblings, 1 reply; 29+ messages in thread
From: Roger Pau Monné @ 2014-08-01 15:34 UTC (permalink / raw)
  To: xen-devel; +Cc: Ian Jackson, Ian Campbell

On 17/07/14 13:02, Roger Pau Monne wrote:
> Since PVH guests are very similar to HVM guests in terms of memory
> management, start the guest with the maximum memory assigned and let
> it balloon down.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Mukesh Rathor <mukesh.rathor@oracle.com>

Ping?




* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-01 15:34 ` Roger Pau Monné
@ 2014-08-04 18:44   ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-04 18:44 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Ian Jackson, Ian Campbell

On Fri, Aug 01, 2014 at 05:34:42PM +0200, Roger Pau Monné wrote:
> On 17/07/14 13:02, Roger Pau Monne wrote:
> > Since PVH guests are very similar to HVM guests in terms of memory
> > management, start the guest with the maximum memory assigned and let
> > it balloon down.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

And Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> > Cc: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> Ping?

Indeed - could it please go in?

> 
> 


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-17 11:02 [PATCH] libxl: create PVH guests with max memory assigned Roger Pau Monne
  2014-07-18 16:49 ` Konrad Rzeszutek Wilk
  2014-08-01 15:34 ` Roger Pau Monné
@ 2014-08-05  8:55 ` Ian Campbell
  2014-08-05  9:34   ` David Vrabel
  2 siblings, 1 reply; 29+ messages in thread
From: Ian Campbell @ 2014-08-05  8:55 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: xen-devel, Ian Jackson

On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:

Sorry for the delay replying, this somehow slipped through my net.

> Since PVH guests are very similar to HVM guests in terms of memory
> management, start the guest with the maximum memory assigned and let
> it balloon down.

Both before and after this patch an HVM guest would be launched with
target_memkb though, not max_memkb (presumably relying on PoD), so the
comparison made in the commit log doesn't tally for me, given that you are
making PVH (and only PVH) use max_memkb.

This patch seems to make it impossible to boot a PVH guest
pre-ballooned. It only appears to "work" because I presume you actually
have enough RAM to satisfy maxmem for a short time, but that defeats the
purpose.

Either a PVH guest is similar enough to an HVM guest in this area to
make use of PoD for early ballooning *or* it is similar enough to a PV
guest that it can use the PV kernel entry point to get in early enough
to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
stuff, I presume) before the kernels normal init sequence can start
mucking with that memory.

Either way this patch is wrong and is papering over an actual issue.

Ian.
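As a toy illustration of the trade-off described above (made-up numbers and a made-up helper, not libxl code): with a PoD-style boot the guest-physical space spans maxmem while only the target is populated, whereas booting at maxmem allocates everything up front.

```python
def host_kb_at_boot(target_memkb, max_memkb, pod):
    """Toy model of host memory actually consumed when the guest starts.

    With populate-on-demand (PoD) the guest-physical address space spans
    max_memkb but only target_memkb is backed by host pages; booting at
    maxmem allocates the full max_memkb up front.
    """
    return target_memkb if pod else max_memkb

# A guest configured as memory=512M, maxmem=4G (values in KiB):
target, maxmem = 512 * 1024, 4 * 1024 * 1024

boot_at_maxmem = host_kb_at_boot(target, maxmem, pod=False)
pod_boot = host_kb_at_boot(target, maxmem, pod=True)

# Booting at maxmem needs the whole 4 GiB of host RAM at start, which is
# what defeats the purpose of pre-ballooning.
print(boot_at_maxmem - pod_boot, "KiB extra host RAM required at boot")
```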


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-07-18 20:53     ` Konrad Rzeszutek Wilk
@ 2014-08-05  8:57       ` Ian Campbell
  0 siblings, 0 replies; 29+ messages in thread
From: Ian Campbell @ 2014-08-05  8:57 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: xen-devel, boris.ostrovsky, Ian Jackson, Roger Pau Monné

On Fri, 2014-07-18 at 16:53 -0400, Konrad Rzeszutek Wilk wrote:
> On Fri, Jul 18, 2014 at 07:00:42PM +0200, Roger Pau Monné wrote:
> > On 18/07/14 18:49, Konrad Rzeszutek Wilk wrote:
> > > On Thu, Jul 17, 2014 at 01:02:22PM +0200, Roger Pau Monne wrote:
> > >> Since PVH guests are very similar to HVM guests in terms of memory
> > >> management, start the guest with the maximum memory assigned and let
> > >> it balloon down.
> > > 
> > > There is something odd about your email. When I look at in
> > > mutt I see the patch, but if I save it and try do git am
> > > it complains.
> > > 
> > > Looking at the file I see:
> > > 
> > >      U2luY2UgUFZIIGd1ZXN0cyBhcmUgdmVyeSBzaW1pbGFyIHRvIEhWTSBndWVzdHMgaW4gdGVybXMg
> > >      b2YgbWVtb3J5Cm1hbmFnZW1lbnQsIHN0YXJ0IHRoZSBndWVzdCB3aXRoIHRoZSBtYXhpbXVtIG1l
> > >      bW9yeSBhc3NpZ25lZCBhbmQgbGV0Cml0IGJhbGxvb24gZG93bi4KClNpZ25lZC1vZmYtYnk6IFJv
> > >      Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpDYzogSWFuIEphY2tzb24gPGlh
> > >      bi5qYWNrc29uQGV1LmNpdHJpeC5jb20+CkNjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBj
> > >      aXRyaXguY29tPgpDYzogTXVrZXNoIFJhdGhvciA8bXVrZXNoLnJhdGhvckBvcmFjbGUuY29tPgot
> > > 
> > > .. and so.
> > > 
> > > Have you changed something recently in your git sendmail setup?
> > 
> > No, I haven't touched my git config since more than two years probably.
> > Maybe Citrix smtp server is mangling it?
> > 
> > Anyway, I'm attaching a copy, I hope you will be able to fetch it.
> 
> And I can do better:
> 
> Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> 
> It makes PVH guests with maxmem != memory boot

No, it doesn't.

It makes guests boot with maxmem regardless of the memory setting being
lower, essentially ignoring the latter completely. As I just explained
in my other reply, this is the wrong approach to this problem.

Ian.




* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05  8:55 ` Ian Campbell
@ 2014-08-05  9:34   ` David Vrabel
  2014-08-05 11:08     ` Roger Pau Monné
  0 siblings, 1 reply; 29+ messages in thread
From: David Vrabel @ 2014-08-05  9:34 UTC (permalink / raw)
  To: Ian Campbell, Roger Pau Monne; +Cc: xen-devel, Ian Jackson

On 05/08/14 09:55, Ian Campbell wrote:
> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
> 
> Sorry for the delay replying, this somehow slipped through my net.
> 
>> Since PVH guests are very similar to HVM guests in terms of memory
>> management, start the guest with the maximum memory assigned and let
>> it balloon down.
> 
> Both before and after this patch an HVM guest would be launched with
> target_memkb though, not max_memkb (presumably relying on PoD), so the
> comparison made in the commit log doesn't tally to me given that you are
> making PVH (and only PVH) use max_memkb.
> 
> This patch seems to make it impossible to boot a PVH guest
> pre-ballooned. It only appears to "work" because I presume you actually
> have enough RAM to satisfy maxmem for a short time, but that defeats the
> purpose.
> 
> Either a PVH guest is similar enough to an HVM guest in this area to
> make use of PoD for early ballooning *or* it is similar enough to a PV
> guest that it can use the PV kernel entry point to get in early enough
> to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
> stuff, I presume) before the kernels normal init sequence can start
> mucking with that memory.

A decision on which needs to be made and /documented/.  If the PV-like
approach is taken, I won't be accepting any Linux patches without such
documentation.

I now regret accepting the PVH support in Linux without a clear
specification of what PVH actually is.

David


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05  9:34   ` David Vrabel
@ 2014-08-05 11:08     ` Roger Pau Monné
  2014-08-05 14:06       ` Ian Campbell
                         ` (2 more replies)
  0 siblings, 3 replies; 29+ messages in thread
From: Roger Pau Monné @ 2014-08-05 11:08 UTC (permalink / raw)
  To: David Vrabel, Ian Campbell; +Cc: xen-devel, Ian Jackson

On 05/08/14 11:34, David Vrabel wrote:
> On 05/08/14 09:55, Ian Campbell wrote:
>> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
>>
>> Sorry for the delay replying, this somehow slipped through my net.
>>
>>> Since PVH guests are very similar to HVM guests in terms of memory
>>> management, start the guest with the maximum memory assigned and let
>>> it balloon down.
>>
>> Both before and after this patch an HVM guest would be launched with
>> target_memkb though, not max_memkb (presumably relying on PoD), so the
>> comparison made in the commit log doesn't tally to me given that you are
>> making PVH (and only PVH) use max_memkb.
>>
>> This patch seems to make it impossible to boot a PVH guest
>> pre-ballooned. It only appears to "work" because I presume you actually
>> have enough RAM to satisfy maxmem for a short time, but that defeats the
>> purpose.
>>
>> Either a PVH guest is similar enough to an HVM guest in this area to
>> make use of PoD for early ballooning *or* it is similar enough to a PV
>> guest that it can use the PV kernel entry point to get in early enough
>> to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
>> stuff, I presume) before the kernel's normal init sequence can start
>> mucking with that memory.

Yes, now that I look at it again I realize the patch is completely wrong.

> A decision on which needs to be made and /documented/.  If the PV-like
> approach is taken, I won't be accepting any Linux patches without such
> documentation.
> 
> I now regret accepting the PVH support in Linux without a clear
> specification of what PVH actually is.

I've always thought of PVH as PVHVM without a device model, so IMHO it
would make more sense to use PoD rather than the PV ballooning approach,
but I would like to hear opinions from others before taking a stab at
implementing it.

Roger.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 11:08     ` Roger Pau Monné
@ 2014-08-05 14:06       ` Ian Campbell
  2014-08-05 14:10       ` George Dunlap
  2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 29+ messages in thread
From: Ian Campbell @ 2014-08-05 14:06 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Ian Jackson, David Vrabel

On Tue, 2014-08-05 at 13:08 +0200, Roger Pau Monné wrote:
> On 05/08/14 11:34, David Vrabel wrote:
> > On 05/08/14 09:55, Ian Campbell wrote:
> >> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
> >>
> >> Sorry for the delay replying, this somehow slipped through my net.
> >>
> >>> Since PVH guests are very similar to HVM guests in terms of memory
> >>> management, start the guest with the maximum memory assigned and let
> >>> it balloon down.
> >>
> >> Both before and after this patch an HVM guest would be launched with
> >> target_memkb though, not max_memkb (presumably relying on PoD), so the
> >> comparison made in the commit log doesn't tally to me given that you are
> >> making PVH (and only PVH) use max_memkb.
> >>
> >> This patch seems to make it impossible to boot a PVH guest
> >> pre-ballooned. It only appears to "work" because I presume you actually
> >> have enough RAM to satisfy maxmem for a short time, but that defeats the
> >> purpose.
> >>
> >> Either a PVH guest is similar enough to an HVM guest in this area to
> >> make use of PoD for early ballooning *or* it is similar enough to a PV
> >> guest that it can use the PV kernel entry point to get in early enough
> >> to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
> >> stuff, I presume) before the kernel's normal init sequence can start
> >> mucking with that memory.
> 
> Yes, now that I look at it again I realize the patch is completely wrong.
> 
> > A decision on which needs to be made and /documented/.  If the PV-like
> > approach is taken, I won't be accepting any Linux patches without such
> > documentation.
> > 
> > I now regret accepting the PVH support in Linux without a clear
> > specification of what PVH actually is.
> 
> I've always thought of PVH as PVHVM without a device model, so IMHO it
> would make more sense to use PoD rather than the PV ballooning approach,
> but I would like to hear opinions from others before taking a stab at
> implementing it.

The main argument against PoD is probably that it is somewhat
statistical in nature (ensuring you have enough PoD cache to make it to
balloon_init() most of the time). Doing things the PV way is likely to
be more deterministic.

That's not to say that other factors (e.g. simplicity) wouldn't lead to
PoD being the correct choice...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 11:08     ` Roger Pau Monné
  2014-08-05 14:06       ` Ian Campbell
@ 2014-08-05 14:10       ` George Dunlap
  2014-08-05 21:22         ` Mukesh Rathor
  2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
  2 siblings, 1 reply; 29+ messages in thread
From: George Dunlap @ 2014-08-05 14:10 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: xen-devel, Ian Jackson, David Vrabel, Ian Campbell

On Tue, Aug 5, 2014 at 7:08 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 05/08/14 11:34, David Vrabel wrote:
>> On 05/08/14 09:55, Ian Campbell wrote:
>>> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
>>>
>>> Sorry for the delay replying, this somehow slipped through my net.
>>>
>>>> Since PVH guests are very similar to HVM guests in terms of memory
>>>> management, start the guest with the maximum memory assigned and let
>>>> it balloon down.
>>>
>>> Both before and after this patch an HVM guest would be launched with
>>> target_memkb though, not max_memkb (presumably relying on PoD), so the
>>> comparison made in the commit log doesn't tally to me given that you are
>>> making PVH (and only PVH) use max_memkb.
>>>
>>> This patch seems to make it impossible to boot a PVH guest
>>> pre-ballooned. It only appears to "work" because I presume you actually
>>> have enough RAM to satisfy maxmem for a short time, but that defeats the
>>> purpose.
>>>
>>> Either a PVH guest is similar enough to an HVM guest in this area to
>>> make use of PoD for early ballooning *or* it is similar enough to a PV
>>> guest that it can use the PV kernel entry point to get in early enough
>>> to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
>>> stuff, I presume) before the kernel's normal init sequence can start
>>> mucking with that memory.
>
> Yes, now that I look at it again I realize the patch is completely wrong.
>
>> A decision on which needs to be made and /documented/.  If the PV-like
>> approach is taken, I won't be accepting any Linux patches without such
>> documentation.
>>
>> I now regret accepting the PVH support in Linux without a clear
>> specification of what PVH actually is.
>
> I've always thought of PVH as PVHVM without a device model, so IMHO it
> would make more sense to use PoD rather than the PV ballooning approach,
> but I would like to hear opinions from others before taking a stab at
> implementing it.

I think the original idea was to have PVH be PV with the addition of
an "HVM container" -- just a minimal bit of HVM that would allow us to
get rid of a lot of the unnecessary PV stuff.

But as it turned out, the "minimal HVM container" was 70% of the size
of the fully-virtualized HVM container.  Rather than have thousands of
lines of duplicate code, we decided to merge the HVM and PVH code
paths.  At which point, it makes more sense to just go the other
direction, and make PVH basically PVHVM without a device model.

I'm not sure how PV deals with memory != maxmem at boot: it seems like
PVH could do it the same way; or it could use PoD.  But just setting
memory=maxmem is certainly the wrong approach.

 -George


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 11:08     ` Roger Pau Monné
  2014-08-05 14:06       ` Ian Campbell
  2014-08-05 14:10       ` George Dunlap
@ 2014-08-05 14:18       ` Konrad Rzeszutek Wilk
  2014-08-05 14:36         ` Jan Beulich
                           ` (2 more replies)
  2 siblings, 3 replies; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-05 14:18 UTC (permalink / raw)
  To: Roger Pau Monné, george.dunlap, tim, Mukesh Rathor, jbeulich
  Cc: xen-devel, Ian Jackson, David Vrabel, Ian Campbell

On Tue, Aug 05, 2014 at 01:08:22PM +0200, Roger Pau Monné wrote:
> On 05/08/14 11:34, David Vrabel wrote:
> > On 05/08/14 09:55, Ian Campbell wrote:
> >> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
> >>
> >> Sorry for the delay replying, this somehow slipped through my net.
> >>
> >>> Since PVH guests are very similar to HVM guests in terms of memory
> >>> management, start the guest with the maximum memory assigned and let
> >>> it balloon down.
> >>
> >> Both before and after this patch an HVM guest would be launched with
> >> target_memkb though, not max_memkb (presumably relying on PoD), so the
> >> comparison made in the commit log doesn't tally to me given that you are
> >> making PVH (and only PVH) use max_memkb.
> >>
> >> This patch seems to make it impossible to boot a PVH guest
> >> pre-ballooned. It only appears to "work" because I presume you actually
> >> have enough RAM to satisfy maxmem for a short time, but that defeats the
> >> purpose.
> >>
> >> Either a PVH guest is similar enough to an HVM guest in this area to
> >> make use of PoD for early ballooning *or* it is similar enough to a PV
> >> guest that it can use the PV kernel entry point to get in early enough
> >> to initialise the balloon driver (via the XEN_EXTRA_MEM_MAX_REGIONS
> >> stuff, I presume) before the kernel's normal init sequence can start
> >> mucking with that memory.
> 
> Yes, now that I look at it again I realize the patch is completely wrong.
> 
> > A decision on which needs to be made and /documented/.  If the PV-like
> > approach is taken, I won't be accepting any Linux patches without such
> > documentation.
> > 
> > I now regret accepting the PVH support in Linux without a clear
> > specification of what PVH actually is.

It is evolving :-)
> 
> I've always thought of PVH as PVHVM without a device model, so IMHO it
> would make more sense to use PoD rather than the PV ballooning approach,
> but I would like to hear opinions from others before taking a stab into
> implementing it.

Let's rope Mukesh, Tim, George and Jan in here.

Mukesh's feeling was that it is a PV guest.

I believe George is of the opinion that it is 'HVM' without the device model.

In the past I was thinking that, since it grew out of PV, it would
be more of that (PV) without the P2M and M2P, and the memory management
(so E820) would follow the PV paths and do the proper ballooning/decreasing.

However I think it was you (David) who suggested that we just
set up the E820 properly in the toolstack/hypervisor and have it
match the hypervisor's P2M. That I believe is what Roger's patch
was aiming at.

A bit of past history:
Mukesh's initial patches (v3, see https://lkml.org/lkml/2012/10/17/553,
and https://lkml.org/lkml/2013/12/12/627 for the new hypercall)
took the path that the PVH guest would act as PV, doing
the proper hypercalls to expand/contract Xen's P2M to balloon
out and in. However, the only reason for this was to match
the P2M (assuming it was flat and up to nr_pages) to the E820
(which would be discontiguous) and set up the correct EPT entries
in the hypervisor.

My personal opinion is that the easiest path is the best.
If it is just a matter of making Xen's P2M and E820 exactly
the same and letting the Linux guest figure out, based on 'nr_pages',
how many RAM pages are really provided, then let's do it that way.

> 
> Roger.
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
@ 2014-08-05 14:36         ` Jan Beulich
  2014-08-05 14:48           ` Konrad Rzeszutek Wilk
  2014-08-05 15:05         ` David Vrabel
  2014-08-05 21:36         ` Mukesh Rathor
  2 siblings, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2014-08-05 14:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

>>> On 05.08.14 at 16:18, <konrad.wilk@oracle.com> wrote:
> Mukesh's feeling was that it is an PV.
> 
> I believe George is the opinion of 'HVM' without the device model.

I think we settled already that this is the intended long term model.
However, what's wrong with having the kernel act PV-like on top of
a PoD-based hypervisor implementation? Simply not touching the
memory amount beyond the initial allocation would already make
things work afaict, i.e. even without any decrease-reservation
calls (and it would therefore be desirable but mostly cosmetic to get
them done as early as possible).

Jan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:36         ` Jan Beulich
@ 2014-08-05 14:48           ` Konrad Rzeszutek Wilk
  2014-08-05 15:12             ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-05 14:48 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

On Tue, Aug 05, 2014 at 03:36:19PM +0100, Jan Beulich wrote:
> >>> On 05.08.14 at 16:18, <konrad.wilk@oracle.com> wrote:
> > Mukesh's feeling was that it is an PV.
> > 
> > I believe George is the opinion of 'HVM' without the device model.
> 
> I think we settled already that this is the intended long term model.
> However, what's wrong with having the kernel act PV-like on top of
> a PoD-based hypervisor implementation? Simply not touching the
> memory amount beyond the initial allocation would already make
> things work afaict, i.e. even without any decrease-reservation
> calls (and it would therefore be desirable but mostly cosmetic to get
> them done as early as possible).

Linux sets up its page-tables (beyond the bootstrap ones) using an interesting
mechanism which ends up touching those pages.

Bear with the explanation as it is a bit complex.

When it sets up page-tables for a new range of memory (1GB,
2MB, or 4KB ranges - in PVH it will likely be 2MB
since the 1GB cpuid parameter is not exposed), it ends up
populating the L2, L3, and L4 (as needed) from the earlier
range. When it is done with this range (say 2MB), it will
put the page table entries in the physical area of the newly
added region.

Something like this:

    +--------------------------------------+                     
    |                                      v                     
    |   2MB region------------->|<------  2MB region ---------->
+---+------+--------------------+----------+--------------------+
|          |                    |          |                    |
|pgtable   |                    |pgtable   |                    |
+----------+--------------------+----------+--------------------+


In effect the pages beyond 'memory' will be touched during bootup.

> 
> Jan
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
  2014-08-05 14:36         ` Jan Beulich
@ 2014-08-05 15:05         ` David Vrabel
  2014-08-05 15:40           ` Konrad Rzeszutek Wilk
  2014-08-05 19:45           ` Tim Deegan
  2014-08-05 21:36         ` Mukesh Rathor
  2 siblings, 2 replies; 29+ messages in thread
From: David Vrabel @ 2014-08-05 15:05 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk, Roger Pau Monné,
	george.dunlap, tim, Mukesh Rathor, jbeulich
  Cc: xen-devel, Ian Jackson, Ian Campbell

On 05/08/14 15:18, Konrad Rzeszutek Wilk wrote:
> On Tue, Aug 05, 2014 at 01:08:22PM +0200, Roger Pau Monné wrote:
>> On 05/08/14 11:34, David Vrabel wrote:
>>>
>>> I now regret accepting the PVH support in Linux without a clear
>>> specification of what PVH actually is.
> 
> It is evolving :-)

I don't think this is a valid excuse for not having documentation.

> My personal opinion is that the easiest path is the best.
> If it is just the matter of making Xen's P2M and E820 be exactly
> the same and let the Linux guest figure out based on 'nr_pages'
> how many RAM pages are really provided), is the way, then
> lets do it that way.

Here's a rough design for two different options that I think would be
sensible.

[Note: hypercalls are from memory and may be wrong, I also can't
remember whether PVH guests get a start_info page and the existing docs
don't say.]

The first is the PV-like approach.

  The toolstack shall construct an e820 memory map including all
  appropriate holes for MMIO regions.  This memory map will be well
  ordered, no regions shall overlap and all regions shall begin and end
  on page boundaries.

  The toolstack shall issue a XENMEM_set_memory_map hypercall for this
  memory map.

  The toolstack shall issue a XENMEM_set_maximum_reservation hypercall.

  Xen (or toolstack via populate_physmap? I can't remember) shall
  populate the guest's p2m using the provided e820 map.  Frames shall
  be added starting from the first E820_RAM region, fully
  populating each RAM region before moving onto the next, until the
  initial number of pages is reached.

  Xen shall write this initial number of pages into the nr_pages field
  of the start_info frame.

  The guest shall issue a XENMEM_memory_map hypercall to obtain the
  e820 memory map (as set by the toolstack).

  The guest shall obtain the initial number of pages from
  start_info->nr_pages.

  The guest may then iterate over the e820 map, adding (sub) RAM
  regions that are unpopulated to the balloon driver (or similar).

This second one uses PoD, but we can require specific behaviour on the
guest to ensure the PoD pool (cache) is large enough.

  The toolstack shall construct an e820 memory map including all
  appropriate holes for MMIO regions.  This memory map will be well
  ordered, no regions shall overlap and all regions shall begin and end
  on page boundaries.

  The toolstack shall issue a XENMEM_set_memory_map hypercall for this
  memory map.

  The toolstack shall issue a XENMEM_set_maximum_reservation hypercall.

  Xen shall initialize and fill the PoD cache to the initial number of
  pages.

  Xen (toolstack?) shall write the initial number of pages into the
  nr_pages field of the start_info frame.

  The guest shall issue a XENMEM_memory_map hypercall to obtain the
  e820 memory map (as set by the toolstack).

  The guest shall obtain the initial number of pages from
  start_info->nr_pages.

  The guest may then iterate over the e820 map, adding (sub) RAM
  regions that are unpopulated to the balloon driver (or similar).

  Xen must not use the PoD pool for allocations outside the initial
  regions.  Xen must inject a fault into the guest should it attempt to
  access frames outside of the initial region without an appropriate
  XENMEM_populate_physmap hypercall to mark the region as populated (or
  it could inject a fault/kill the domain if it runs out of PoD pool for
  the initial allocation).

From the guest's point-of-view both approaches are the same.  PoD could
allow for deferred allocation which might help with start-up times for
large guests, if that's something that interests people.

David

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:48           ` Konrad Rzeszutek Wilk
@ 2014-08-05 15:12             ` Jan Beulich
  2014-08-05 15:41               ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2014-08-05 15:12 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

>>> On 05.08.14 at 16:48, <konrad.wilk@oracle.com> wrote:
> On Tue, Aug 05, 2014 at 03:36:19PM +0100, Jan Beulich wrote:
>> >>> On 05.08.14 at 16:18, <konrad.wilk@oracle.com> wrote:
>> > Mukesh's feeling was that it is an PV.
>> > 
>> > I believe George is the opinion of 'HVM' without the device model.
>> 
>> I think we settled already that this is the intended long term model.
>> However, what's wrong with having the kernel act PV-like on top of
>> a PoD-based hypervisor implementation? Simply not touching the
>> memory amount beyond the initial allocation would already make
>> things work afaict, i.e. even without any decrease-reservation
>> calls (and it would therefore be desirable but mostly cosmetic to get
>> them done as early as possible).
> 
> Linux sets up its page-tables (beyond the bootstrap ones) using an interesting
> mechanism which ends up touching those pages.
> 
> Bear with the explanation as it is a bit complex.
> 
> When it sets up page-tables for a new range of memory (1GB,
> 2MB, or 4KB ranges - in PVH it will likely be 2MB
> since the 1GB cpuid parameter is not exposed), it ends up
> populating the L2, L3, and L4 (as needed) from the earlier
> range. When it is done with this range (say 2MB), it will
> put the page table entries in the physical area of the newly
> added region.
> 
> Something like this:
> 
>     +--------------------------------------+                     
>     |                                      v                     
>     |   2MB region------------->|<------  2MB region ---------->
> +---+------+--------------------+----------+--------------------+
> |          |                    |          |                    |
> |pgtable   |                    |pgtable   |                    |
> +----------+--------------------+----------+--------------------+
> 
> 
> In effect the pages beyond 'memory' will be touched during bootup.

I don't think that's how it works, as this wouldn't leave any
contiguous 2M pages for allocation. I think the block sizes are
much larger, and what memblock_find_in_range() returns isn't
necessarily always from the most recently added block I'd
suppose.

Jan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:05         ` David Vrabel
@ 2014-08-05 15:40           ` Konrad Rzeszutek Wilk
  2014-08-05 15:51             ` Jan Beulich
  2014-08-05 19:45           ` Tim Deegan
  1 sibling, 1 reply; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-05 15:40 UTC (permalink / raw)
  To: David Vrabel
  Cc: Ian Campbell, george.dunlap, Ian Jackson, tim, jbeulich,
	xen-devel, Roger Pau Monné

On Tue, Aug 05, 2014 at 04:05:28PM +0100, David Vrabel wrote:
> On 05/08/14 15:18, Konrad Rzeszutek Wilk wrote:
> > On Tue, Aug 05, 2014 at 01:08:22PM +0200, Roger Pau Monné wrote:
> >> On 05/08/14 11:34, David Vrabel wrote:
> >>>
> >>> I now regret accepting the PVH support in Linux without a clear
> >>> specification of what PVH actually is.
> > 
> > It is evolving :-)
> 
> I don't think this is a valid excuse for not having documentation.
> 
> > My personal opinion is that the easiest path is the best.
> > If it is just the matter of making Xen's P2M and E820 be exactly
> > the same and let the Linux guest figure out based on 'nr_pages'
> > how many RAM pages are really provided), is the way, then
> > lets do it that way.
> 
> Here's a rough design for two different options that I think would be
> sensible.
> 
> [Note: hypercalls are from memory and may be wrong, I also can't
> remember whether PVH guests get a start_info page and the existing docs
> don't say.]
> 
> The first is the PV-like approach.
> 
>   The toolstack shall construct an e820 memory map including all
>   appropriate holes for MMIO regions.  This memory map will be well
>   ordered, no regions shall overlap and all regions shall begin and end
>   on page boundaries.
> 
>   The toolstack shall issue a XENMEM_set_memory_map hypercall for this
>   memory map.
> 
>   The toolstack shall issue a XENMEM_set_maximum_reservation hypercall.
> 
>   Xen (or toolstack via populate_physmap? I can't remember) shall
>   populate the guest's p2m using the provided e820 map.  Frames shall
>   be added starting from the first E820_RAM region, fully
>   populating each RAM region before moving onto the next, until the
>   initial number of pages is reached.
> 
>   Xen shall write this initial number of pages into the nr_pages field
>   of the start_info frame.
> 
>   The guest shall issue a XENMEM_memory_map hypercall to obtain the
>   e820 memory map (as set by the toolstack).
> 
>   The guest shall obtain the initial number of pages from
>   start_info->nr_pages.
> 
>   The guest may then iterate over the e820 map, adding (sub) RAM
>   regions that are unpopulated to the balloon driver (or similar).
> 
> This second one uses PoD, but we can require specific behaviour on the
> guest to ensure the PoD pool (cache) is large enough.
> 
>   The toolstack shall construct an e820 memory map including all
>   appropriate holes for MMIO regions.  This memory map will be well
>   ordered, no regions shall overlap and all regions shall begin and end
>   on page boundaries.
> 
>   The toolstack shall issue a XENMEM_set_memory_map hypercall for this
>   memory map.
> 
>   The toolstack shall issue a XENMEM_set_maximum_reservation hypercall.
> 
>   Xen shall initialize and fill the PoD cache to the initial number of
>   pages.
> 
>   Xen (toolstack?) shall write the initial number of pages into the
>   nr_pages field of the start_info frame.
> 
>   The guest shall issue a XENMEM_memory_map hypercall to obtain the
>   e820 memory map (as set by the toolstack).
> 
>   The guest shall obtain the initial number of pages from
>   start_info->nr_pages.
> 
>   The guest may then iterate over the e820 map, adding (sub) RAM
>   regions that are unpopulated to the balloon driver (or similar).
> 
>   Xen must not use the PoD pool for allocations outside the initial
>   regions.  Xen must inject a fault into the guest should it attempt to
>   access frames outside of the initial region without an appropriate
>   XENMEM_populate_physmap hypercall to mark the region as populated (or
>   it could inject a fault/kill the domain if it runs out of PoD pool for
>   the initial allocation).
> 
> From the guest's point-of-view both approaches are the same.  PoD could
> allow for deferred allocation which might help with start-up times for
> large guests, if that's something that interests people.

There is a tiny problem with PoD: PCI passthrough.

Right now we don't allow PoD + PCI until the hardware allows it
(if it ever will).

Though for PV we do allow ballooning and PCI passthrough, as we can
at least control that the pages that are being ballooned are not going
to be used for DMA operations (unless the guest does something stupid
and returns a page back to the allocator but still does DMA ops on
the PFN).
> 
> David

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:12             ` Jan Beulich
@ 2014-08-05 15:41               ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-05 15:41 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

On Tue, Aug 05, 2014 at 04:12:39PM +0100, Jan Beulich wrote:
> >>> On 05.08.14 at 16:48, <konrad.wilk@oracle.com> wrote:
> > On Tue, Aug 05, 2014 at 03:36:19PM +0100, Jan Beulich wrote:
> >> >>> On 05.08.14 at 16:18, <konrad.wilk@oracle.com> wrote:
> >> > Mukesh's feeling was that it is an PV.
> >> > 
> >> > I believe George is the opinion of 'HVM' without the device model.
> >> 
> >> I think we settled already that this is the intended long term model.
> >> However, what's wrong with having the kernel act PV-like on top of
> >> a PoD-based hypervisor implementation? Simply not touching the
> >> memory amount beyond the initial allocation would already make
> >> things work afaict, i.e. even without any decrease-reservation
> >> calls (and it would therefore be desirable but mostly cosmetic to get
> >> them done as early as possible).
> > 
> > Linux sets up its page-tables (beyond the bootstrap ones) using an interesting
> > mechanism which ends up touching those pages.
> > 
> > Bear with the explanation as it is a bit complex.
> > 
> > When it sets up page-tables for a new range of memory (1GB,
> > 2MB, or 4KB ranges - in PVH it will likely be 2MB
> > since the 1GB cpuid parameter is not exposed), it ends up
> > populating the L2, L3, and L4 (as needed) from the earlier
> > range. When it is done with this range (say 2MB), it will
> > put the page table entries in the physical area of the newly
> > added region.
> > 
> > Something like this:
> > 
> >     +--------------------------------------+                     
> >     |                                      v                     
> >     |   2MB region------------->|<------  2MB region ---------->
> > +---+------+--------------------+----------+--------------------+
> > |          |                    |          |                    |
> > |pgtable   |                    |pgtable   |                    |
> > +----------+--------------------+----------+--------------------+
> > 
> > 
> > In effect the pages beyond 'memory' will be touched during bootup.
> 
> I don't think that's how it works, as this wouldn't leave any
> contiguous 2M pages for allocation. I think the block sizes are

Ah right.

> much larger, and what memblock_find_in_range() returns isn't
> necessarily always from the most recently added block I'd
> suppose.

That sounds right. The logic in there is to expand the region
for the pagetables within the range as much as possible, and then when it
can't use it anymore it will jump to the next one. This is all
from memory from when I reviewed it years ago, so please take it with a
grain of salt.
> 
> Jan
> 

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:40           ` Konrad Rzeszutek Wilk
@ 2014-08-05 15:51             ` Jan Beulich
  2014-08-05 15:56               ` Konrad Rzeszutek Wilk
  0 siblings, 1 reply; 29+ messages in thread
From: Jan Beulich @ 2014-08-05 15:51 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

>>> On 05.08.14 at 17:40, <konrad.wilk@oracle.com> wrote:
> There is a tiny problem with PoD: PCI passthrough.
> 
> Right now we don't allow PoD + PCI until the hardware allows it
>  (if it ever will).

But PVH doesn't support pass-through so far, does it?

> Though for PV we do allow ballooning and PCI passthrough, as we can
> at least control that the pages that are being ballooned are not going
> to be used for DMA operations (unless the guest does something stupid
> and returns a page back to the allocator but still does DMA ops on
> the PFN).

And it would seem possible to permit the combination for PVH if
there was a documented requirement for the ballooning down to
happen before any device I/O starts.

Jan

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:51             ` Jan Beulich
@ 2014-08-05 15:56               ` Konrad Rzeszutek Wilk
  2014-08-05 16:07                 ` Jan Beulich
  0 siblings, 1 reply; 29+ messages in thread
From: Konrad Rzeszutek Wilk @ 2014-08-05 15:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

On Tue, Aug 05, 2014 at 04:51:02PM +0100, Jan Beulich wrote:
> >>> On 05.08.14 at 17:40, <konrad.wilk@oracle.com> wrote:
> > There is a tiny problem with PoD: PCI passthrough.
> > 
> > Right now we don't allow PoD + PCI until the hardware allows it
> >  (if it ever will).
> 
> But PVH doesn't support pass-through so far, does it?

Dom0 is a case where it does work. My understanding is
that for the domU case it just needs the proper E820 hookup
and it _should_ work.
> 
> > Though for PV we do allow ballooning and PCI passthrough, as we can
> > at least control that the pages being ballooned are not going
> > to be used for DMA operations (unless the guest does something stupid
> > and returns a page back to the allocator but still does DMA ops on
> > the PFN).
> 
> And it would seem possible to permit the combination for PVH if
> there was a documented requirement for the ballooning down to
> happen before any device I/O starts.

The same requirement should be applied to a normal PV guest as well.

> 
> Jan
> 


* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:56               ` Konrad Rzeszutek Wilk
@ 2014-08-05 16:07                 ` Jan Beulich
  0 siblings, 0 replies; 29+ messages in thread
From: Jan Beulich @ 2014-08-05 16:07 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Ian Campbell, george.dunlap, tim, Ian Jackson, David Vrabel,
	xen-devel, roger.pau

>>> On 05.08.14 at 17:56, <konrad.wilk@oracle.com> wrote:
> On Tue, Aug 05, 2014 at 04:51:02PM +0100, Jan Beulich wrote:
>> >>> On 05.08.14 at 17:40, <konrad.wilk@oracle.com> wrote:
>> > Though for PV we do allow ballooning and PCI passthrough, as we can
>> > at least control that the pages being ballooned are not going
>> > to be used for DMA operations (unless the guest does something stupid
>> > and returns a page back to the allocator but still does DMA ops on
>> > the PFN).
>> 
>> And it would seem possible to permit the combination for PVH if
>> there was a documented requirement for the ballooning down to
>> happen before any device I/O starts.
> 
> The same requirement should be applied to a normal PV guest as well.

Not really, as there's no PoD in that case.

Jan


* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 15:05         ` David Vrabel
  2014-08-05 15:40           ` Konrad Rzeszutek Wilk
@ 2014-08-05 19:45           ` Tim Deegan
  1 sibling, 0 replies; 29+ messages in thread
From: Tim Deegan @ 2014-08-05 19:45 UTC (permalink / raw)
  To: David Vrabel
  Cc: Ian Campbell, george.dunlap, Ian Jackson, jbeulich, xen-devel,
	Roger Pau Monné

At 16:05 +0100 on 05 Aug (1407251128), David Vrabel wrote:
> The first is the PV-like approach.
> 
>   The toolstack shall construct an e820 memory map including all
>   appropriate holes for MMIO regions.  This memory map will be well
>   ordered, no regions shall overlap and all regions shall begin and end
>   on page boundaries.
> 
>   The toolstack shall issue a XENMEM_set_memory_map hypercall for this
>   memory map.
> 
>   The toolstack shall issue a XENMEM_set_maximum_reservation hypercall.
> 
>   Xen (or toolstack via populate_physmap? I can't remember) shall
>   populate the guest's p2m using the provided e820 map.  Frames shall
>   be added starting from the first E820_RAM region, fully
>   populating each RAM region before moving onto the next, until the
>   initial number of pages is reached.

This, please.  The tools should tell the guest what memory it has and
the guest should use that memory.  There's _far_ too much messing
about in PVH memory allocation as it is.

Tim.


* Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:10       ` George Dunlap
@ 2014-08-05 21:22         ` Mukesh Rathor
  0 siblings, 0 replies; 29+ messages in thread
From: Mukesh Rathor @ 2014-08-05 21:22 UTC (permalink / raw)
  To: George Dunlap
  Cc: xen-devel, Ian Campbell, Ian Jackson, David Vrabel, Roger Pau Monné

On Tue, 5 Aug 2014 10:10:30 -0400
George Dunlap <George.Dunlap@eu.citrix.com> wrote:

> On Tue, Aug 5, 2014 at 7:08 AM, Roger Pau Monné
> <roger.pau@citrix.com> wrote:
> > On 05/08/14 11:34, David Vrabel wrote:
> >> On 05/08/14 09:55, Ian Campbell wrote:
> >>> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
> >>>
> >>> Sorry for the delay replying, this somehow slipped through my net.
> >>>
> >>>> Since PVH guests are very similar to HVM guests in terms of
> >>>> memory management, start the guest with the maximum memory
> >>>> assigned and let it balloon down.
...
> >>> This patch seems to make it impossible to boot a PVH guest
> >>> pre-ballooned. It only appears to "work" because I presume you
> >>> actually have enough RAM to satisfy maxmem for a short time, but
> >>> that defeats the purpose.
> >>>
> >>> Either a PVH guest is similar enough to an HVM guest in this area
> >>> to make use of PoD for early ballooning *or* it is similar enough
> >>> to a PV guest that it can use the PV kernel entry point to get in
> >>> early enough to initialise the balloon driver (via the
> >>> XEN_EXTRA_MEM_MAX_REGIONS stuff, I presume) before the kernels
> >>> normal init sequence can start mucking with that memory.
> >
> > Yes, now that I look at it again I realize the patch is completely
> > wrong.
> >
> >> A decision on which needs to be made and /documented/.  If the
> >> PV-like approach is taken, I won't be accepting any Linux patches
> >> without such documentation.
> >>
> >> I now regret accepting the PVH support in Linux without a clear
> >> specification of what PVH actually is.
> >
> > I've always thought of PVH as PVHVM without a device model, so IMHO
> > it would make more sense to use PoD rather than the PV ballooning
> > approach, but I would like to hear opinions from others before
> > taking a stab into implementing it.
> 
> I think the original idea was to have PVH be PV with the addition of
> an "HVM container" -- just a minimal bit of HVM that would allow us to
> get rid of a lot of the unnecessary PV stuff.
> 
> But as it turned out, the "minimal HVM container" was 70% of the size
> of the fully-virtualized HVM container.  Rather than have thousands of
> lines of duplicate code, we decided to merge the HVM and PVH code
> paths.  At which point, it makes more sense to just go the other
> direction, and make PVH basically PVHVM without a device model.

It's tempting to go that way, but I think keeping the PV model has a very
important conceptual benefit to Xen. It allows us to modify the guest
OS for anything that could benefit its performance on Xen. That is
something other hypervisors may not be able to do. So, at least for that
reason, I hope PVH can remain PV, at least on the guest side.

> I'm not sure how PV deals with memory != maxmem at boot: it seems like
> PVH could do it the same way; or it could use PoD.  But just setting
> memory=maxmem is certainly the wrong approach.

I forget as well; I'll have to look at all that code again to refresh
my memory.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Is: PVH - how to solve maxmem != memory scenario? Was:Re: [PATCH] libxl: create PVH guests with max memory assigned
  2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
  2014-08-05 14:36         ` Jan Beulich
  2014-08-05 15:05         ` David Vrabel
@ 2014-08-05 21:36         ` Mukesh Rathor
  2 siblings, 0 replies; 29+ messages in thread
From: Mukesh Rathor @ 2014-08-05 21:36 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk
  Cc: Ian Campbell, george.dunlap, Ian Jackson, tim, David Vrabel,
	jbeulich, xen-devel, Roger Pau Monné

On Tue, 5 Aug 2014 10:18:44 -0400
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Tue, Aug 05, 2014 at 01:08:22PM +0200, Roger Pau Monné wrote:
> > On 05/08/14 11:34, David Vrabel wrote:
> > > On 05/08/14 09:55, Ian Campbell wrote:
> > >> On Thu, 2014-07-17 at 13:02 +0200, Roger Pau Monne wrote:
> > >>
> > >> Sorry for the delay replying, this somehow slipped through my

........

> It is evolving :-)
> > 
> > I've always thought of PVH as PVHVM without a device model, so IMHO
> > it would make more sense to use PoD rather than the PV ballooning
> > approach, but I would like to hear opinions from others before
> > taking a stab into implementing it.
> 
> Let's rope Mukesh, Tim, George and Jan in here.
> 
> Mukesh's feeling was that it is a PV.
> 
> I believe George is of the opinion that it is 'HVM' without the
> device model.
> 
> In the past I was thinking that, since it comes from PV, it would
> be more of that (PV) without the P2M and M2P. And the memory
> management (so E820) would follow the PV paths and do the proper
> ballooning/decreasing.
> 
> However I think it was you (David) who suggested that we just
> setup the E820 properly in the toolstack/hypervisor and have it
> match the hypervisors' P2M. That I believe is what Roger's patch
> was aiming at.
> 
> A bit of past history:
> Mukesh's initial patches (v3, see
> https://lkml.org/lkml/2012/10/17/553, and
> https://lkml.org/lkml/2013/12/12/627, for new hypercall) took the
> path that the PV guest will act as PV. And it will do the proper
> hypercalls to expand/contract Xen's P2M to balloon out and in.
> However the only reason for this was to match the P2M (assuming it
> was flat and up to nr_pages) to the E820 (which would be discontiguous)
> and set up the correct EPT entries in the hypervisor.
> 
> My personal opinion is that the easiest path is the best.

Agree. My opinion, which I had expressed to Roger a few weeks ago when we
initially talked about which approach to take here, is that we should pick
the best path in terms of performance and simplicity. My intention has
always been to examine both PV and HVM and pick the best, or, if a third
option makes sense, go that way; that, IMO, is the whole point of PVH.

thanks,
Mukesh




end of thread, other threads:[~2014-08-05 21:36 UTC | newest]

Thread overview: 29+ messages
2014-07-17 11:02 [PATCH] libxl: create PVH guests with max memory assigned Roger Pau Monne
2014-07-18 16:49 ` Konrad Rzeszutek Wilk
2014-07-18 17:00   ` Roger Pau Monné
2014-07-18 17:11     ` Andrew Cooper
2014-07-21 10:16       ` Ian Campbell
2014-07-18 20:53     ` Konrad Rzeszutek Wilk
2014-08-05  8:57       ` Ian Campbell
2014-07-18 17:19   ` Olaf Hering
2014-07-18 19:33     ` Konrad Rzeszutek Wilk
2014-08-01 15:34 ` Roger Pau Monné
2014-08-04 18:44   ` Konrad Rzeszutek Wilk
2014-08-05  8:55 ` Ian Campbell
2014-08-05  9:34   ` David Vrabel
2014-08-05 11:08     ` Roger Pau Monné
2014-08-05 14:06       ` Ian Campbell
2014-08-05 14:10       ` George Dunlap
2014-08-05 21:22         ` Mukesh Rathor
2014-08-05 14:18       ` Is: PVH - how to solve maxmem != memory scenario? Was:Re: " Konrad Rzeszutek Wilk
2014-08-05 14:36         ` Jan Beulich
2014-08-05 14:48           ` Konrad Rzeszutek Wilk
2014-08-05 15:12             ` Jan Beulich
2014-08-05 15:41               ` Konrad Rzeszutek Wilk
2014-08-05 15:05         ` David Vrabel
2014-08-05 15:40           ` Konrad Rzeszutek Wilk
2014-08-05 15:51             ` Jan Beulich
2014-08-05 15:56               ` Konrad Rzeszutek Wilk
2014-08-05 16:07                 ` Jan Beulich
2014-08-05 19:45           ` Tim Deegan
2014-08-05 21:36         ` Mukesh Rathor
