* [Qemu-devel] [PATCH 0/2] spapr_nvram: Support migration
From: Alexey Kardashevskiy @ 2014-09-25  7:02 UTC
  To: qemu-devel; +Cc: Alexey Kardashevskiy, Paolo Bonzini, qemu-ppc, Alexander Graf

Here are 2 patches to enable sPAPR NVRAM migration.

Please comment. Thanks!


Alexey Kardashevskiy (2):
  vmstate: Allow dynamic allocation for VBUFFER during migration
  spapr_nvram: Enable migration

 hw/nvram/spapr_nvram.c      | 68 +++++++++++++++++++++++++++++++++++++++------
 include/migration/vmstate.h | 11 ++++++++
 vmstate.c                   | 13 +++++++--
 3 files changed, 80 insertions(+), 12 deletions(-)

-- 
2.0.0


* [Qemu-devel] [PATCH 1/2] vmstate: Allow dynamic allocation for VBUFFER during migration
From: Alexey Kardashevskiy @ 2014-09-25  7:02 UTC
  To: qemu-devel; +Cc: Alexey Kardashevskiy, Paolo Bonzini, qemu-ppc, Alexander Graf

This extends the use of the VMS_ALLOC flag from arrays to VBUFFER as well.

This defines VMSTATE_VBUFFER_ALLOC_UINT32, which makes use of VMS_ALLOC
and uses the uint32_t type for the size.
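
For illustration, a device with a variable-sized guest buffer could use the
new macro like this (a sketch only; DeviceFooState, buf and buf_size are
made-up names, not part of this patch):

    typedef struct DeviceFooState {
        uint32_t buf_size;  /* migrated first, so the size is known on load */
        uint8_t *buf;       /* allocated by vmstate on load via VMS_ALLOC */
    } DeviceFooState;

    static const VMStateDescription vmstate_foo = {
        .name = "foo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(buf_size, DeviceFooState),
            VMSTATE_VBUFFER_ALLOC_UINT32(buf, DeviceFooState, 1, NULL, 0,
                                         buf_size),
            VMSTATE_END_OF_LIST()
        },
    };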

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 include/migration/vmstate.h | 11 +++++++++++
 vmstate.c                   | 13 ++++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index 9a001bd..e45fc49 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -484,6 +484,17 @@ extern const VMStateInfo vmstate_info_bitmap;
     .start        = (_start),                                        \
 }
 
+#define VMSTATE_VBUFFER_ALLOC_UINT32(_field, _state, _version, _test, _start, _field_size) { \
+    .name         = (stringify(_field)),                             \
+    .version_id   = (_version),                                      \
+    .field_exists = (_test),                                         \
+    .size_offset  = vmstate_offset_value(_state, _field_size, uint32_t),\
+    .info         = &vmstate_info_buffer,                            \
+    .flags        = VMS_VBUFFER|VMS_POINTER|VMS_ALLOC,               \
+    .offset       = offsetof(_state, _field),                        \
+    .start        = (_start),                                        \
+}
+
 #define VMSTATE_BUFFER_UNSAFE_INFO(_field, _state, _version, _info, _size) { \
     .name       = (stringify(_field)),                               \
     .version_id = (_version),                                        \
diff --git a/vmstate.c b/vmstate.c
index ef2f87b..3dde574 100644
--- a/vmstate.c
+++ b/vmstate.c
@@ -49,9 +49,16 @@ static void *vmstate_base_addr(void *opaque, VMStateField *field, bool alloc)
 
     if (field->flags & VMS_POINTER) {
         if (alloc && (field->flags & VMS_ALLOC)) {
-            int n_elems = vmstate_n_elems(opaque, field);
-            if (n_elems) {
-                gsize size = n_elems * field->size;
+            gsize size = 0;
+            if (field->flags & VMS_VBUFFER) {
+                size = vmstate_size(opaque, field);
+            } else {
+                int n_elems = vmstate_n_elems(opaque, field);
+                if (n_elems) {
+                    size = n_elems * field->size;
+                }
+            }
+            if (size) {
                 *((void **)base_addr + field->start) = g_malloc(size);
             }
         }
-- 
2.0.0


* [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: Alexey Kardashevskiy @ 2014-09-25  7:02 UTC
  To: qemu-devel; +Cc: Alexey Kardashevskiy, Paolo Bonzini, qemu-ppc, Alexander Graf

Currently, the only case in which sPAPR NVRAM migrates is when it is
backed by a file and copy-storage migration is performed.

This enables a RAM copy of NVRAM even when NVRAM is backed by a file.

This defines a VMSTATE descriptor for the NVRAM device so that the memory
copy of NVRAM can migrate and be written to a backing file on the
destination if one is provided.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
 hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 59 insertions(+), 9 deletions(-)

diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
index 6a72ef4..254009e 100644
--- a/hw/nvram/spapr_nvram.c
+++ b/hw/nvram/spapr_nvram.c
@@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
         return;
     }
 
+    assert(nvram->buf);
+
     membuf = cpu_physical_memory_map(buffer, &len, 1);
+
+    alen = len;
     if (nvram->drive) {
         alen = bdrv_pread(nvram->drive, offset, membuf, len);
+        if (alen > 0) {
+            memcpy(nvram->buf + offset, membuf, alen);
+        }
     } else {
-        assert(nvram->buf);
-
         memcpy(membuf, nvram->buf + offset, len);
-        alen = len;
     }
+
     cpu_physical_memory_unmap(membuf, len, 1, len);
 
     rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
@@ -122,14 +127,15 @@ static void rtas_nvram_store(PowerPCCPU *cpu, sPAPREnvironment *spapr,
     }
 
     membuf = cpu_physical_memory_map(buffer, &len, 0);
+
+    alen = len;
     if (nvram->drive) {
         alen = bdrv_pwrite(nvram->drive, offset, membuf, len);
-    } else {
-        assert(nvram->buf);
-
-        memcpy(nvram->buf + offset, membuf, len);
-        alen = len;
     }
+
+    assert(nvram->buf);
+    memcpy(nvram->buf + offset, membuf, len);
+
     cpu_physical_memory_unmap(membuf, len, 0, len);
 
     rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
@@ -144,9 +150,10 @@ static int spapr_nvram_init(VIOsPAPRDevice *dev)
         nvram->size = bdrv_getlength(nvram->drive);
     } else {
         nvram->size = DEFAULT_NVRAM_SIZE;
-        nvram->buf = g_malloc0(nvram->size);
     }
 
+    nvram->buf = g_malloc0(nvram->size);
+
     if ((nvram->size < MIN_NVRAM_SIZE) || (nvram->size > MAX_NVRAM_SIZE)) {
         fprintf(stderr, "spapr-nvram must be between %d and %d bytes in size\n",
                 MIN_NVRAM_SIZE, MAX_NVRAM_SIZE);
@@ -166,6 +173,48 @@ static int spapr_nvram_devnode(VIOsPAPRDevice *dev, void *fdt, int node_off)
     return fdt_setprop_cell(fdt, node_off, "#bytes", nvram->size);
 }
 
+static int spapr_nvram_pre_load(void *opaque)
+{
+    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
+
+    g_free(nvram->buf);
+    nvram->buf = NULL;
+    nvram->size = 0;
+
+    return 0;
+}
+
+static int spapr_nvram_post_load(void *opaque, int version_id)
+{
+    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
+
+    if (nvram->drive) {
+        int alen = bdrv_pwrite(nvram->drive, 0, nvram->buf, nvram->size);
+
+        if (alen < 0) {
+            return alen;
+        }
+        if (alen != nvram->size) {
+            return -1;
+        }
+    }
+
+    return 0;
+}
+
+static const VMStateDescription vmstate_spapr_nvram = {
+    .name = "spapr_nvram",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .pre_load = spapr_nvram_pre_load,
+    .post_load = spapr_nvram_post_load,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT32(size, sPAPRNVRAM),
+        VMSTATE_VBUFFER_ALLOC_UINT32(buf, sPAPRNVRAM, 1, NULL, 0, size),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static Property spapr_nvram_properties[] = {
     DEFINE_SPAPR_PROPERTIES(sPAPRNVRAM, sdev),
     DEFINE_PROP_DRIVE("drive", sPAPRNVRAM, drive),
@@ -184,6 +233,7 @@ static void spapr_nvram_class_init(ObjectClass *klass, void *data)
     k->dt_compatible = "qemu,spapr-nvram";
     set_bit(DEVICE_CATEGORY_MISC, dc->categories);
     dc->props = spapr_nvram_properties;
+    dc->vmsd = &vmstate_spapr_nvram;
 }
 
 static const TypeInfo spapr_nvram_type_info = {
-- 
2.0.0


* Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: Alexander Graf @ 2014-09-25  9:43 UTC
  To: Alexey Kardashevskiy, qemu-devel; +Cc: Paolo Bonzini, qemu-ppc



On 25.09.14 09:02, Alexey Kardashevskiy wrote:
> Currently, the only case in which sPAPR NVRAM migrates is when it is
> backed by a file and copy-storage migration is performed.
> 
> This enables a RAM copy of NVRAM even when NVRAM is backed by a file.
> 
> This defines a VMSTATE descriptor for the NVRAM device so that the memory
> copy of NVRAM can migrate and be written to a backing file on the
> destination if one is provided.
> 
> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> ---
>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 59 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
> index 6a72ef4..254009e 100644
> --- a/hw/nvram/spapr_nvram.c
> +++ b/hw/nvram/spapr_nvram.c
> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>          return;
>      }
>  
> +    assert(nvram->buf);
> +
>      membuf = cpu_physical_memory_map(buffer, &len, 1);
> +
> +    alen = len;
>      if (nvram->drive) {
>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
> +        if (alen > 0) {
> +            memcpy(nvram->buf + offset, membuf, alen);

Why?

> +        }
>      } else {
> -        assert(nvram->buf);
> -
>          memcpy(membuf, nvram->buf + offset, len);
> -        alen = len;
>      }
> +
>      cpu_physical_memory_unmap(membuf, len, 1, len);
>  
>      rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
> @@ -122,14 +127,15 @@ static void rtas_nvram_store(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>      }
>  
>      membuf = cpu_physical_memory_map(buffer, &len, 0);
> +
> +    alen = len;
>      if (nvram->drive) {
>          alen = bdrv_pwrite(nvram->drive, offset, membuf, len);
> -    } else {
> -        assert(nvram->buf);
> -
> -        memcpy(nvram->buf + offset, membuf, len);
> -        alen = len;
>      }
> +
> +    assert(nvram->buf);
> +    memcpy(nvram->buf + offset, membuf, len);
> +
>      cpu_physical_memory_unmap(membuf, len, 0, len);
>  
>      rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
> @@ -144,9 +150,10 @@ static int spapr_nvram_init(VIOsPAPRDevice *dev)
>          nvram->size = bdrv_getlength(nvram->drive);
>      } else {
>          nvram->size = DEFAULT_NVRAM_SIZE;
> -        nvram->buf = g_malloc0(nvram->size);
>      }
>  
> +    nvram->buf = g_malloc0(nvram->size);
> +
>      if ((nvram->size < MIN_NVRAM_SIZE) || (nvram->size > MAX_NVRAM_SIZE)) {
>          fprintf(stderr, "spapr-nvram must be between %d and %d bytes in size\n",
>                  MIN_NVRAM_SIZE, MAX_NVRAM_SIZE);
> @@ -166,6 +173,48 @@ static int spapr_nvram_devnode(VIOsPAPRDevice *dev, void *fdt, int node_off)
>      return fdt_setprop_cell(fdt, node_off, "#bytes", nvram->size);
>  }
>  
> +static int spapr_nvram_pre_load(void *opaque)
> +{
> +    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
> +
> +    g_free(nvram->buf);
> +    nvram->buf = NULL;
> +    nvram->size = 0;
> +
> +    return 0;
> +}
> +
> +static int spapr_nvram_post_load(void *opaque, int version_id)
> +{
> +    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
> +
> +    if (nvram->drive) {
> +        int alen = bdrv_pwrite(nvram->drive, 0, nvram->buf, nvram->size);

In the file-backed case you're already overwriting the disk-backed
version, no?

Also, couldn't you just do the copy and provisioning of buf in a
pre_save hook?


Alex

> +
> +        if (alen < 0) {
> +            return alen;
> +        }
> +        if (alen != nvram->size) {
> +            return -1;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +static const VMStateDescription vmstate_spapr_nvram = {
> +    .name = "spapr_nvram",
> +    .version_id = 1,
> +    .minimum_version_id = 1,
> +    .pre_load = spapr_nvram_pre_load,
> +    .post_load = spapr_nvram_post_load,
> +    .fields = (VMStateField[]) {
> +        VMSTATE_UINT32(size, sPAPRNVRAM),
> +        VMSTATE_VBUFFER_ALLOC_UINT32(buf, sPAPRNVRAM, 1, NULL, 0, size),
> +        VMSTATE_END_OF_LIST()
> +    },
> +};
> +
>  static Property spapr_nvram_properties[] = {
>      DEFINE_SPAPR_PROPERTIES(sPAPRNVRAM, sdev),
>      DEFINE_PROP_DRIVE("drive", sPAPRNVRAM, drive),
> @@ -184,6 +233,7 @@ static void spapr_nvram_class_init(ObjectClass *klass, void *data)
>      k->dt_compatible = "qemu,spapr-nvram";
>      set_bit(DEVICE_CATEGORY_MISC, dc->categories);
>      dc->props = spapr_nvram_properties;
> +    dc->vmsd = &vmstate_spapr_nvram;
>  }
>  
>  static const TypeInfo spapr_nvram_type_info = {
> 


* Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: Alexey Kardashevskiy @ 2014-09-25 10:06 UTC
  To: Alexander Graf, qemu-devel; +Cc: Paolo Bonzini, qemu-ppc

On 09/25/2014 07:43 PM, Alexander Graf wrote:
> 
> 
> On 25.09.14 09:02, Alexey Kardashevskiy wrote:
>> Currently, the only case in which sPAPR NVRAM migrates is when it is
>> backed by a file and copy-storage migration is performed.
>>
>> This enables a RAM copy of NVRAM even when NVRAM is backed by a file.
>>
>> This defines a VMSTATE descriptor for the NVRAM device so that the memory
>> copy of NVRAM can migrate and be written to a backing file on the
>> destination if one is provided.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 59 insertions(+), 9 deletions(-)
>>
>> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
>> index 6a72ef4..254009e 100644
>> --- a/hw/nvram/spapr_nvram.c
>> +++ b/hw/nvram/spapr_nvram.c
>> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>>          return;
>>      }
>>  
>> +    assert(nvram->buf);
>> +
>>      membuf = cpu_physical_memory_map(buffer, &len, 1);
>> +
>> +    alen = len;
>>      if (nvram->drive) {
>>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
>> +        if (alen > 0) {
>> +            memcpy(nvram->buf + offset, membuf, alen);
> 
> Why?

This way I do not need a pre_save hook and I keep buf in sync with the
file. If I implement pre_save, then buf will serve two purposes: it is
either NVRAM itself (when there is no backing file; it exists for the
guest's lifetime) or a migration copy (which exists between pre_save and
post_load and is then disposed of). Two quite different uses of the same
thing confuse me. But I do not mind doing it your way, no big deal -
should I?
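
[Editor's sketch, not from the thread: the pre_save alternative discussed
here would look roughly like this - buf is provisioned and the backing
file is snapshotted into it just before the device state is saved; error
handling and cleanup are elided. pre_save returned void in the vmstate API
of this era.]

    static void spapr_nvram_pre_save(void *opaque)
    {
        sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);

        if (nvram->drive) {
            /* provision a temporary RAM copy for migration ... */
            nvram->buf = g_malloc0(nvram->size);
            /* ... and snapshot the backing file into it (checks elided) */
            bdrv_pread(nvram->drive, 0, nvram->buf, nvram->size);
        }
    }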


>> +        }
>>      } else {
>> -        assert(nvram->buf);
>> -
>>          memcpy(membuf, nvram->buf + offset, len);
>> -        alen = len;
>>      }
>> +
>>      cpu_physical_memory_unmap(membuf, len, 1, len);
>>  
>>      rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
>> @@ -122,14 +127,15 @@ static void rtas_nvram_store(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>>      }
>>  
>>      membuf = cpu_physical_memory_map(buffer, &len, 0);
>> +
>> +    alen = len;
>>      if (nvram->drive) {
>>          alen = bdrv_pwrite(nvram->drive, offset, membuf, len);
>> -    } else {
>> -        assert(nvram->buf);
>> -
>> -        memcpy(nvram->buf + offset, membuf, len);
>> -        alen = len;
>>      }
>> +
>> +    assert(nvram->buf);
>> +    memcpy(nvram->buf + offset, membuf, len);
>> +
>>      cpu_physical_memory_unmap(membuf, len, 0, len);
>>  
>>      rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
>> @@ -144,9 +150,10 @@ static int spapr_nvram_init(VIOsPAPRDevice *dev)
>>          nvram->size = bdrv_getlength(nvram->drive);
>>      } else {
>>          nvram->size = DEFAULT_NVRAM_SIZE;
>> -        nvram->buf = g_malloc0(nvram->size);
>>      }
>>  
>> +    nvram->buf = g_malloc0(nvram->size);
>> +
>>      if ((nvram->size < MIN_NVRAM_SIZE) || (nvram->size > MAX_NVRAM_SIZE)) {
>>          fprintf(stderr, "spapr-nvram must be between %d and %d bytes in size\n",
>>                  MIN_NVRAM_SIZE, MAX_NVRAM_SIZE);
>> @@ -166,6 +173,48 @@ static int spapr_nvram_devnode(VIOsPAPRDevice *dev, void *fdt, int node_off)
>>      return fdt_setprop_cell(fdt, node_off, "#bytes", nvram->size);
>>  }
>>  
>> +static int spapr_nvram_pre_load(void *opaque)
>> +{
>> +    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
>> +
>> +    g_free(nvram->buf);
>> +    nvram->buf = NULL;
>> +    nvram->size = 0;
>> +
>> +    return 0;
>> +}
>> +
>> +static int spapr_nvram_post_load(void *opaque, int version_id)
>> +{
>> +    sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
>> +
>> +    if (nvram->drive) {
>> +        int alen = bdrv_pwrite(nvram->drive, 0, nvram->buf, nvram->size);
> 
> In the file-backed case you're already overwriting the disk-backed
> version, no?

Yes. Is that bad?


> Also, couldn't you just do the copy and provisioning of buf in a
> pre_save hook?


I can do this too. I just do not see why that would be a lot better though :)



-- 
Alexey


* Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: David Gibson @ 2014-09-26  2:31 UTC
  To: Alexey Kardashevskiy; +Cc: Paolo Bonzini, qemu-ppc, Alexander Graf, qemu-devel


On Thu, Sep 25, 2014 at 08:06:40PM +1000, Alexey Kardashevskiy wrote:
> On 09/25/2014 07:43 PM, Alexander Graf wrote:
> > 
> > 
> > On 25.09.14 09:02, Alexey Kardashevskiy wrote:
> >> Currently, the only case in which sPAPR NVRAM migrates is when it is
> >> backed by a file and copy-storage migration is performed.
> >>
> >> This enables a RAM copy of NVRAM even when NVRAM is backed by a file.
> >>
> >> This defines a VMSTATE descriptor for the NVRAM device so that the memory
> >> copy of NVRAM can migrate and be written to a backing file on the
> >> destination if one is provided.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
> >> ---
> >>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
> >>  1 file changed, 59 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
> >> index 6a72ef4..254009e 100644
> >> --- a/hw/nvram/spapr_nvram.c
> >> +++ b/hw/nvram/spapr_nvram.c
> >> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
> >>          return;
> >>      }
> >>  
> >> +    assert(nvram->buf);
> >> +
> >>      membuf = cpu_physical_memory_map(buffer, &len, 1);
> >> +
> >> +    alen = len;
> >>      if (nvram->drive) {
> >>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
> >> +        if (alen > 0) {
> >> +            memcpy(nvram->buf + offset, membuf, alen);
> > 
> > Why?
> 
> This way I do not need a pre_save hook and I keep buf in sync with the
> file. If I implement pre_save, then buf will serve two purposes: it is
> either NVRAM itself (when there is no backing file; it exists for the
> guest's lifetime) or a migration copy (which exists between pre_save and
> post_load and is then disposed of). Two quite different uses of the same
> thing confuse me. But I do not mind doing it your way, no big deal -
> should I?

This doesn't seem quite right to me.  I don't see anything that pulls
in the whole of the nvram contents at initialization, so it looks like
the buffer will only be in sync with the drive for the portions that
are either read or written by the guest.  Then, if you migrate while
not all of the memory copy is in sync, you could clobber the
out-of-sync parts of the disk copy as well.

Instead, I think you need to suck in the whole of the contents during
init, then all reads can just be supplied from the memory buffer, and
you'll only need to access the backing disk for stores.
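
[Editor's sketch, not from the thread: the approach suggested above -
populate the RAM copy once at init, serve all reads from it, and write
through to the drive on stores - would look roughly like this, using the
same 2014-era block API as the patch.]

    /* at init: pull the whole backing file into the RAM copy, if any */
    nvram->buf = g_malloc0(nvram->size);
    if (nvram->drive &&
        bdrv_pread(nvram->drive, 0, nvram->buf, nvram->size) != nvram->size) {
        return -1;
    }

    /* rtas_nvram_fetch: always serve reads from the RAM copy */
    memcpy(membuf, nvram->buf + offset, len);
    alen = len;

    /* rtas_nvram_store: update the RAM copy, then write through */
    memcpy(nvram->buf + offset, membuf, len);
    alen = len;
    if (nvram->drive) {
        alen = bdrv_pwrite(nvram->drive, offset, membuf, len);
    }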

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: Alexey Kardashevskiy @ 2014-09-26  2:53 UTC
  To: David Gibson; +Cc: Paolo Bonzini, qemu-ppc, Alexander Graf, qemu-devel

On 09/26/2014 12:31 PM, David Gibson wrote:
> On Thu, Sep 25, 2014 at 08:06:40PM +1000, Alexey Kardashevskiy wrote:
>> On 09/25/2014 07:43 PM, Alexander Graf wrote:
>>>
>>>
>>> On 25.09.14 09:02, Alexey Kardashevskiy wrote:
>>>> Currently, the only case in which sPAPR NVRAM migrates is when it is
>>>> backed by a file and copy-storage migration is performed.
>>>>
>>>> This enables a RAM copy of NVRAM even when NVRAM is backed by a file.
>>>>
>>>> This defines a VMSTATE descriptor for the NVRAM device so that the memory
>>>> copy of NVRAM can migrate and be written to a backing file on the
>>>> destination if one is provided.
>>>>
>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>> ---
>>>>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
>>>>  1 file changed, 59 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
>>>> index 6a72ef4..254009e 100644
>>>> --- a/hw/nvram/spapr_nvram.c
>>>> +++ b/hw/nvram/spapr_nvram.c
>>>> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>>>>          return;
>>>>      }
>>>>  
>>>> +    assert(nvram->buf);
>>>> +
>>>>      membuf = cpu_physical_memory_map(buffer, &len, 1);
>>>> +
>>>> +    alen = len;
>>>>      if (nvram->drive) {
>>>>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
>>>> +        if (alen > 0) {
>>>> +            memcpy(nvram->buf + offset, membuf, alen);
>>>
>>> Why?
>>
>> This way I do not need a pre_save hook and I keep buf in sync with the
>> file. If I implement pre_save, then buf will serve two purposes: it is
>> either NVRAM itself (when there is no backing file; it exists for the
>> guest's lifetime) or a migration copy (which exists between pre_save and
>> post_load and is then disposed of). Two quite different uses of the same
>> thing confuse me. But I do not mind doing it your way, no big deal -
>> should I?
> 
> This doesn't seem quite right to me.  I don't see anything that pulls
> in the whole of the nvram contents at initialization, so it looks like
> the buffer will only be in sync with the drive for the portions that
> are either read or written by the guest.  Then, if you migrate while
> not all of the memory copy is in sync, you could clobber the
> out-of-sync parts of the disk copy as well.

Yes. I missed that :-/


> Instead, I think you need to suck in the whole of the contents during
> init, then all reads can just be supplied from the memory buffer, and
> you'll only need to access the backing disk for stores.

I like this and I will do this if Alex does not mind.
Thanks!


-- 
Alexey


* Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
From: Alexander Graf @ 2014-09-29  8:30 UTC
  To: Alexey Kardashevskiy, David Gibson; +Cc: Paolo Bonzini, qemu-ppc, qemu-devel



On 26.09.14 04:53, Alexey Kardashevskiy wrote:
> On 09/26/2014 12:31 PM, David Gibson wrote:
>> On Thu, Sep 25, 2014 at 08:06:40PM +1000, Alexey Kardashevskiy wrote:
>>> On 09/25/2014 07:43 PM, Alexander Graf wrote:
>>>>
>>>>
>>>> On 25.09.14 09:02, Alexey Kardashevskiy wrote:
>>>>> Currently, the only case in which sPAPR NVRAM migrates is when it is
>>>>> backed by a file and copy-storage migration is performed.
>>>>>
>>>>> This enables a RAM copy of NVRAM even when NVRAM is backed by a file.
>>>>>
>>>>> This defines a VMSTATE descriptor for the NVRAM device so that the memory
>>>>> copy of NVRAM can migrate and be written to a backing file on the
>>>>> destination if one is provided.
>>>>>
>>>>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>>>>> ---
>>>>>  hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
>>>>>  1 file changed, 59 insertions(+), 9 deletions(-)
>>>>>
>>>>> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
>>>>> index 6a72ef4..254009e 100644
>>>>> --- a/hw/nvram/spapr_nvram.c
>>>>> +++ b/hw/nvram/spapr_nvram.c
>>>>> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>>>>>          return;
>>>>>      }
>>>>>  
>>>>> +    assert(nvram->buf);
>>>>> +
>>>>>      membuf = cpu_physical_memory_map(buffer, &len, 1);
>>>>> +
>>>>> +    alen = len;
>>>>>      if (nvram->drive) {
>>>>>          alen = bdrv_pread(nvram->drive, offset, membuf, len);
>>>>> +        if (alen > 0) {
>>>>> +            memcpy(nvram->buf + offset, membuf, alen);
>>>>
>>>> Why?
>>>
>>> This way I do not need a pre_save hook and I keep buf in sync with the
>>> file. If I implement pre_save, then buf will serve two purposes: it is
>>> either NVRAM itself (when there is no backing file; it exists for the
>>> guest's lifetime) or a migration copy (which exists between pre_save and
>>> post_load and is then disposed of). Two quite different uses of the same
>>> thing confuse me. But I do not mind doing it your way, no big deal -
>>> should I?
>>
>> This doesn't seem quite right to me.  I don't see anything that pulls
>> in the whole of the nvram contents at initialization, so it looks like
>> the buffer will only be in sync with the drive for the portions that
>> are either read or written by the guest.  Then, if you migrate while
>> not all of the memory copy is in sync, you could clobber the
>> out-of-sync parts of the disk copy as well.
> 
> Yes. I missed that :-/
> 
> 
>> Instead, I think you need to suck in the whole of the contents during
>> init, then all reads can just be supplied from the memory buffer, and
>> you'll only need to access the backing disk for stores.
> 
> I like this and I will do this if Alex does not mind.

So you'd always keep a shadow copy in RAM and only use the file for
writes? Sounds like a good plan to me.


Alex

