* [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
@ 2013-05-23 17:44 Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support Corey Bryant
                   ` (8 more replies)
  0 siblings, 9 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

This patch series provides VNVRAM persistent storage support that
QEMU can use internally.  The initial target user will be a software
vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
reboot/migrate and retain the keys.

This support uses QEMU's block driver to provide persistent storage
by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
drive image is provided with the -drive command line option just like
any other drive image, and the vnvram_create() API finds it by its
drive ID.
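
For illustration, the backing image could be attached with its own -drive
option along these lines (the file name and id here are just examples; any
drive id that the block layer can resolve will work):

  -drive file=/var/lib/qemu/vnvram.qcow2,if=none,id=drive-vnvram0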

The APIs allow VNVRAM entries to be registered, one at a time, each
with a maximum blob size.  An entry's blob can then be read from or
written to the drive.  Here's an example of usage:

VNVRAM *vnvram;
int errcode;
VNVRAMEntryName entry_name;
const char *blob_w = "blob data";
char *blob_r;
uint32_t blob_r_size;

vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
strcpy(entry_name, "first-entry");
vnvram_register_entry(vnvram, &entry_name, 1024);
vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
vnvram_delete(vnvram);
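
(Error checking is omitted above for brevity.  Callers should check errcode
and the API return codes, and the blob returned by vnvram_read_entry() is
allocated by the VNVRAM code and must be freed by the caller with g_free().)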

Thanks,
Corey

Corey Bryant (7):
  vnvram: VNVRAM bdrv support
  vnvram: VNVRAM in-memory support
  vnvram: VNVRAM bottom-half r/w scheduling support
  vnvram: VNVRAM internal APIs
  vnvram: VNVRAM additional debug support
  main: Initialize VNVRAM
  monitor: QMP/HMP support for retrieving VNVRAM details

 Makefile.objs    |    2 +
 hmp.c            |   32 ++
 hmp.h            |    1 +
 monitor.c        |    7 +
 qapi-schema.json |   47 ++
 qmp-commands.hx  |   41 ++
 vl.c             |    6 +
 vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 vnvram.h         |   36 ++
 9 files changed, 1426 insertions(+), 0 deletions(-)
 create mode 100644 vnvram.c
 create mode 100644 vnvram.h


* [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-24 13:06   ` Kevin Wolf
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support Corey Bryant
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Provides low-level VNVRAM functionality that reads and writes data,
such as an entry's binary blob, from/to a drive image using the block
driver.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
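[Illustration for review; not part of the patch.]  With the QEMU_PACKED
structs below, the header is 10 bytes (uint16 version + uint32 magic +
uint32 num_entries) and each on-drive entry descriptor is 32 bytes
(16-byte name + uint64 blob_offset + uint32 cur_size + uint32 max_size).
For a single entry with max_size 1024 the drive layout works out to:

  offset    0: VNVRAMDrvHdr    (10 bytes)
  offset   10: VNVRAMDrvEntry  (32 bytes)    <- VNVRAM_FIRST_ENTRY_OFFSET()
  offset   42: entry's blob    (1024 bytes)  <- VNVRAM_BLOB_OFFSET_FROM_ENTRY(10)
  offset 1066: next entry would start here   <- VNVRAM_NEXT_ENTRY_OFFSET(entry)
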
 Makefile.objs |    2 +
 vnvram.c      |  487 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 vnvram.h      |   22 +++
 3 files changed, 511 insertions(+), 0 deletions(-)
 create mode 100644 vnvram.c
 create mode 100644 vnvram.h

diff --git a/Makefile.objs b/Makefile.objs
index 286ce06..4875a94 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -76,6 +76,8 @@ common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
 
 common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
 
+common-obj-y += vnvram.o
+
 ######################################################################
 # qapi
 
diff --git a/vnvram.c b/vnvram.c
new file mode 100644
index 0000000..e467198
--- /dev/null
+++ b/vnvram.c
@@ -0,0 +1,487 @@
+/*
+ * VNVRAM -- stores persistent data in image files
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger    <stefanb@us.ibm.com>
+ *  Corey Bryant     <coreyb@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "vnvram.h"
+#include "block/block.h"
+
+/*
+#define VNVRAM_DEBUG
+*/
+
+#ifdef VNVRAM_DEBUG
+#define DPRINTF(fmt, ...) \
+    do { fprintf(stderr, fmt, ## __VA_ARGS__); } while (0)
+#else
+#define DPRINTF(fmt, ...) \
+    do { } while (0)
+#endif
+
+#define VNVRAM_ENTRY_DATA                              \
+    VNVRAMEntryName name; /* name of entry */          \
+    uint64_t blob_offset; /* start of blob on drive */ \
+    uint32_t cur_size;    /* current size of blob */   \
+    uint32_t max_size;    /* max size of blob */
+
+/* The following VNVRAM information is stored in-memory */
+typedef struct VNVRAMEntry {
+    VNVRAM_ENTRY_DATA
+    QLIST_ENTRY(VNVRAMEntry) next;
+} VNVRAMEntry;
+
+struct VNVRAM {
+    char *drv_id;            /* corresponds to -drive id= on command line */
+    BlockDriverState *bds;   /* bds for the VNVRAM drive */
+    uint64_t end_offset;     /* offset on drive where next entry will go */
+    QLIST_HEAD(entries_head, VNVRAMEntry) entries_head; /* in-memory entries */
+    QLIST_ENTRY(VNVRAM) list;
+};
+
+/* There can be multiple VNVRAMS */
+static QLIST_HEAD(, VNVRAM) vnvrams = QLIST_HEAD_INITIALIZER(vnvrams);
+
+#define VNVRAM_VERSION_1        1
+#define VNVRAM_CURRENT_VERSION  VNVRAM_VERSION_1
+#define VNVRAM_MAGIC            0x4E56524D /* NVRM */
+
+/* VNVRAM drive data consists of a header followed by entries and their blobs.
+ * For example:
+ *   | header | entry 1 | entry 1's blob | entry 2 | entry 2's blob | ... |
+ */
+typedef struct VNVRAMDrvHdr {
+    uint16_t version;
+    uint32_t magic;
+    uint32_t num_entries;
+} QEMU_PACKED VNVRAMDrvHdr;
+
+typedef struct VNVRAMDrvEntry {
+    VNVRAM_ENTRY_DATA
+} QEMU_PACKED VNVRAMDrvEntry;
+
+static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
+static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
+
+/*
+ * Macros for finding entries and their drive offsets
+ */
+#define VNVRAM_FIRST_ENTRY(vnvram) \
+        QLIST_FIRST(&(vnvram)->entries_head)
+
+#define VNVRAM_NEXT_ENTRY(cur_entry) \
+    QLIST_NEXT(cur_entry, next)
+
+#define VNVRAM_FIRST_ENTRY_OFFSET() \
+    sizeof(VNVRAMDrvHdr)
+
+#define VNVRAM_NEXT_ENTRY_OFFSET(entry) \
+    ((entry)->blob_offset + (entry)->max_size)
+
+#define VNVRAM_NEXT_AVAIL_BLOB_OFFSET(vnvram) \
+    ((vnvram)->end_offset + sizeof(VNVRAMDrvEntry))
+
+#define VNVRAM_ENTRY_OFFSET_FROM_BLOB(blob_offset) \
+    (blob_offset - sizeof(VNVRAMDrvEntry))
+
+#define VNVRAM_BLOB_OFFSET_FROM_ENTRY(entry_offset) \
+    (entry_offset + sizeof(VNVRAMDrvEntry))
+
+/************************* VNVRAM drv ********************************/
+/* Low-level VNVRAM functions that work with the drive header and    */
+/* entries.                                                          */
+/*********************************************************************/
+
+/*
+ * Big-endian conversions
+ */
+static void vnvram_drv_hdr_cpu_to_be(VNVRAMDrvHdr *hdr)
+{
+    hdr->version = cpu_to_be16(hdr->version);
+    hdr->magic = cpu_to_be32(hdr->magic);
+    hdr->num_entries = cpu_to_be32(hdr->num_entries);
+}
+
+static void vnvram_drv_hdr_be_to_cpu(VNVRAMDrvHdr *hdr)
+{
+    hdr->version = be16_to_cpu(hdr->version);
+    hdr->magic = be32_to_cpu(hdr->magic);
+    hdr->num_entries = be32_to_cpu(hdr->num_entries);
+}
+
+static void vnvram_drv_entry_cpu_to_be(VNVRAMDrvEntry *drv_entry)
+{
+    drv_entry->blob_offset = cpu_to_be64(drv_entry->blob_offset);
+    drv_entry->cur_size = cpu_to_be32(drv_entry->cur_size);
+    drv_entry->max_size = cpu_to_be32(drv_entry->max_size);
+}
+
+static void vnvram_drv_entry_be_to_cpu(VNVRAMDrvEntry *drv_entry)
+{
+    drv_entry->blob_offset = be64_to_cpu(drv_entry->blob_offset);
+    drv_entry->cur_size = be32_to_cpu(drv_entry->cur_size);
+    drv_entry->max_size = be32_to_cpu(drv_entry->max_size);
+}
+
+/*
+ * Find the VNVRAM that corresponds to the specified drive ID string
+ */
+static VNVRAM *vnvram_drv_find_by_id(const char *drv_id)
+{
+    VNVRAM *vnvram;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        if (strcmp(vnvram->drv_id, drv_id) == 0) {
+            return vnvram;
+        }
+    }
+
+    return NULL;
+}
+
+/*
+ * Increase the drive size if it's too small to fit the VNVRAM data
+ */
+static int vnvram_drv_adjust_size(VNVRAM *vnvram)
+{
+    int rc = 0;
+    int64_t needed_size;
+
+    needed_size = 0;
+
+    if (bdrv_getlength(vnvram->bds) < needed_size) {
+        rc = bdrv_truncate(vnvram->bds, needed_size);
+        if (rc != 0) {
+            DPRINTF("%s: VNVRAM drive too small\n", __func__);
+        }
+    }
+
+    return rc;
+}
+
+/*
+ * Write a header to the drive with entry count of zero
+ */
+static int vnvram_drv_hdr_create_empty(VNVRAM *vnvram)
+{
+    VNVRAMDrvHdr hdr;
+
+    hdr.version = VNVRAM_CURRENT_VERSION;
+    hdr.magic = VNVRAM_MAGIC;
+    hdr.num_entries = 0;
+
+    vnvram_drv_hdr_cpu_to_be((&hdr));
+
+    if (bdrv_pwrite(vnvram->bds, 0, (&hdr), sizeof(hdr)) != sizeof(hdr)) {
+        DPRINTF("%s: Write of header to drive failed\n", __func__);
+        return -EIO;
+    }
+
+    vnvram->end_offset = sizeof(VNVRAMDrvHdr);
+
+    return 0;
+}
+
+/*
+ * Read the header from the drive
+ */
+static int vnvram_drv_hdr_read(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    if (bdrv_pread(vnvram->bds, 0, hdr, sizeof(*hdr)) != sizeof(*hdr)) {
+        DPRINTF("%s: Read of header from drive failed\n", __func__);
+        return -EIO;
+    }
+
+    vnvram_drv_hdr_be_to_cpu(hdr);
+
+    return 0;
+}
+
+/*
+ * Write the header to the drive
+ */
+static int vnvram_drv_hdr_write(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    vnvram_drv_hdr_cpu_to_be(hdr);
+
+    if (bdrv_pwrite(vnvram->bds, 0, hdr, sizeof(*hdr)) != sizeof(*hdr)) {
+        DPRINTF("%s: Write of header to drive failed\n", __func__);
+        return -EIO;
+    }
+
+    vnvram_drv_hdr_be_to_cpu(hdr);
+
+    return 0;
+}
+
+/*
+ * Read an entry from the drive (does not include blob)
+ */
+static int vnvram_drv_entry_read(VNVRAM *vnvram, uint64_t entry_offset,
+                                 VNVRAMDrvEntry *drv_entry)
+{
+    if (bdrv_pread(vnvram->bds, entry_offset, drv_entry, sizeof(*drv_entry))
+            != sizeof(*drv_entry)) {
+        DPRINTF("%s: VNVRAM error reading entry from drive\n", __func__);
+        return -EIO;
+    }
+
+    vnvram_drv_entry_be_to_cpu(drv_entry);
+
+    return 0;
+}
+
+/*
+ * Write an entry to the drive (does not include blob)
+ */
+static int vnvram_drv_entry_write(VNVRAM *vnvram, uint64_t entry_offset,
+                                  VNVRAMDrvEntry *drv_entry)
+{
+    vnvram_drv_entry_cpu_to_be(drv_entry);
+
+    if (bdrv_pwrite(vnvram->bds, entry_offset, drv_entry, sizeof(*drv_entry))
+            != sizeof(*drv_entry)) {
+        DPRINTF("%s: VNVRAM error writing entry to drive\n", __func__);
+        return -EIO;
+    }
+
+    vnvram_drv_entry_be_to_cpu(drv_entry);
+
+    return 0;
+}
+
+/*
+ * Read an entry's blob from the drive
+ */
+static int vnvram_drv_entry_read_blob(VNVRAM *vnvram, const VNVRAMEntry *entry,
+                                      char **blob, uint32_t *blob_size)
+{
+    int rc = 0;
+
+    *blob = NULL;
+    *blob_size = 0;
+
+    if (entry->cur_size == 0) {
+        DPRINTF("%s: VNVRAM entry not found\n", __func__);
+        rc = -ENOENT;
+        goto err_exit;
+    }
+
+    *blob = g_malloc(entry->cur_size);
+
+    DPRINTF("%s: VNVRAM read: name=%s, blob_offset=%"PRIu64", size=%"PRIu32"\n",
+            __func__, (char *)entry->name, entry->blob_offset, entry->cur_size);
+
+    if (bdrv_pread(vnvram->bds, entry->blob_offset, *blob, entry->cur_size)
+            != entry->cur_size) {
+        DPRINTF("%s: VNVRAM error reading blob from drive\n", __func__);
+        rc = -EIO;
+        goto err_exit;
+    }
+
+    *blob_size = entry->cur_size;
+
+    return rc;
+
+err_exit:
+    g_free(*blob);
+    *blob = NULL;
+
+    return rc;
+}
+
+/*
+ * Write an entry's blob to the drive
+ */
+static int vnvram_drv_entry_write_blob(VNVRAM *vnvram, VNVRAMEntry *entry,
+                                       char *blob, uint32_t blob_size)
+{
+    int rc;
+    uint64_t blob_offset;
+
+    if (blob_size == 0 || blob_size > entry->max_size) {
+        DPRINTF("%s: Blob size is not valid for entry\n", __func__);
+        rc = -EMSGSIZE;
+        goto err_exit;
+    }
+
+    rc = vnvram_drv_adjust_size(vnvram);
+    if (rc != 0) {
+        goto err_exit;
+    }
+
+    if (entry->blob_offset == 0) {
+        /* Entry doesn't exist on the drive yet */
+        blob_offset = VNVRAM_NEXT_AVAIL_BLOB_OFFSET(vnvram);
+    } else {
+        blob_offset = entry->blob_offset;
+    }
+
+    DPRINTF("%s: VNVRAM write: name=%s, blob_offset=%"PRIu64", "
+            "size=%"PRIu32"\n", __func__, (char *)entry->name,
+            blob_offset, blob_size);
+
+    if (bdrv_pwrite(vnvram->bds, blob_offset, blob, blob_size) != blob_size) {
+        DPRINTF("%s: VNVRAM error writing blob to drive\n", __func__);
+        rc = -EIO;
+        goto err_exit;
+    }
+
+    if (entry->blob_offset == 0) {
+        /* Entry doesn't exist on the drive yet */
+        rc = vnvram_drv_entry_create(vnvram, entry,
+                                     VNVRAM_ENTRY_OFFSET_FROM_BLOB(blob_offset),
+                                     blob_size);
+        if (rc != 0) {
+            DPRINTF("%s: Unable to create VNVRAM entry\n", __func__);
+            goto err_exit;
+        }
+    } else {
+        rc = vnvram_drv_entry_update(vnvram, entry,
+                                     VNVRAM_ENTRY_OFFSET_FROM_BLOB(blob_offset),
+                                     blob_size);
+        if (rc != 0) {
+            DPRINTF("%s: Unable to update VNVRAM entry\n", __func__);
+            goto err_exit;
+        }
+    }
+
+    entry->blob_offset = blob_offset;
+    entry->cur_size = blob_size;
+
+err_exit:
+    return rc;
+}
+
+/*
+ * Create an entry and write it to the drive (does not include blob)
+ */
+static int vnvram_drv_entry_create(VNVRAM *vnvram, VNVRAMEntry *entry,
+                                   uint64_t entry_offset, uint32_t blob_size)
+{
+    int rc;
+    VNVRAMDrvHdr hdr;
+    VNVRAMDrvEntry *drv_entry;
+
+    drv_entry = g_new0(VNVRAMDrvEntry, 1);
+
+    pstrcpy(drv_entry->name, sizeof(drv_entry->name), (char *)entry->name);
+    drv_entry->blob_offset = VNVRAM_BLOB_OFFSET_FROM_ENTRY(entry_offset);
+    drv_entry->cur_size = blob_size;
+    drv_entry->max_size = entry->max_size;
+
+    rc = vnvram_drv_entry_write(vnvram, entry_offset, drv_entry);
+    if (rc != 0) {
+        goto err_exit;
+    }
+
+    rc = vnvram_drv_hdr_read(vnvram, (&hdr));
+    if (rc != 0) {
+        goto err_exit;
+    }
+
+    hdr.num_entries++;
+
+    rc = vnvram_drv_hdr_write(vnvram, (&hdr));
+    if (rc != 0) {
+        goto err_exit;
+    }
+
+    vnvram->end_offset = drv_entry->blob_offset + drv_entry->max_size;
+
+err_exit:
+    g_free(drv_entry);
+
+    return rc;
+}
+
+/*
+ * Update an entry on the drive (does not include blob)
+ */
+static int vnvram_drv_entry_update(VNVRAM *vnvram, VNVRAMEntry *entry,
+                                   uint64_t entry_offset, uint32_t blob_size)
+{
+    int rc;
+    VNVRAMDrvEntry *drv_entry;
+
+    drv_entry = g_new0(VNVRAMDrvEntry, 1);
+
+    pstrcpy(drv_entry->name, sizeof(drv_entry->name), (char *)entry->name);
+    drv_entry->blob_offset = VNVRAM_BLOB_OFFSET_FROM_ENTRY(entry_offset);
+    drv_entry->cur_size = blob_size;
+    drv_entry->max_size = entry->max_size;
+
+    rc = vnvram_drv_entry_write(vnvram, entry_offset, drv_entry);
+
+    g_free(drv_entry);
+
+    return rc;
+}
+
+/*
+ * Get all entry data from the drive (does not get blob data)
+ */
+static int vnvram_drv_entries_get(VNVRAM *vnvram, VNVRAMDrvHdr *hdr,
+                                  VNVRAMDrvEntry **drv_entries,
+                                  int *num_entries)
+{
+    int i, rc;
+    uint64_t entry_offset;
+
+    *drv_entries = NULL;
+    *num_entries = 0;
+
+    *num_entries = hdr->num_entries;
+    if (*num_entries == 0) {
+        return 0;
+    }
+
+    *drv_entries = g_malloc_n(hdr->num_entries, sizeof(VNVRAMDrvEntry));
+
+    entry_offset = VNVRAM_FIRST_ENTRY_OFFSET();
+
+    for (i = 0; i < hdr->num_entries; i++) {
+        VNVRAMDrvEntry *drv_entry = &(*drv_entries)[i];
+
+        rc = vnvram_drv_entry_read(vnvram, entry_offset, drv_entry);
+        if (rc != 0) {
+            goto err_exit;
+        }
+
+        entry_offset = VNVRAM_NEXT_ENTRY_OFFSET(drv_entry);
+    }
+
+    return 0;
+
+err_exit:
+    g_free(*drv_entries);
+    *drv_entries = NULL;
+    *num_entries = 0;
+
+    return rc;
+}
+
+/*
+ * Check if the VNVRAM drive header is valid
+ */
+static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    if (hdr->version != VNVRAM_CURRENT_VERSION) {
+        DPRINTF("%s: VNVRAM drive version not valid\n", __func__);
+        return false;
+    }
+
+    if (hdr->magic != VNVRAM_MAGIC) {
+        DPRINTF("%s: VNVRAM drive magic not valid\n", __func__);
+        return false;
+    }
+
+    return true;
+}
diff --git a/vnvram.h b/vnvram.h
new file mode 100644
index 0000000..b6d7cd7
--- /dev/null
+++ b/vnvram.h
@@ -0,0 +1,22 @@
+/*
+ * VNVRAM -- stores persistent data in image files
+ *
+ * Copyright (C) 2013 IBM Corporation
+ *
+ * Authors:
+ *  Stefan Berger    <stefanb@us.ibm.com>
+ *  Corey Bryant     <coreyb@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef _QEMU_VNVRAM_H_
+#define _QEMU_VNVRAM_H_
+
+typedef struct VNVRAM VNVRAM;
+
+#define VNVRAM_ENTRY_NAME_LENGTH 16
+typedef char VNVRAMEntryName[VNVRAM_ENTRY_NAME_LENGTH];
+
+#endif
-- 
1.7.1


* [Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 3/7] vnvram: VNVRAM bottom-half r/w scheduling support Corey Bryant
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Provides support for in-memory VNVRAM entries.  The in-memory
entries are used for fast access to entry data such as the
current or max size of an entry and the disk offset where an
entry's binary blob data is stored.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
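[Illustration for review; not part of the patch.]  The drive sizing below
rounds the total VNVRAM footprint up to whole kilobytes before truncating
the image.  For the single-entry layout from patch 1 (max_size 1024):

  total    = 10 + 32 + 1024             = 1066 bytes
  size_kb  = ROUNDUP(1066, 1024) / 1024 = 2
  needed   = 2 * 1024                   = 2048 bytes

which is a multiple of 512, satisfying the qcow2 constraint mentioned in
the new comment.
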
 vnvram.c |  196 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 195 insertions(+), 1 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index e467198..37b7070 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -13,6 +13,7 @@
 
 #include "vnvram.h"
 #include "block/block.h"
+#include "monitor/monitor.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -69,6 +70,14 @@ typedef struct VNVRAMDrvEntry {
 
 static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
 static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
+static int vnvram_register_entry_internal(VNVRAM *, const VNVRAMEntryName *,
+                                          uint64_t, uint32_t, uint32_t);
+static VNVRAMEntry *vnvram_find_entry(VNVRAM *, const VNVRAMEntryName *);
+static uint64_t vnvram_get_size_kb(VNVRAM *);
+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+    (((VAL)+(SIZE)-1) & ~((SIZE)-1))
 
 /*
  * Macros for finding entries and their drive offsets
@@ -154,7 +163,8 @@ static int vnvram_drv_adjust_size(VNVRAM *vnvram)
     int rc = 0;
     int64_t needed_size;
 
-    needed_size = 0;
+    /* qcow2 size needs to be multiple of 512 */
+    needed_size = vnvram_get_size_kb(vnvram) * 1024;
 
     if (bdrv_getlength(vnvram->bds) < needed_size) {
         rc = bdrv_truncate(vnvram->bds, needed_size);
@@ -485,3 +495,187 @@ static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
 
     return true;
 }
+
+/************************ VNVRAM in-memory ***************************/
+/* High-level VNVRAM functions that work with in-memory entries.     */
+/*********************************************************************/
+
+/*
+ * Check if the specified vnvram has been created
+ */
+static bool vnvram_exists(VNVRAM *vnvram_target)
+{
+    VNVRAM *vnvram;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        if (vnvram == vnvram_target) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+/*
+ * Get total size of the VNVRAM
+ */
+static uint64_t vnvram_get_size(VNVRAM *vnvram)
+{
+    const VNVRAMEntry *entry;
+    uint64_t totsize = sizeof(VNVRAMDrvHdr);
+
+    for (entry = VNVRAM_FIRST_ENTRY(vnvram); entry != NULL;
+         entry = VNVRAM_NEXT_ENTRY(entry)) {
+        totsize += sizeof(VNVRAMDrvEntry) + entry->max_size;
+    }
+
+    return totsize;
+}
+
+/*
+ * Get the total size of the VNVRAM in kilobytes (rounded up to the next kb)
+ */
+static uint64_t vnvram_get_size_kb(VNVRAM *vnvram)
+{
+    return ROUNDUP(vnvram_get_size(vnvram), 1024) / 1024;
+}
+
+/*
+ * Check if the VNVRAM entries are valid
+ */
+static bool vnvram_entries_are_valid(VNVRAM *vnvram, uint64_t drv_size)
+{
+    const VNVRAMEntry *i_entry, *j_entry;
+
+    /* Entries must not overlap or point beyond end of drive size */
+    for (i_entry = VNVRAM_FIRST_ENTRY(vnvram); i_entry != NULL;
+         i_entry = VNVRAM_NEXT_ENTRY(i_entry)) {
+
+        uint64_t i_blob_start = i_entry->blob_offset;
+        uint64_t i_blob_end = i_blob_start + i_entry->max_size-1;
+
+        if (i_entry->max_size == 0) {
+            DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n", __func__);
+            return false;
+        }
+
+        if (i_blob_end > drv_size) {
+            DPRINTF("%s: VNVRAM entry blob too large for drive\n", __func__);
+            return false;
+        }
+
+        for (j_entry = VNVRAM_NEXT_ENTRY(i_entry); j_entry != NULL;
+             j_entry = VNVRAM_NEXT_ENTRY(j_entry)) {
+
+            uint64_t j_blob_start = j_entry->blob_offset;
+            uint64_t j_blob_end = j_blob_start + j_entry->max_size-1;
+
+            if (j_entry->max_size == 0) {
+                DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n", __func__);
+                return false;
+            }
+
+            if (j_blob_end > drv_size) {
+                DPRINTF("%s: VNVRAM entry blob too large for drive\n",
+                        __func__);
+                return false;
+            }
+
+            if ((i_blob_start >= j_blob_start && i_blob_start <= j_blob_end) ||
+                (i_blob_end   >= j_blob_start && i_blob_end   <= j_blob_end)) {
+                DPRINTF("%s: VNVRAM entries overlap\n", __func__);
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
+/*
+ * Synchronize the in-memory VNVRAM entries with those found on the drive.
+ */
+static int vnvram_sync_from_drv(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    int rc = 0, num_entries = 0, i;
+    VNVRAMDrvEntry *drv_entries = NULL;
+
+    rc = vnvram_drv_entries_get(vnvram, hdr, &drv_entries, &num_entries);
+    if (rc != 0) {
+        return rc;
+    }
+
+    for (i = 0; i < num_entries; i++) {
+        rc = vnvram_register_entry_internal(vnvram,
+                                (const VNVRAMEntryName *)&drv_entries[i].name,
+                                drv_entries[i].blob_offset,
+                                drv_entries[i].cur_size,
+                                drv_entries[i].max_size);
+        if (rc != 0) {
+            goto err_exit;
+        }
+    }
+
+    vnvram->end_offset = vnvram_get_size(vnvram);
+
+err_exit:
+    g_free(drv_entries);
+
+    return rc;
+}
+
+/*
+ * Register an entry with the in-memory entry list
+ */
+static int vnvram_register_entry_internal(VNVRAM *vnvram,
+                                          const VNVRAMEntryName *entry_name,
+                                          uint64_t blob_offset,
+                                          uint32_t cur_size,
+                                          uint32_t max_size)
+{
+    VNVRAMEntry *new_entry;
+    const VNVRAMEntry *existing_entry;
+    int rc = 0;
+
+    existing_entry = vnvram_find_entry(vnvram, entry_name);
+    if (existing_entry) {
+        if (existing_entry->max_size != max_size) {
+            qerror_report(ERROR_CLASS_GENERIC_ERROR,
+                         "VNVRAM entry already registered with different size");
+            return -EINVAL;
+        }
+        /* Entry already exists with same max size - success */
+        return 0;
+    }
+
+    new_entry = g_new0(VNVRAMEntry, 1);
+
+    pstrcpy(new_entry->name, sizeof(new_entry->name), (char *)entry_name);
+    new_entry->blob_offset = blob_offset;
+    new_entry->cur_size = cur_size;
+    new_entry->max_size = max_size;
+
+    QLIST_INSERT_HEAD(&vnvram->entries_head, new_entry, next);
+
+    DPRINTF("%s: VNVRAM entry '%s' registered with max_size=%"PRIu32"\n",
+            __func__, new_entry->name, new_entry->max_size);
+
+    return rc;
+}
+
+/*
+ * Find the in-memory VNVRAM entry with the specified name
+ */
+static VNVRAMEntry *vnvram_find_entry(VNVRAM *vnvram,
+                                      const VNVRAMEntryName *entry_name)
+{
+    VNVRAMEntry *entry;
+
+    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+        if (!strncmp(entry->name, (char *)entry_name, sizeof(*entry_name))) {
+            return entry;
+        }
+    }
+
+    return NULL;
+}
-- 
1.7.1


* [Qemu-devel] [PATCH 3/7] vnvram: VNVRAM bottom-half r/w scheduling support
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 4/7] vnvram: VNVRAM internal APIs Corey Bryant
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Provides support that schedules and executes VNVRAM read/write
requests.  A bottom-half is used to perform reads/writes from
the QEMU main thread.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
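[Illustration for review; not part of the patch.]  The synchronous call path
added below looks roughly like this:

  caller thread                            QEMU main thread (bottom half)
  -------------                            ------------------------------
  rwr = vnvram_rwrequest_init_write(...);
  vnvram_rwrequest_schedule(rwr):
    enqueue rwr on vnvram_rwrequests
    qemu_bh_schedule(vnvram_bh)  --------> vnvram_rwrequest_callback():
    qemu_cond_wait(&rwr->completion, ...)    dequeue rwr
                                             vnvram_rwrequest_exec(rwr):
                                               bdrv_pread()/bdrv_pwrite()
                                             qemu_cond_signal(&rwr->completion)
  rc = rwr->rc;
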
 vnvram.c |  142 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 142 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 37b7070..4157482 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -14,6 +14,7 @@
 #include "vnvram.h"
 #include "block/block.h"
 #include "monitor/monitor.h"
+#include "qemu/thread.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -68,6 +69,30 @@ typedef struct VNVRAMDrvEntry {
     VNVRAM_ENTRY_DATA
 } QEMU_PACKED VNVRAMDrvEntry;
 
+/* Used to pass read/write requests to the bottom-half function */
+typedef struct VNVRAMRWRequest {
+    VNVRAM *vnvram;
+    VNVRAMEntry *entry;
+    bool is_write;
+    char **blob_r;
+    uint32_t *blob_r_size;
+    char *blob_w;
+    uint32_t blob_w_size;
+    int rc;
+
+    QemuMutex completion_mutex;
+    QemuCond completion;
+
+    QSIMPLEQ_ENTRY(VNVRAMRWRequest) list;
+} VNVRAMRWRequest;
+
+/* A mutex protected queue where read/write requests are stored */
+static QemuMutex vnvram_rwrequests_mutex;
+static QSIMPLEQ_HEAD(, VNVRAMRWRequest) vnvram_rwrequests =
+    QSIMPLEQ_HEAD_INITIALIZER(vnvram_rwrequests);
+
+static QEMUBH *vnvram_bh;
+
 static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
 static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
 static int vnvram_register_entry_internal(VNVRAM *, const VNVRAMEntryName *,
@@ -679,3 +704,120 @@ static VNVRAMEntry *vnvram_find_entry(VNVRAM *vnvram,
 
     return NULL;
 }
+
+/*********************** VNVRAM rwrequest ****************************/
+/* High-level VNVRAM functions that schedule and kick off read/write */
+/* requests.                                                         */
+/*********************************************************************/
+
+/*
+ * VNVRAMRWRequest initialization for read requests
+ */
+static VNVRAMRWRequest *vnvram_rwrequest_init_read(VNVRAM *vnvram,
+                                                   VNVRAMEntry *entry,
+                                                   char **blob,
+                                                   uint32_t *blob_size)
+{
+    VNVRAMRWRequest *rwr;
+
+    rwr = g_new0(VNVRAMRWRequest, 1);
+
+    rwr->is_write = false;
+    rwr->entry = entry;
+    rwr->vnvram = vnvram;
+    rwr->blob_r = blob;
+    rwr->blob_r_size = blob_size;
+
+    return rwr;
+}
+
+/*
+ * VNVRAMRWRequest initialization for write requests
+ */
+static VNVRAMRWRequest *vnvram_rwrequest_init_write(VNVRAM *vnvram,
+                                                    VNVRAMEntry *entry,
+                                                    char *blob,
+                                                    uint32_t blob_size)
+{
+    VNVRAMRWRequest *rwr;
+
+    rwr = g_new0(VNVRAMRWRequest, 1);
+
+    rwr->is_write = true;
+    rwr->entry = entry;
+    rwr->vnvram = vnvram;
+    rwr->blob_w = blob;
+    rwr->blob_w_size = blob_size;
+
+    return rwr;
+}
+
+/*
+ * Execute a read or write of blob data based on an VNVRAMRWRequest
+ */
+static int vnvram_rwrequest_exec(VNVRAMRWRequest *rwr)
+{
+    int rc = 0;
+
+    if (rwr->is_write) {
+        rc = vnvram_drv_entry_write_blob(rwr->vnvram, rwr->entry,
+                                         rwr->blob_w, rwr->blob_w_size);
+    } else {
+        rc = vnvram_drv_entry_read_blob(rwr->vnvram, rwr->entry,
+                                        rwr->blob_r, rwr->blob_r_size);
+    }
+
+    rwr->rc = rc;
+
+    qemu_mutex_lock(&rwr->completion_mutex);
+    qemu_cond_signal(&rwr->completion);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+
+    return rc;
+}
+
+/*
+ * Bottom-half callback that is invoked by QEMU's main thread to
+ * process VNVRAM read/write requests.
+ */
+static void vnvram_rwrequest_callback(void *opaque)
+{
+    VNVRAMRWRequest *rwr, *next;
+
+    qemu_mutex_lock(&vnvram_rwrequests_mutex);
+
+    QSIMPLEQ_FOREACH_SAFE(rwr, &vnvram_rwrequests, list, next) {
+        QSIMPLEQ_REMOVE(&vnvram_rwrequests, rwr, VNVRAMRWRequest, list);
+
+        qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+
+        vnvram_rwrequest_exec(rwr);
+
+        qemu_mutex_lock(&vnvram_rwrequests_mutex);
+    }
+
+    qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+}
+
+/*
+ * Schedules a bottom-half to read or write a blob to the VNVRAM drive.
+ */
+static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
+{
+    int rc = 0;
+
+    qemu_mutex_lock(&vnvram_rwrequests_mutex);
+    QSIMPLEQ_INSERT_TAIL(&vnvram_rwrequests, rwr, list);
+    qemu_mutex_unlock(&vnvram_rwrequests_mutex);
+
+    qemu_bh_schedule(vnvram_bh);
+
+    /* All reads/writes are synchronous so we wait for completion */
+    qemu_mutex_lock(&rwr->completion_mutex);
+    qemu_cond_wait(&rwr->completion, &rwr->completion_mutex);
+    qemu_mutex_unlock(&rwr->completion_mutex);
+
+    rc = rwr->rc;
+
+    return rc;
+}
-- 
1.7.1


* [Qemu-devel] [PATCH 4/7] vnvram: VNVRAM internal APIs
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (2 preceding siblings ...)
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 3/7] vnvram: VNVRAM bottom-half r/w scheduling support Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 5/7] vnvram: VNVRAM additional debug support Corey Bryant
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Provides VNVRAM APIs that can be used by other areas of QEMU to
provide persistent storage.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
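[Illustration for review; not part of the patch.]  A minimal sketch of the
read path with error handling, assuming the "first-entry" entry from the
cover letter was registered and written earlier:

  VNVRAMEntryName name;
  char *blob = NULL;
  uint32_t blob_size = 0;
  int rc;

  pstrcpy(name, sizeof(name), "first-entry");
  rc = vnvram_read_entry(vnvram, &name, &blob, &blob_size);
  if (rc == 0) {
      /* consume blob_size bytes of blob ... */
      g_free(blob);   /* the blob buffer is g_malloc()'d by vnvram.c */
  }
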
 vnvram.c |  266 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 vnvram.h |   14 +++
 2 files changed, 280 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 4157482..357923d 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -15,6 +15,7 @@
 #include "block/block.h"
 #include "monitor/monitor.h"
 #include "qemu/thread.h"
+#include "sysemu/sysemu.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -821,3 +822,268 @@ static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
 
     return rc;
 }
+
+/************************* VNVRAM APIs *******************************/
+/* VNVRAM APIs that can be used by QEMU to provide persistent storage*/
+/*********************************************************************/
+
+/*
+ * Initialize VNVRAM
+ *
+ * This must be called before any other APIs.
+ */
+int vnvram_init(void)
+{
+    qemu_mutex_init(&vnvram_rwrequests_mutex);
+    vnvram_bh = qemu_bh_new(vnvram_rwrequest_callback, NULL);
+    DPRINTF("%s: VNVRAM initialized\n", __func__);
+
+    return 0;
+}
+
+/*
+ * Create a VNVRAM instance
+ *
+ * The VNVRAM instance will use the drive with the corresponding ID as
+ * its persistent storage device.
+ */
+VNVRAM *vnvram_create(const char *drv_id, bool fail_on_invalid, int *errcode)
+{
+    int rc;
+    VNVRAM *vnvram = NULL;
+    VNVRAMDrvHdr hdr;
+    BlockDriverState *bds;
+
+    *errcode = 0;
+
+    if (runstate_check(RUN_STATE_INMIGRATE)) {
+        qerror_report(QERR_MIGRATION_ACTIVE);
+        rc = -EAGAIN;
+        goto err_exit;
+    }
+
+    bds = bdrv_find(drv_id);
+    if (!bds) {
+        qerror_report(QERR_DEVICE_NOT_FOUND, drv_id);
+        rc = -ENOENT;
+        goto err_exit;
+    }
+
+    if (bdrv_is_read_only(bds)) {
+        qerror_report(QERR_DEVICE_IS_READ_ONLY, drv_id);
+        rc = -EPERM;
+        goto err_exit;
+    }
+
+    bdrv_lock_medium(bds, true);
+
+    vnvram = vnvram_drv_find_by_id(drv_id);
+    if (vnvram) {
+        /* This VNVRAM was already created - success */
+        return vnvram;
+    }
+
+    vnvram = g_new0(VNVRAM, 1);
+    vnvram->drv_id = g_strdup(drv_id);
+    vnvram->bds = bds;
+
+    QLIST_INIT(&vnvram->entries_head);
+
+    rc = vnvram_drv_adjust_size(vnvram);
+    if (rc != 0) {
+        qerror_report(QERR_IO_ERROR);
+        goto err_exit;
+    }
+
+    rc = vnvram_drv_hdr_read(vnvram, (&hdr));
+    if (rc != 0) {
+        qerror_report(QERR_IO_ERROR);
+        goto err_exit;
+    }
+
+    if (vnvram_drv_hdr_is_valid(vnvram, (&hdr))) {
+        rc = vnvram_sync_from_drv(vnvram, (&hdr));
+        if (rc != 0) {
+            qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM drive sync error");
+            goto err_exit;
+        }
+
+        if (vnvram_entries_are_valid(vnvram, bdrv_getlength(vnvram->bds))) {
+            /* Sync'd VNVRAM drive looks good - success */
+            goto exit;
+        }
+    }
+
+    /* Drive data looks invalid.  It may be encrypted and we lack the key. */
+    if (bdrv_is_encrypted(vnvram->bds)) {
+        DPRINTF("%s: VNVRAM drive is encrypted\n", __func__);
+    }
+
+    /* Either fail or reformat the drive. */
+    if (fail_on_invalid) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM drive not valid");
+        rc = -EIO;
+        goto err_exit;
+    }
+
+    rc = vnvram_drv_hdr_create_empty(vnvram);
+    if (rc != 0) {
+        qerror_report(QERR_IO_ERROR);
+        goto err_exit;
+    }
+
+exit:
+    QLIST_INSERT_HEAD(&vnvrams, vnvram, list);
+    DPRINTF("%s: VNVRAM with drive '%s' created\n", __func__, vnvram->drv_id);
+
+    return vnvram;
+
+err_exit:
+    if (vnvram) {
+        g_free(vnvram->drv_id);
+    }
+    g_free(vnvram);
+    *errcode = rc;
+
+    return NULL;
+}
+
+/*
+ * Register a VNVRAM entry
+ *
+ * The entry information will not be flushed to the drive until the next
+ * write.
+ */
+int vnvram_register_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                          uint32_t max_size)
+{
+    if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
+        return -EPERM;
+    }
+
+    return vnvram_register_entry_internal(vnvram, entry_name, 0, 0, max_size);
+}
+
+/*
+ * Deregister a VNVRAM entry
+ */
+int vnvram_deregister_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name)
+{
+    VNVRAMEntry *entry;
+
+    if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
+        return -EPERM;
+    }
+
+    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+        if (!strncmp(entry->name, (char *)entry_name, sizeof(*entry_name))) {
+            if (entry->cur_size != 0) {
+                qerror_report(ERROR_CLASS_GENERIC_ERROR,
+                              "VNVRAM entry already written to disk");
+                return -EPERM;
+            }
+            QLIST_REMOVE(entry, next);
+            g_free(entry);
+            DPRINTF("%s: Deregistered VNVRAM entry '%s'\n", __func__,
+                    (char *)entry_name);
+            return 0;
+        }
+    }
+
+    qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM entry not found");
+
+    return -ENOENT;
+}
+
+/*
+ * Read a VNVRAM blob from the specified drive entry
+ */
+int vnvram_read_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                      char **blob, uint32_t *blob_size)
+{
+    int rc;
+    VNVRAMEntry *entry;
+    VNVRAMRWRequest *rwr;
+
+    *blob = NULL;
+    *blob_size = 0;
+
+    if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
+        return -EPERM;
+    }
+
+    entry = vnvram_find_entry(vnvram, entry_name);
+    if (!entry) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM entry not found");
+        return -ENOENT;
+    }
+
+    rwr = vnvram_rwrequest_init_read(vnvram, entry, blob, blob_size);
+
+    rc = vnvram_rwrequest_schedule(rwr);
+
+    g_free(rwr);
+
+    return rc;
+}
+
+/*
+ * Write a VNVRAM blob to the specified drive entry
+ */
+int vnvram_write_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                       char *blob, uint32_t blob_size)
+{
+    int rc;
+    VNVRAMEntry *entry;
+    VNVRAMRWRequest *rwr;
+
+    if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
+        return -EPERM;
+    }
+
+    entry = vnvram_find_entry(vnvram, entry_name);
+    if (!entry) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM entry not found");
+        return -ENOENT;
+    }
+
+    rwr = vnvram_rwrequest_init_write(vnvram, entry, blob, blob_size);
+
+    rc = vnvram_rwrequest_schedule(rwr);
+
+    g_free(rwr);
+
+    return rc;
+}
+
+/*
+ * Delete a VNVRAM from memory
+ */
+int vnvram_delete(VNVRAM *vnvram)
+{
+    VNVRAMEntry *entry;
+
+    if (!vnvram_exists(vnvram)) {
+        qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
+        return -EPERM;
+    }
+
+    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+        QLIST_REMOVE(entry, next);
+        g_free(entry);
+    }
+
+    QLIST_REMOVE(vnvram, list);
+
+    DPRINTF("%s: VNVRAM with drive '%s' deleted from memory\n",
+            __func__, vnvram->drv_id);
+
+    g_free(vnvram->drv_id);
+    g_free(vnvram);
+
+    return 0;
+}
diff --git a/vnvram.h b/vnvram.h
index b6d7cd7..c1055b4 100644
--- a/vnvram.h
+++ b/vnvram.h
@@ -14,9 +14,23 @@
 #ifndef _QEMU_VNVRAM_H_
 #define _QEMU_VNVRAM_H_
 
+#include <stdint.h>
+#include <stdbool.h>
+
 typedef struct VNVRAM VNVRAM;
 
 #define VNVRAM_ENTRY_NAME_LENGTH 16
 typedef char VNVRAMEntryName[VNVRAM_ENTRY_NAME_LENGTH];
 
+int vnvram_init(void);
+VNVRAM *vnvram_create(const char *drv_id, bool fail_on_invalid, int *errcode);
+int vnvram_register_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                          uint32_t max_size);
+int vnvram_deregister_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name);
+int vnvram_read_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                      char **blob, uint32_t *blob_size);
+int vnvram_write_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
+                       char *blob, uint32_t blob_size);
+int vnvram_delete(VNVRAM *vnvram);
+
 #endif
-- 
1.7.1


* [Qemu-devel] [PATCH 5/7] vnvram: VNVRAM additional debug support
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (3 preceding siblings ...)
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 4/7] vnvram: VNVRAM internal APIs Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 6/7] main: Initialize VNVRAM Corey Bryant
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Provides debug support that dumps the disk and in-memory VNVRAM
contents to stderr.

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
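Note for reviewers: both dump routines are compiled out by default.  They
are enabled by uncommenting VNVRAM_DEBUG_DUMP (and VNVRAM_DEBUG, which
provides the DPRINTF output they rely on) at the top of vnvram.c.
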
 vnvram.c |   94 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 94 insertions(+), 0 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index 357923d..9c4f64f 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -19,6 +19,7 @@
 
 /*
 #define VNVRAM_DEBUG
+#define VNVRAM_DEBUG_DUMP
 */
 
 #ifdef VNVRAM_DEBUG
@@ -134,6 +135,47 @@ static uint64_t vnvram_get_size_kb(VNVRAM *);
 /* entries.                                                          */
 /*********************************************************************/
 
+#ifdef VNVRAM_DEBUG_DUMP
+static void vnvram_drv_dump(void)
+{
+    int rc, i, num_entries;
+    VNVRAM *vnvram;
+    VNVRAMDrvHdr hdr;
+    VNVRAMDrvEntry *drv_entries = NULL;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        rc = vnvram_drv_hdr_read(vnvram, (&hdr));
+        if (rc != 0) {
+            goto err_exit;
+        }
+
+        rc = vnvram_drv_entries_get(vnvram, (&hdr), &drv_entries, &num_entries);
+        if (rc != 0) {
+            goto err_exit;
+        }
+
+        DPRINTF("VNVRAM drv dump:\n");
+        DPRINTF("  version = %"PRIu16"\n", hdr.version);
+        DPRINTF("  magic = %"PRIu32"\n", hdr.magic);
+        DPRINTF("  num_entries = %"PRIu32"\n", hdr.num_entries);
+
+        for (i = 0; i < num_entries; i++) {
+            DPRINTF("    name = %s\n", drv_entries[i].name);
+            DPRINTF("    blob_offset = %"PRIu64"\n",
+                                       drv_entries[i].blob_offset);
+            DPRINTF("    cur_size = %"PRIu32"\n", drv_entries[i].cur_size);
+            DPRINTF("    max_size = %"PRIu32"\n", drv_entries[i].max_size);
+        }
+
+        g_free(drv_entries);
+        drv_entries = NULL;
+    }
+
+err_exit:
+    g_free(drv_entries);
+}
+#endif
+
 /*
  * Big-endian conversions
  */
@@ -526,6 +568,28 @@ static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
 /* High-level VNVRAM functions that work with in-memory entries.     */
 /*********************************************************************/
 
+#ifdef VNVRAM_DEBUG_DUMP
+static void vnvram_dump(void)
+{
+    VNVRAM *vnvram;
+    const VNVRAMEntry *entry;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        DPRINTF("VNVRAM dump:\n");
+        DPRINTF("  drv_id = %s\n", vnvram->drv_id);
+        DPRINTF("  end_offset = %"PRIu64"\n", vnvram->end_offset);
+        DPRINTF("  bds = %p\n", vnvram->bds);
+
+        QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+            DPRINTF("    name = %s\n", entry->name);
+            DPRINTF("    blob_offset = %"PRIu64"\n", entry->blob_offset);
+            DPRINTF("    cur_size = %"PRIu32"\n", entry->cur_size);
+            DPRINTF("    max_size = %"PRIu32"\n", entry->max_size);
+        }
+    }
+}
+#endif
+
 /*
  * Check if the specified vnvram has been created
  */
@@ -626,6 +690,11 @@ static int vnvram_sync_from_drv(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
     int rc = 0, num_entries = 0, i;
     VNVRAMDrvEntry *drv_entries = NULL;
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
     rc = vnvram_drv_entries_get(vnvram, hdr, &drv_entries, &num_entries);
     if (rc != 0) {
         return rc;
@@ -644,6 +713,11 @@ static int vnvram_sync_from_drv(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
 
     vnvram->end_offset = vnvram_get_size(vnvram);
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
 err_exit:
     g_free(drv_entries);
 
@@ -1007,6 +1081,11 @@ int vnvram_read_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
     VNVRAMEntry *entry;
     VNVRAMRWRequest *rwr;
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
     *blob = NULL;
     *blob_size = 0;
 
@@ -1027,6 +1106,11 @@ int vnvram_read_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
 
     g_free(rwr);
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
     return rc;
 }
 
@@ -1040,6 +1124,11 @@ int vnvram_write_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
     VNVRAMEntry *entry;
     VNVRAMRWRequest *rwr;
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
     if (!vnvram_exists(vnvram)) {
         qerror_report(ERROR_CLASS_GENERIC_ERROR, "VNVRAM has not been created");
         return -EPERM;
@@ -1057,6 +1146,11 @@ int vnvram_write_entry(VNVRAM *vnvram, const VNVRAMEntryName *entry_name,
 
     g_free(rwr);
 
+#ifdef VNVRAM_DEBUG_DUMP
+    vnvram_dump();
+    vnvram_drv_dump();
+#endif
+
     return rc;
 }
 
-- 
1.7.1


* [Qemu-devel] [PATCH 6/7] main: Initialize VNVRAM
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (4 preceding siblings ...)
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 5/7] vnvram: VNVRAM additional debug support Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details Corey Bryant
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
 vl.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/vl.c b/vl.c
index 59dc0b4..5da88e6 100644
--- a/vl.c
+++ b/vl.c
@@ -171,6 +171,8 @@ int main(int argc, char **argv)
 #include "ui/qemu-spice.h"
 #include "qapi/string-input-visitor.h"
 
+#include "vnvram.h"
+
 //#define DEBUG_NET
 //#define DEBUG_SLIRP
 
@@ -4174,6 +4176,10 @@ int main(int argc, char **argv, char **envp)
         exit(1);
     }
 
+    if (vnvram_init()) {
+        exit(1);
+    }
+
 #ifdef CONFIG_TPM
     if (tpm_init() < 0) {
         exit(1);
-- 
1.7.1


* [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (5 preceding siblings ...)
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 6/7] main: Initialize VNVRAM Corey Bryant
@ 2013-05-23 17:44 ` Corey Bryant
  2013-05-23 17:59   ` Eric Blake
  2013-05-29 17:15   ` Luiz Capitulino
  2013-05-23 18:03 ` [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Anthony Liguori
  2013-05-24  9:59 ` Stefan Hajnoczi
  8 siblings, 2 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 17:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: kwolf, aliguori, stefanb, Corey Bryant, mdroth, lcapitulino,
	jschopp, stefanha

Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
---
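Sample HMP usage (not part of the patch; the values are illustrative and
mirror the QMP example below):

  (qemu) info vnvram
  vnvram0: drive-id=drive-ide0-0-0 virtual-disk-size=2000896 vnvram-size=2050
    entry0: name=this-entry cur-size=2048 max-size=21504
    entry1: name=that-entry cur-size=1024 max-size=21504
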
 hmp.c            |   32 ++++++++++++++++++++++++
 hmp.h            |    1 +
 monitor.c        |    7 +++++
 qapi-schema.json |   47 +++++++++++++++++++++++++++++++++++
 qmp-commands.hx  |   41 +++++++++++++++++++++++++++++++
 vnvram.c         |   71 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 199 insertions(+), 0 deletions(-)

diff --git a/hmp.c b/hmp.c
index 4fb76ec..a144f73 100644
--- a/hmp.c
+++ b/hmp.c
@@ -653,6 +653,38 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
     qapi_free_TPMInfoList(info_list);
 }
 
+void hmp_info_vnvram(Monitor *mon, const QDict *dict)
+{
+    VNVRAMInfoList *info_list, *info;
+    Error *err = NULL;
+    unsigned int c = 0;
+
+    info_list = qmp_query_vnvram(&err);
+    if (err) {
+        monitor_printf(mon, "VNVRAM not found\n");
+        error_free(err);
+        return;
+    }
+
+    for (info = info_list; info; info = info->next) {
+        VNVRAMInfo *ni = info->value;
+        VNVRAMEntryInfoList *einfo_list = ni->entries, *einfo;
+        unsigned int d = 0;
+        monitor_printf(mon, "vnvram%u: drive-id=%s "
+                       "virtual-disk-size=%"PRId64" vnvram-size=%"PRIu64"\n",
+                       c++, ni->drive_id, ni->virtual_disk_size,
+                       ni->vnvram_size);
+
+        for (einfo = einfo_list; einfo; einfo = einfo->next) {
+            VNVRAMEntryInfo *nei = einfo->value;
+            monitor_printf(mon, "  entry%u: name=%s cur-size=%"PRIu64" "
+                           "max-size=%"PRIu64"\n",
+                           d++, nei->name, nei->cur_size, nei->max_size);
+        }
+    }
+    qapi_free_VNVRAMInfoList(info_list);
+}
+
 void hmp_quit(Monitor *mon, const QDict *qdict)
 {
     monitor_suspend(mon);
diff --git a/hmp.h b/hmp.h
index 95fe76e..e26daf2 100644
--- a/hmp.h
+++ b/hmp.h
@@ -37,6 +37,7 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict);
 void hmp_info_pci(Monitor *mon, const QDict *qdict);
 void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
 void hmp_info_tpm(Monitor *mon, const QDict *qdict);
+void hmp_info_vnvram(Monitor *mon, const QDict *dict);
 void hmp_quit(Monitor *mon, const QDict *qdict);
 void hmp_stop(Monitor *mon, const QDict *qdict);
 void hmp_system_reset(Monitor *mon, const QDict *qdict);
diff --git a/monitor.c b/monitor.c
index 62aaebe..c10fe15 100644
--- a/monitor.c
+++ b/monitor.c
@@ -2764,6 +2764,13 @@ static mon_cmd_t info_cmds[] = {
         .mhandler.cmd = hmp_info_tpm,
     },
     {
+        .name       = "vnvram",
+        .args_type  = "",
+        .params     = "",
+        .help       = "show VNVRAM information",
+        .mhandler.cmd = hmp_info_vnvram,
+    },
+    {
         .name       = NULL,
     },
 };
diff --git a/qapi-schema.json b/qapi-schema.json
index 9302e7d..73d42d6 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -3619,3 +3619,50 @@
             '*cpuid-input-ecx': 'int',
             'cpuid-register': 'X86CPURegister32',
             'features': 'int' } }
+
+# @VNVRAMEntryInfo:
+#
+# Information about an entry in the VNVRAM.
+#
+# @name: name of the entry
+#
+# @cur-size: current size of the entry's blob in bytes
+#
+# @max-size: max size of the entry's blob in bytes
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMEntryInfo',
+  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
+
+##
+# @VNVRAMInfo:
+#
+# Information about the VNVRAM device.
+#
+# @drive-id: ID of the VNVRAM (and associated drive)
+#
+# @virtual-disk-size: Virtual size of the associated disk drive in bytes
+#
+# @vnvram-size: Size of the VNVRAM in bytes
+#
+# @entries: Array of @VNVRAMEntryInfo
+#
+# Since: 1.6
+#
+##
+{ 'type': 'VNVRAMInfo',
+  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
+           'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
+
+##
+# @query-vnvram:
+#
+# Return information about the VNVRAM devices.
+#
+# Returns: @VNVRAMInfo on success
+#
+# Since: 1.6
+##
+{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
diff --git a/qmp-commands.hx b/qmp-commands.hx
index ffd130e..56a57b7 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -2932,3 +2932,44 @@ Example:
 <- { "return": {} }
 
 EQMP
+
+    {
+        .name       = "query-vnvram",
+        .args_type  = "",
+        .mhandler.cmd_new = qmp_marshal_input_query_vnvram,
+    },
+
+SQMP
+query-vnvram
+------------
+
+Show VNVRAM info.
+
+Return a json-array of json-objects representing VNVRAMs.  Each VNVRAM
+is described by a json-object with the following:
+
+- "drive-id": ID of the VNVRAM (json-string)
+- "vitual-disk-size": Virtual size of associated disk drive in bytes (json-int)
+- "vnvram-size": Size of the VNVRAM in bytes (json-int)
+- "entries": json-array of json-objects representing entries
+
+Each entry is described by a json-object with the following:
+
+- "name": Name of the entry (json-string)
+- "cur-size": Current size of the entry's blob in bytes (json-int)
+- "max-size": Max size of the entry's blob in bytes (json-int)
+
+Example:
+
+-> { "execute": "query-vnvram" }
+<- {"return": [
+      { "vnvram-size": 2050, "virtual-disk-size": 2000896,
+        "drive-id": "drive-ide0-0-0",
+        "entries": [
+         { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
+         { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
+         { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
+      } ]
+   }
+
+EQMP
diff --git a/vnvram.c b/vnvram.c
index 9c4f64f..a5fe101 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -16,6 +16,7 @@
 #include "monitor/monitor.h"
 #include "qemu/thread.h"
 #include "sysemu/sysemu.h"
+#include "qmp-commands.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -897,6 +898,76 @@ static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
     return rc;
 }
 
+/************************ VNVRAM monitor *****************************/
+/* VNVRAM functions that support QMP and HMP commands                */
+/*********************************************************************/
+
+/*
+ * Get VNVRAM entry details for an in-memory entry
+ */
+static VNVRAMEntryInfo *vnvram_get_vnvram_entry_info(VNVRAMEntry *entry)
+{
+    VNVRAMEntryInfo *res = g_new0(VNVRAMEntryInfo, 1);
+
+    res->name = g_strndup(entry->name, sizeof(entry->name));
+    res->cur_size = entry->cur_size;
+    res->max_size = entry->max_size;
+
+    return res;
+}
+
+/*
+ * Get VNVRAM details based on the VNVRAM struct
+ */
+static VNVRAMInfo *vnvram_get_vnvram_info(VNVRAM *vnvram)
+{
+    VNVRAMEntry *entry;
+    VNVRAMEntryInfoList *info, *head = NULL, *cur = NULL;
+    VNVRAMInfo *res = g_new0(VNVRAMInfo, 1);
+
+    res->drive_id = g_strdup(vnvram->drv_id);
+    res->virtual_disk_size = bdrv_getlength(vnvram->bds);
+    res->vnvram_size = vnvram_get_size(vnvram);
+
+    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+        info = g_new0(VNVRAMEntryInfoList, 1);
+        info->value = vnvram_get_vnvram_entry_info(entry);
+
+        if (!cur) {
+            head = cur = info;
+        } else {
+            cur->next = info;
+            cur = info;
+        }
+    }
+    res->entries = head;
+
+    return res;
+}
+
+/*
+ * Get VNVRAM data from the in-memory VNVRAM struct and entries
+ */
+VNVRAMInfoList *qmp_query_vnvram(Error **errp)
+{
+    VNVRAM *vnvram;
+    VNVRAMInfoList *info, *head = NULL, *cur = NULL;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        info = g_new0(VNVRAMInfoList, 1);
+        info->value = vnvram_get_vnvram_info(vnvram);
+
+        if (!cur) {
+            head = cur = info;
+        } else {
+            cur->next = info;
+            cur = info;
+        }
+    }
+
+    return head;
+}
+
 /************************* VNVRAM APIs *******************************/
 /* VNVRAM APIs that can be used by QEMU to provide persistent storage*/
 /*********************************************************************/
-- 
1.7.1


* Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details Corey Bryant
@ 2013-05-23 17:59   ` Eric Blake
  2013-05-23 18:43     ` Corey Bryant
  2013-05-29 17:15   ` Luiz Capitulino
  1 sibling, 1 reply; 26+ messages in thread
From: Eric Blake @ 2013-05-23 17:59 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, aliguori, stefanb, qemu-devel, mdroth, lcapitulino,
	jschopp, stefanha

[-- Attachment #1: Type: text/plain, Size: 2255 bytes --]

On 05/23/2013 11:44 AM, Corey Bryant wrote:
> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
> ---

Might help to list a sample HMP or QMP usage in the commit message.
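
For illustration, a sample HMP session might look like the following; this is pieced together from the hmp_info_vnvram() format strings in this patch and the values in the QMP example below, not captured from a real run:

(qemu) info vnvram
vnvram0: drive-id=drive-ide0-0-0 virtual-disk-size=2000896 vnvram-size=2050
  entry0: name=this-entry cur-size=2048 max-size=21504
  entry1: name=that-entry cur-size=1024 max-size=21504
  entry2: name=other-entry cur-size=4096 max-size=41472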

> +++ b/qapi-schema.json
> @@ -3619,3 +3619,50 @@
>              '*cpuid-input-ecx': 'int',
>              'cpuid-register': 'X86CPURegister32',
>              'features': 'int' } }
> +
> +# @VNVRAMEntryInfo:
> +#
> +# Information about an entry in the VNVRAM.
> +#
> +# @name: name of the entry
> +#
> +# @cur-size: current size of the entry's blob in bytes
> +#
> +# @max-size: max size of the entry's blob in bytes
> +#
> +# Since: 1.6
> +#
> +##
> +{ 'type': 'VNVRAMEntryInfo',
> +  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }

No trailing commas in JSON.  :(
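
For reference, the same declaration with the trailing comma dropped would read:

{ 'type': 'VNVRAMEntryInfo',
  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int'} }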

> +
> +##
> +# @VNVRAMInfo:
> +#
> +# Information about the VNVRAM device.
> +#
> +# @drive-id: ID of the VNVRAM (and associated drive)
> +#
> +# @virtual-disk-size: Virtual size of the associated disk drive in bytes
> +#
> +# @vnvram-size: Size of the VNVRAM in bytes
> +#
> +# @entries: Array of @VNVRAMEntryInfo
> +#
> +# Since: 1.6
> +#
> +##
> +{ 'type': 'VNVRAMInfo',
> +  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
> +           'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
> +
> +##
> +# @query-vnvram:
> +#
> +# Return information about the VNVRAM devices.
> +#
> +# Returns: @VNVRAMInfo on success
> +#
> +# Since: 1.6
> +##
> +{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }

Other than that, this looks fine from an interface point of view.  I
haven't closely reviewed code, though.

> +
> +Example:
> +
> +-> { "execute": "query-vnvram" }
> +<- {"return": [
> +      { "vnvram-size": 2050, "virtual-disk-size": 2000896,
> +        "drive-id": "drive-ide0-0-0",
> +        "entries": [
> +         { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
> +         { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
> +         { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
> +      } ]
> +   }

Looks reasonable.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 621 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (6 preceding siblings ...)
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details Corey Bryant
@ 2013-05-23 18:03 ` Anthony Liguori
  2013-05-23 18:41   ` Corey Bryant
  2013-05-24  9:59 ` Stefan Hajnoczi
  8 siblings, 1 reply; 26+ messages in thread
From: Anthony Liguori @ 2013-05-23 18:03 UTC (permalink / raw)
  To: Corey Bryant, qemu-devel
  Cc: kwolf, stefanb, mdroth, lcapitulino, jschopp, stefanha

Corey Bryant <coreyb@linux.vnet.ibm.com> writes:

> This patch series provides VNVRAM persistent storage support that
> QEMU can use internally.  The initial target user will be a software
> vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
> reboot/migrate and retain the keys.
>
> This support uses QEMU's block driver to provide persistent storage
> by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
> drive image is provided with the -drive command line option just like
> any other drive image and the vnvram_create() API will find it.
>
> The APIs allow for VNVRAM entries to be registered, one at a time,
> each with a maximum blob size.  Entry blobs can then be read/written
> from/to an entry on the drive.  Here's an example of usage:

I still don't get why this needs to exist.  This doesn't map to any
hardware concept I know of.

Why can't the vTPM manage on its own how it stores blobs in its flash
memory?  I think we're adding an unneeded layer of abstraction here.

Regards,

Anthony Liguori

>
> VNVRAM *vnvram;
> int errcode
> const VNVRAMEntryName entry_name;
> const char *blob_w = "blob data";
> char *blob_r;
> uint32_t blob_r_size;
>
> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
> strcpy((char *)entry_name, "first-entry");
> vnvram_register_entry(vnvram, &entry_name, 1024);
> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
> vnvram_delete(vnvram);
>
> Thanks,
> Corey
>
> Corey Bryant (7):
>   vnvram: VNVRAM bdrv support
>   vnvram: VNVRAM in-memory support
>   vnvram: VNVRAM bottom-half r/w scheduling support
>   vnvram: VNVRAM internal APIs
>   vnvram: VNVRAM additional debug support
>   main: Initialize VNVRAM
>   monitor: QMP/HMP support for retrieving VNVRAM details
>
>  Makefile.objs    |    2 +
>  hmp.c            |   32 ++
>  hmp.h            |    1 +
>  monitor.c        |    7 +
>  qapi-schema.json |   47 ++
>  qmp-commands.hx  |   41 ++
>  vl.c             |    6 +
>  vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  vnvram.h         |   36 ++
>  9 files changed, 1426 insertions(+), 0 deletions(-)
>  create mode 100644 vnvram.c
>  create mode 100644 vnvram.h

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-23 18:03 ` [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Anthony Liguori
@ 2013-05-23 18:41   ` Corey Bryant
  2013-05-23 19:15     ` Anthony Liguori
  0 siblings, 1 reply; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 18:41 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: kwolf, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino



On 05/23/2013 02:03 PM, Anthony Liguori wrote:
> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>
>> This patch series provides VNVRAM persistent storage support that
>> QEMU can use internally.  The initial target user will be a software
>> vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
>> reboot/migrate and retain the keys.
>>
>> This support uses QEMU's block driver to provide persistent storage
>> by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
>> drive image is provided with the -drive command line option just like
>> any other drive image and the vnvram_create() API will find it.
>>
>> The APIs allow for VNVRAM entries to be registered, one at a time,
>> each with a maximum blob size.  Entry blobs can then be read/written
>> from/to an entry on the drive.  Here's an example of usage:
>
> I still don't get why this needs to exist.  This doesn't map to any
> hardware concept I know of.
>
> Why can't the vTPM manage on its own how it stores blobs in its flash
> memory?  I think we're adding an unneeded layer of abstraction here.
>
> Regards,
>
> Anthony Liguori
>

One of the difficulties in virtualizing a TPM is that it doesn't support 
SR-IOV.  So the existing passthrough vTPM can only be used by one guest. 
  We're planning to provide a software emulated vTPM that uses libtpms 
and it needs to store blobs somewhere that is persistent.  We can't 
store blobs in the host TPM's hardware NVRAM.  So we have to virtualize 
it in software.  And we figured we'd provide a persistent storage 
mechanism that other parts of QEMU could use rather than limit it to 
just the vTPM's use.

-- 
Regards,
Corey Bryant

>>
>> VNVRAM *vnvram;
>> int errcode
>> const VNVRAMEntryName entry_name;
>> const char *blob_w = "blob data";
>> char *blob_r;
>> uint32_t blob_r_size;
>>
>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>> strcpy((char *)entry_name, "first-entry");
>> vnvram_register_entry(vnvram, &entry_name, 1024);
>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
>> vnvram_delete(vnvram);
>>
>> Thanks,
>> Corey
>>
>> Corey Bryant (7):
>>    vnvram: VNVRAM bdrv support
>>    vnvram: VNVRAM in-memory support
>>    vnvram: VNVRAM bottom-half r/w scheduling support
>>    vnvram: VNVRAM internal APIs
>>    vnvram: VNVRAM additional debug support
>>    main: Initialize VNVRAM
>>    monitor: QMP/HMP support for retrieving VNVRAM details
>>
>>   Makefile.objs    |    2 +
>>   hmp.c            |   32 ++
>>   hmp.h            |    1 +
>>   monitor.c        |    7 +
>>   qapi-schema.json |   47 ++
>>   qmp-commands.hx  |   41 ++
>>   vl.c             |    6 +
>>   vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   vnvram.h         |   36 ++
>>   9 files changed, 1426 insertions(+), 0 deletions(-)
>>   create mode 100644 vnvram.c
>>   create mode 100644 vnvram.h
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details
  2013-05-23 17:59   ` Eric Blake
@ 2013-05-23 18:43     ` Corey Bryant
  0 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-23 18:43 UTC (permalink / raw)
  To: Eric Blake
  Cc: kwolf, aliguori, stefanb, qemu-devel, mdroth, lcapitulino,
	jschopp, stefanha



On 05/23/2013 01:59 PM, Eric Blake wrote:
> On 05/23/2013 11:44 AM, Corey Bryant wrote:
>> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
>> ---
>
> Might help to list a sample HMP or QMP usage in the commit message.
>
>> +++ b/qapi-schema.json
>> @@ -3619,3 +3619,50 @@
>>               '*cpuid-input-ecx': 'int',
>>               'cpuid-register': 'X86CPURegister32',
>>               'features': 'int' } }
>> +
>> +# @VNVRAMEntryInfo:
>> +#
>> +# Information about an entry in the VNVRAM.
>> +#
>> +# @name: name of the entry
>> +#
>> +# @cur-size: current size of the entry's blob in bytes
>> +#
>> +# @max-size: max size of the entry's blob in bytes
>> +#
>> +# Since: 1.6
>> +#
>> +##
>> +{ 'type': 'VNVRAMEntryInfo',
>> +  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
>
> No trailing commas in JSON.  :(
>

I'll fix that.

>> +
>> +##
>> +# @VNVRAMInfo:
>> +#
>> +# Information about the VNVRAM device.
>> +#
>> +# @drive-id: ID of the VNVRAM (and associated drive)
>> +#
>> +# @virtual-disk-size: Virtual size of the associated disk drive in bytes
>> +#
>> +# @vnvram-size: Size of the VNVRAM in bytes
>> +#
>> +# @entries: Array of @VNVRAMEntryInfo
>> +#
>> +# Since: 1.6
>> +#
>> +##
>> +{ 'type': 'VNVRAMInfo',
>> +  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
>> +           'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
>> +
>> +##
>> +# @query-vnvram:
>> +#
>> +# Return information about the VNVRAM devices.
>> +#
>> +# Returns: @VNVRAMInfo on success
>> +#
>> +# Since: 1.6
>> +##
>> +{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
>
> Other than that, this looks fine from an interface point of view.  I
> haven't closely reviewed code, though.
>
>> +
>> +Example:
>> +
>> +-> { "execute": "query-vnvram" }
>> +<- {"return": [
>> +      { "vnvram-size": 2050, "virtual-disk-size": 2000896,
>> +        "drive-id": "drive-ide0-0-0",
>> +        "entries": [
>> +         { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
>> +         { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
>> +         { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
>> +      } ]
>> +   }
>
> Looks reasonable.
>

Thanks for the review!

-- 
Regards,
Corey Bryant

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-23 18:41   ` Corey Bryant
@ 2013-05-23 19:15     ` Anthony Liguori
  2013-05-24 15:27       ` Corey Bryant
  0 siblings, 1 reply; 26+ messages in thread
From: Anthony Liguori @ 2013-05-23 19:15 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino

Corey Bryant <coreyb@linux.vnet.ibm.com> writes:

> On 05/23/2013 02:03 PM, Anthony Liguori wrote:
>> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>>
> One of the difficulties in virtualizing a TPM is that it doesn't support 
> SR-IOV.  So the existing passthrough vTPM can only be used by one guest. 
>   We're planning to provide a software emulated vTPM that uses libtpms 
> and it needs to store blobs somewhere that is persistent.  We can't 
> store blobs in the host TPM's hardware NVRAM.  So we have to virtualize 
> it in software.  And we figured we'd provide a persistent storage 
> mechanism that other parts of QEMU could use rather than limit it to 
> just the vTPM's use.

I think you are misunderstanding my feedback.

See http://mid.gmane.org/87ehf03dgw.fsf@codemonkey.ws

Regards,

Anthony Liguori

>
> -- 
> Regards,
> Corey Bryant
>
>>>
>>> VNVRAM *vnvram;
>>> int errcode
>>> const VNVRAMEntryName entry_name;
>>> const char *blob_w = "blob data";
>>> char *blob_r;
>>> uint32_t blob_r_size;
>>>
>>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>>> strcpy((char *)entry_name, "first-entry");
>>> vnvram_register_entry(vnvram, &entry_name, 1024);
>>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
>>> vnvram_delete(vnvram);
>>>
>>> Thanks,
>>> Corey
>>>
>>> Corey Bryant (7):
>>>    vnvram: VNVRAM bdrv support
>>>    vnvram: VNVRAM in-memory support
>>>    vnvram: VNVRAM bottom-half r/w scheduling support
>>>    vnvram: VNVRAM internal APIs
>>>    vnvram: VNVRAM additional debug support
>>>    main: Initialize VNVRAM
>>>    monitor: QMP/HMP support for retrieving VNVRAM details
>>>
>>>   Makefile.objs    |    2 +
>>>   hmp.c            |   32 ++
>>>   hmp.h            |    1 +
>>>   monitor.c        |    7 +
>>>   qapi-schema.json |   47 ++
>>>   qmp-commands.hx  |   41 ++
>>>   vl.c             |    6 +
>>>   vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>   vnvram.h         |   36 ++
>>>   9 files changed, 1426 insertions(+), 0 deletions(-)
>>>   create mode 100644 vnvram.c
>>>   create mode 100644 vnvram.h
>>
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
                   ` (7 preceding siblings ...)
  2013-05-23 18:03 ` [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Anthony Liguori
@ 2013-05-24  9:59 ` Stefan Hajnoczi
  2013-05-24 12:13   ` Stefan Berger
  8 siblings, 1 reply; 26+ messages in thread
From: Stefan Hajnoczi @ 2013-05-24  9:59 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, aliguori, stefanb, qemu-devel, mdroth, lcapitulino,
	jschopp, stefanha

On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
> This patch series provides VNVRAM persistent storage support that
> QEMU can use internally.  The initial target user will be a software
> vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
> reboot/migrate and retain the keys.
> 
> This support uses QEMU's block driver to provide persistent storage
> by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
> drive image is provided with the -drive command line option just like
> any other drive image and the vnvram_create() API will find it.
> 
> The APIs allow for VNVRAM entries to be registered, one at a time,
> each with a maximum blob size.  Entry blobs can then be read/written
> from/to an entry on the drive.  Here's an example of usage:
> 
> VNVRAM *vnvram;
> int errcode
> const VNVRAMEntryName entry_name;
> const char *blob_w = "blob data";
> char *blob_r;
> uint32_t blob_r_size;
> 
> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
> strcpy((char *)entry_name, "first-entry");

VNVRAMEntryName is very prone to buffer overflow.  I hope real code
doesn't use strcpy().  The cast is ugly, please don't hide the type.
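
For what it's worth, a bounded copy along these lines would avoid the overflow risk; this is only a sketch, it assumes VNVRAMEntryName stays a fixed-size char array, and it uses QEMU's existing pstrcpy() helper instead of strcpy():

VNVRAMEntryName entry_name;

/* never writes past sizeof(entry_name) and always NUL-terminates */
pstrcpy(entry_name, sizeof(entry_name), "first-entry");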

> vnvram_register_entry(vnvram, &entry_name, 1024);
> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);

These are synchronous functions.  If I/O is involved then this is a
problem: QEMU will be blocked waiting for host I/O to complete and the
big QEMU lock is held.  This can cause poor guest interactivity and poor
scalability because vcpus cannot make progress, neither can the QEMU
monitor respond.

Stefan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-24  9:59 ` Stefan Hajnoczi
@ 2013-05-24 12:13   ` Stefan Berger
  2013-05-24 12:36     ` Stefan Hajnoczi
  0 siblings, 1 reply; 26+ messages in thread
From: Stefan Berger @ 2013-05-24 12:13 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kwolf, aliguori, mdroth, Corey Bryant, qemu-devel, lcapitulino,
	jschopp, stefanha

On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
> On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
>> This patch series provides VNVRAM persistent storage support that
>> QEMU can use internally.  The initial target user will be a software
>> vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
>> reboot/migrate and retain the keys.
>>
>> This support uses QEMU's block driver to provide persistent storage
>> by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
>> drive image is provided with the -drive command line option just like
>> any other drive image and the vnvram_create() API will find it.
>>
>> The APIs allow for VNVRAM entries to be registered, one at a time,
>> each with a maximum blob size.  Entry blobs can then be read/written
>> from/to an entry on the drive.  Here's an example of usage:
>>
>> VNVRAM *vnvram;
>> int errcode
>> const VNVRAMEntryName entry_name;
>> const char *blob_w = "blob data";
>> char *blob_r;
>> uint32_t blob_r_size;
>>
>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>> strcpy((char *)entry_name, "first-entry");
> VNVRAMEntryName is very prone to buffer overflow.  I hope real code
> doesn't use strcpy().  The cast is ugly, please don't hide the type.
>
>> vnvram_register_entry(vnvram, &entry_name, 1024);
>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
> These are synchronous functions.  If I/O is involved then this is a
> problem: QEMU will be blocked waiting for host I/O to complete and the
> big QEMU lock is held.  This can cause poor guest interactivity and poor
> scalability because vcpus cannot make progress, neither can the QEMU
> monitor respond.

The vTPM is going to run as a thread and will have to write state blobs 
into a bdrv. The above functions will typically be called from this 
thread. When I originally wrote the code, the vTPM thread could not write 
the blobs into bdrv directly, so I had to resort to sending a message to 
the main QEMU thread to write the data to the bdrv. How else could we do 
this?

    Stefan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-24 12:13   ` Stefan Berger
@ 2013-05-24 12:36     ` Stefan Hajnoczi
  2013-05-24 15:39       ` Corey Bryant
  0 siblings, 1 reply; 26+ messages in thread
From: Stefan Hajnoczi @ 2013-05-24 12:36 UTC (permalink / raw)
  To: Stefan Berger
  Cc: kwolf, aliguori, Stefan Hajnoczi, Corey Bryant, qemu-devel,
	mdroth, jschopp, lcapitulino

On Fri, May 24, 2013 at 08:13:27AM -0400, Stefan Berger wrote:
> On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
> >On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
> >>This patch series provides VNVRAM persistent storage support that
> >>QEMU can use internally.  The initial target user will be a software
> >>vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
> >>reboot/migrate and retain the keys.
> >>
> >>This support uses QEMU's block driver to provide persistent storage
> >>by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
> >>drive image is provided with the -drive command line option just like
> >>any other drive image and the vnvram_create() API will find it.
> >>
> >>The APIs allow for VNVRAM entries to be registered, one at a time,
> >>each with a maximum blob size.  Entry blobs can then be read/written
> >>from/to an entry on the drive.  Here's an example of usage:
> >>
> >>VNVRAM *vnvram;
> >>int errcode
> >>const VNVRAMEntryName entry_name;
> >>const char *blob_w = "blob data";
> >>char *blob_r;
> >>uint32_t blob_r_size;
> >>
> >>vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
> >>strcpy((char *)entry_name, "first-entry");
> >VNVRAMEntryName is very prone to buffer overflow.  I hope real code
> >doesn't use strcpy().  The cast is ugly, please don't hide the type.
> >
> >>vnvram_register_entry(vnvram, &entry_name, 1024);
> >>vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
> >>vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
> >These are synchronous functions.  If I/O is involved then this is a
> >problem: QEMU will be blocked waiting for host I/O to complete and the
> >big QEMU lock is held.  This can cause poor guest interactivity and poor
> >scalability because vcpus cannot make progress, neither can the QEMU
> >monitor respond.
> 
> The vTPM is going to run as a thread and will have to write state
> blobs into a bdrv. The above functions will typically be called from
> this thread. When I originally wrote the code, the vTPM thread could
> not write the blobs into bdrv directly, so I had to resort to
> sending a message to the main QEMU thread to write the data to the
> bdrv. How else could we do this?

How else: use asynchronous APIs like bdrv_aio_writev() or the coroutine
versions (which eliminate the need for callbacks) like bdrv_co_writev().

I'm preparing patches that allow the QEMU block layer to be used safely
outside the QEMU global mutex.  Once this is possible it would be okay
to use synchronous methods.
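
To make that concrete, a minimal coroutine-based sketch could look like the code below; the vnvram_co_write() name and the sector-granular layout are invented for illustration, and the buffer is assumed to cover whole 512-byte sectors:

#include "block/block.h"        /* bdrv_co_writev(), BDRV_SECTOR_SIZE */
#include "block/coroutine.h"    /* coroutine_fn */
#include "qemu/iov.h"           /* QEMUIOVector, qemu_iovec_init_external() */

/* Must be entered in coroutine context with the QEMU global mutex held. */
static int coroutine_fn vnvram_co_write(BlockDriverState *bs, int64_t sector_num,
                                        void *buf, int nb_sectors)
{
    QEMUIOVector qiov;
    struct iovec iov = {
        .iov_base = buf,
        .iov_len  = (size_t)nb_sectors * BDRV_SECTOR_SIZE,
    };

    qemu_iovec_init_external(&qiov, &iov, 1);
    /* yields while the I/O is in flight; no completion callback needed */
    return bdrv_co_writev(bs, sector_num, nb_sectors, &qiov);
}

The coroutine itself would still be created and entered from code that holds the global mutex, e.g. from the bottom half in this series, via qemu_coroutine_create()/qemu_coroutine_enter().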

Stefan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support Corey Bryant
@ 2013-05-24 13:06   ` Kevin Wolf
  2013-05-24 15:33     ` Corey Bryant
  0 siblings, 1 reply; 26+ messages in thread
From: Kevin Wolf @ 2013-05-24 13:06 UTC (permalink / raw)
  To: Corey Bryant
  Cc: aliguori, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino

On 23.05.2013 at 19:44, Corey Bryant wrote:
> Provides low-level VNVRAM functionality that reads and writes data,
> such as an entry's binary blob, to a drive image using the block
> driver.
> 
> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>

> +/*
> + * Increase the drive size if it's too small to fit the VNVRAM data
> + */
> +static int vnvram_drv_adjust_size(VNVRAM *vnvram)
> +{
> +    int rc = 0;
> +    int64_t needed_size;
> +
> +    needed_size = 0;
> +
> +    if (bdrv_getlength(vnvram->bds) < needed_size) {
> +        rc = bdrv_truncate(vnvram->bds, needed_size);
> +        if (rc != 0) {
> +            DPRINTF("%s: VNVRAM drive too small\n", __func__);
> +        }
> +    }
> +
> +    return rc;
> +}

This function doesn't make a whole lot of sense. It truncates the file
to size 0 if and only if bdrv_getlength() returns an error.

> +
> +/*
> + * Write a header to the drive with entry count of zero
> + */
> +static int vnvram_drv_hdr_create_empty(VNVRAM *vnvram)
> +{
> +    VNVRAMDrvHdr hdr;
> +
> +    hdr.version = VNVRAM_CURRENT_VERSION;
> +    hdr.magic = VNVRAM_MAGIC;
> +    hdr.num_entries = 0;
> +
> +    vnvram_drv_hdr_cpu_to_be((&hdr));
> +
> +    if (bdrv_pwrite(vnvram->bds, 0, (&hdr), sizeof(hdr)) != sizeof(hdr)) {
> +        DPRINTF("%s: Write of header to drive failed\n", __func__);
> +        return -EIO;
> +    }
> +
> +    vnvram->end_offset = sizeof(VNVRAMDrvHdr);
> +
> +    return 0;
> +}
> +
> +/*
> + * Read the header from the drive
> + */
> +static int vnvram_drv_hdr_read(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
> +{
> +    if (bdrv_pread(vnvram->bds, 0, hdr, sizeof(*hdr)) != sizeof(*hdr)) {
> +        DPRINTF("%s: Read of header from drive failed\n", __func__);
> +        return -EIO;
> +    }

Why do you turn all errors into -EIO instead of returning the real error
code? (More instances of the same thing follow)

> +
> +    vnvram_drv_hdr_be_to_cpu(hdr);
> +
> +    return 0;
> +}
> +}

Kevin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-23 19:15     ` Anthony Liguori
@ 2013-05-24 15:27       ` Corey Bryant
  2013-05-29 13:34         ` Anthony Liguori
  0 siblings, 1 reply; 26+ messages in thread
From: Corey Bryant @ 2013-05-24 15:27 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: kwolf, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino



On 05/23/2013 03:15 PM, Anthony Liguori wrote:
> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>
>> On 05/23/2013 02:03 PM, Anthony Liguori wrote:
>>> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>>>
>> One of the difficulties in virtualizing a TPM is that it doesn't support
>> SR-IOV.  So the existing passthrough vTPM can only be used by one guest.
>>    We're planning to provide a software emulated vTPM that uses libtpms
>> and it needs to store blobs somewhere that is persistent.  We can't
>> store blobs in the host TPM's hardware NVRAM.  So we have to virtualize
>> it in software.  And we figured we'd provide a persistent storage
>> mechanism that other parts of QEMU could use rather than limit it to
>> just the vTPM's use.
>
> I think you are misunderstanding my feedback.
>
> See http://mid.gmane.org/87ehf03dgw.fsf@codemonkey.ws
>

It looks like we'll be able to follow what you said in that thread, 
specifically:

"Just make the TPM have a DRIVE property, drop all notion of
NVRAM/blobstore, and use fixed offsets into the BlockDriverState for
each blob."

This will limit the functionality to only the vTPM, but it sounds like 
that's desired.  Also it looks like vTPM 1.2 will only have 4 blobs and 
we'll know their max sizes, so we should be able to use fixed offsets 
for them.  This will simplify the code quite a bit.
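
For example, the layout could boil down to something like this; the per-blob maximum size below is a placeholder until the real vTPM 1.2 blob sizes are pinned down:

#include <assert.h>
#include <stdint.h>

#define VTPM_NUM_BLOBS      4             /* blob count mentioned above */
#define VTPM_BLOB_MAX_SIZE  (32 * 1024)   /* placeholder per-blob maximum */

/* Fixed offset of blob n into the BlockDriverState: slot n * slot size. */
static inline int64_t vtpm_blob_offset(unsigned int blob_idx)
{
    assert(blob_idx < VTPM_NUM_BLOBS);
    return (int64_t)blob_idx * VTPM_BLOB_MAX_SIZE;
}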

I assume we'll still need to use a bottom-half to send read/write 
requests to the main thread.  And from the sounds of it the reads/writes 
will need to be asynchronous.

Does this sound ok?

-- 
Regards,
Corey Bryant



>>>>
>>>> VNVRAM *vnvram;
>>>> int errcode
>>>> const VNVRAMEntryName entry_name;
>>>> const char *blob_w = "blob data";
>>>> char *blob_r;
>>>> uint32_t blob_r_size;
>>>>
>>>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>>>> strcpy((char *)entry_name, "first-entry");
>>>> vnvram_register_entry(vnvram, &entry_name, 1024);
>>>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>>>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
>>>> vnvram_delete(vnvram);
>>>>
>>>> Thanks,
>>>> Corey
>>>>
>>>> Corey Bryant (7):
>>>>     vnvram: VNVRAM bdrv support
>>>>     vnvram: VNVRAM in-memory support
>>>>     vnvram: VNVRAM bottom-half r/w scheduling support
>>>>     vnvram: VNVRAM internal APIs
>>>>     vnvram: VNVRAM additional debug support
>>>>     main: Initialize VNVRAM
>>>>     monitor: QMP/HMP support for retrieving VNVRAM details
>>>>
>>>>    Makefile.objs    |    2 +
>>>>    hmp.c            |   32 ++
>>>>    hmp.h            |    1 +
>>>>    monitor.c        |    7 +
>>>>    qapi-schema.json |   47 ++
>>>>    qmp-commands.hx  |   41 ++
>>>>    vl.c             |    6 +
>>>>    vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>    vnvram.h         |   36 ++
>>>>    9 files changed, 1426 insertions(+), 0 deletions(-)
>>>>    create mode 100644 vnvram.c
>>>>    create mode 100644 vnvram.h
>>>
>>>
>
>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support
  2013-05-24 13:06   ` Kevin Wolf
@ 2013-05-24 15:33     ` Corey Bryant
  2013-05-24 15:37       ` Kevin Wolf
  0 siblings, 1 reply; 26+ messages in thread
From: Corey Bryant @ 2013-05-24 15:33 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: aliguori, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino



On 05/24/2013 09:06 AM, Kevin Wolf wrote:
> On 23.05.2013 at 19:44, Corey Bryant wrote:
>> Provides low-level VNVRAM functionality that reads and writes data,
>> such as an entry's binary blob, to a drive image using the block
>> driver.
>>
>> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
>
>> +/*
>> + * Increase the drive size if it's too small to fit the VNVRAM data
>> + */
>> +static int vnvram_drv_adjust_size(VNVRAM *vnvram)
>> +{
>> +    int rc = 0;
>> +    int64_t needed_size;
>> +
>> +    needed_size = 0;
>> +
>> +    if (bdrv_getlength(vnvram->bds) < needed_size) {
>> +        rc = bdrv_truncate(vnvram->bds, needed_size);
>> +        if (rc != 0) {
>> +            DPRINTF("%s: VNVRAM drive too small\n", __func__);
>> +        }
>> +    }
>> +
>> +    return rc;
>> +}
>
> This function doesn't make a whole lot of sense. It truncates the file
> to size 0 if and only if bdrv_getlength() returns an error.
>

There's a later patch that adds a "get size" function and changes the 
initialization of needed_size to the actual size needed to store VNVRAM 
data.  Anyway I should probably just include that change in this patch. 
  I think I'll still need this function or part of it with the new 
simplified approach that it looks like we're going to take.

>> +
>> +/*
>> + * Write a header to the drive with entry count of zero
>> + */
>> +static int vnvram_drv_hdr_create_empty(VNVRAM *vnvram)
>> +{
>> +    VNVRAMDrvHdr hdr;
>> +
>> +    hdr.version = VNVRAM_CURRENT_VERSION;
>> +    hdr.magic = VNVRAM_MAGIC;
>> +    hdr.num_entries = 0;
>> +
>> +    vnvram_drv_hdr_cpu_to_be((&hdr));
>> +
>> +    if (bdrv_pwrite(vnvram->bds, 0, (&hdr), sizeof(hdr)) != sizeof(hdr)) {
>> +        DPRINTF("%s: Write of header to drive failed\n", __func__);
>> +        return -EIO;
>> +    }
>> +
>> +    vnvram->end_offset = sizeof(VNVRAMDrvHdr);
>> +
>> +    return 0;
>> +}
>> +
>> +/*
>> + * Read the header from the drive
>> + */
>> +static int vnvram_drv_hdr_read(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
>> +{
>> +    if (bdrv_pread(vnvram->bds, 0, hdr, sizeof(*hdr)) != sizeof(*hdr)) {
>> +        DPRINTF("%s: Read of header from drive failed\n", __func__);
>> +        return -EIO;
>> +    }
>
> Why do you turn all errors into -EIO instead of returning the real error
> code? (More instances of the same thing follow)
>

Good point, there's no reason to mask the original error code.

>> +
>> +    vnvram_drv_hdr_be_to_cpu(hdr);
>> +
>> +    return 0;
>> +}
>> +}
>
> Kevin
>
>
>

-- 
Regards,
Corey Bryant

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support
  2013-05-24 15:33     ` Corey Bryant
@ 2013-05-24 15:37       ` Kevin Wolf
  2013-05-24 15:47         ` Corey Bryant
  0 siblings, 1 reply; 26+ messages in thread
From: Kevin Wolf @ 2013-05-24 15:37 UTC (permalink / raw)
  To: Corey Bryant
  Cc: aliguori, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino

On 24.05.2013 at 17:33, Corey Bryant wrote:
> 
> 
> On 05/24/2013 09:06 AM, Kevin Wolf wrote:
> >On 23.05.2013 at 19:44, Corey Bryant wrote:
> >>Provides low-level VNVRAM functionality that reads and writes data,
> >>such as an entry's binary blob, to a drive image using the block
> >>driver.
> >>
> >>Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
> >
> >>+/*
> >>+ * Increase the drive size if it's too small to fit the VNVRAM data
> >>+ */
> >>+static int vnvram_drv_adjust_size(VNVRAM *vnvram)
> >>+{
> >>+    int rc = 0;
> >>+    int64_t needed_size;
> >>+
> >>+    needed_size = 0;
> >>+
> >>+    if (bdrv_getlength(vnvram->bds) < needed_size) {
> >>+        rc = bdrv_truncate(vnvram->bds, needed_size);
> >>+        if (rc != 0) {
> >>+            DPRINTF("%s: VNVRAM drive too small\n", __func__);
> >>+        }
> >>+    }
> >>+
> >>+    return rc;
> >>+}
> >
> >This function doesn't make a whole lot of sense. It truncates the file
> >to size 0 if and only if bdrv_getlength() returns an error.
> >
> 
> There's a later patch that adds a "get size" function and changes
> the initialization of needed_size to the actual size needed to store
> VNVRAM data.  Anyway I should probably just include that change in
> this patch.  I think I'll still need this function or part of it
> with the new simplified approach that it looks like we're going to
> take.

Okay. But even then, do you really want to truncate on errors?

Kevin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-24 12:36     ` Stefan Hajnoczi
@ 2013-05-24 15:39       ` Corey Bryant
  2013-05-27  8:40         ` Stefan Hajnoczi
  0 siblings, 1 reply; 26+ messages in thread
From: Corey Bryant @ 2013-05-24 15:39 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: kwolf, aliguori, Stefan Berger, Stefan Hajnoczi, mdroth,
	qemu-devel, jschopp, lcapitulino



On 05/24/2013 08:36 AM, Stefan Hajnoczi wrote:
> On Fri, May 24, 2013 at 08:13:27AM -0400, Stefan Berger wrote:
>> On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
>>> On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
>>>> This patch series provides VNVRAM persistent storage support that
>>>> QEMU can use internally.  The initial target user will be a software
>>>> vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
>>>> reboot/migrate and retain the keys.
>>>>
>>>> This support uses QEMU's block driver to provide persistent storage
>>>> by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
>>>> drive image is provided with the -drive command line option just like
>>>> any other drive image and the vnvram_create() API will find it.
>>>>
>>>> The APIs allow for VNVRAM entries to be registered, one at a time,
>>>> each with a maximum blob size.  Entry blobs can then be read/written
>>>> from/to an entry on the drive.  Here's an example of usage:
>>>>
>>>> VNVRAM *vnvram;
>>>> int errcode
>>>> const VNVRAMEntryName entry_name;
>>>> const char *blob_w = "blob data";
>>>> char *blob_r;
>>>> uint32_t blob_r_size;
>>>>
>>>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>>>> strcpy((char *)entry_name, "first-entry");
>>> VNVRAMEntryName is very prone to buffer overflow.  I hope real code
>>> doesn't use strcpy().  The cast is ugly, please don't hide the type.
>>>
>>>> vnvram_register_entry(vnvram, &entry_name, 1024);
>>>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>>>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
>>> These are synchronous functions.  If I/O is involved then this is a
>>> problem: QEMU will be blocked waiting for host I/O to complete and the
>>> big QEMU lock is held.  This can cause poor guest interactivity and poor
>>> scalability because vcpus cannot make progress, neither can the QEMU
>>> monitor respond.
>>
>> The vTPM is going to run as a thread and will have to write state
>> blobs into a bdrv. The above functions will typically be called from
>> this thead. When I originally wrote the code, the vTPM thread could
>> not write the blobs into bdrv directly, so I had to resort to
>> sending a message to the main QEMU thread to write the data to the
>> bdrv. How else could we do this?
>
> How else: use asynchronous APIs like bdrv_aio_writev() or the coroutine
> versions (which eliminate the need for callbacks) like bdrv_co_writev().
>
> I'm preparing patches that allow the QEMU block layer to be used safely
> outside the QEMU global mutex.  Once this is possible it would be okay
> to use synchronous methods.

Ok thanks.  I'll use aio APIs next time around.  Just to be clear, does 
"eliminating the callback" mean I don't have to use a bottom-half if I 
use coroutine reads/writes?

-- 
Regards,
Corey Bryant

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support
  2013-05-24 15:37       ` Kevin Wolf
@ 2013-05-24 15:47         ` Corey Bryant
  0 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-24 15:47 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: aliguori, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino



On 05/24/2013 11:37 AM, Kevin Wolf wrote:
> On 24.05.2013 at 17:33, Corey Bryant wrote:
>>
>>
>> On 05/24/2013 09:06 AM, Kevin Wolf wrote:
>>> On 23.05.2013 at 19:44, Corey Bryant wrote:
>>>> Provides low-level VNVRAM functionality that reads and writes data,
>>>> such as an entry's binary blob, to a drive image using the block
>>>> driver.
>>>>
>>>> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
>>>
>>>> +/*
>>>> + * Increase the drive size if it's too small to fit the VNVRAM data
>>>> + */
>>>> +static int vnvram_drv_adjust_size(VNVRAM *vnvram)
>>>> +{
>>>> +    int rc = 0;
>>>> +    int64_t needed_size;
>>>> +
>>>> +    needed_size = 0;
>>>> +
>>>> +    if (bdrv_getlength(vnvram->bds) < needed_size) {
>>>> +        rc = bdrv_truncate(vnvram->bds, needed_size);
>>>> +        if (rc != 0) {
>>>> +            DPRINTF("%s: VNVRAM drive too small\n", __func__);
>>>> +        }
>>>> +    }
>>>> +
>>>> +    return rc;
>>>> +}
>>>
>>> This function doesn't make a whole lot of sense. It truncates the file
>>> to size 0 if and only if bdrv_getlength() returns an error.
>>>
>>
>> There's a later patch that adds a "get size" function and changes
>> the initialization of needed_size to the actual size needed to store
>> VNVRAM data.  Anyway I should probably just include that change in
>> this patch.  I think I'll still need this function or part of it
>> with the new simplified approach that it looks like we're going to
>> take.
>
> Okay. But even then, do you really want to truncate on errors?
>
> Kevin
>
>
>

True, it'll need something to account for bdrv_getlength() failures and 
not truncate in that case.
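
Roughly along these lines, for instance (just a sketch, not the final patch; needed_size is made a parameter here to stand in for whatever the later "get size" calculation produces):

static int vnvram_drv_adjust_size(VNVRAM *vnvram, int64_t needed_size)
{
    int64_t cur_size = bdrv_getlength(vnvram->bds);

    if (cur_size < 0) {
        DPRINTF("%s: bdrv_getlength failed\n", __func__);
        return cur_size;     /* propagate the error, don't truncate */
    }

    if (cur_size < needed_size) {
        return bdrv_truncate(vnvram->bds, needed_size);
    }

    return 0;
}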

-- 
Regards,
Corey Bryant

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-24 15:39       ` Corey Bryant
@ 2013-05-27  8:40         ` Stefan Hajnoczi
  0 siblings, 0 replies; 26+ messages in thread
From: Stefan Hajnoczi @ 2013-05-27  8:40 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, aliguori, Stefan Berger, Stefan Hajnoczi, mdroth,
	qemu-devel, jschopp, lcapitulino

On Fri, May 24, 2013 at 11:39:09AM -0400, Corey Bryant wrote:
> 
> 
> On 05/24/2013 08:36 AM, Stefan Hajnoczi wrote:
> >On Fri, May 24, 2013 at 08:13:27AM -0400, Stefan Berger wrote:
> >>On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
> >>>On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
> >>>>This patch series provides VNVRAM persistent storage support that
> >>>>QEMU can use internally.  The initial target user will be a software
> >>>>vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
> >>>>reboot/migrate and retain the keys.
> >>>>
> >>>>This support uses QEMU's block driver to provide persistent storage
> >>>>by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
> >>>>drive image is provided with the -drive command line option just like
> >>>>any other drive image and the vnvram_create() API will find it.
> >>>>
> >>>>The APIs allow for VNVRAM entries to be registered, one at a time,
> >>>>each with a maximum blob size.  Entry blobs can then be read/written
> >>>>from/to an entry on the drive.  Here's an example of usage:
> >>>>
> >>>>VNVRAM *vnvram;
> >>>>int errcode
> >>>>const VNVRAMEntryName entry_name;
> >>>>const char *blob_w = "blob data";
> >>>>char *blob_r;
> >>>>uint32_t blob_r_size;
> >>>>
> >>>>vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
> >>>>strcpy((char *)entry_name, "first-entry");
> >>>VNVRAMEntryName is very prone to buffer overflow.  I hope real code
> >>>doesn't use strcpy().  The cast is ugly, please don't hide the type.
> >>>
> >>>>vnvram_register_entry(vnvram, &entry_name, 1024);
> >>>>vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
> >>>>vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
> >>>These are synchronous functions.  If I/O is involved then this is a
> >>>problem: QEMU will be blocked waiting for host I/O to complete and the
> >>>big QEMU lock is held.  This can cause poor guest interactivity and poor
> >>>scalability because vcpus cannot make progress, neither can the QEMU
> >>>monitor respond.
> >>
> >>The vTPM is going to run as a thread and will have to write state
> >>blobs into a bdrv. The above functions will typically be called from
> >>this thread. When I originally wrote the code, the vTPM thread could
> >>not write the blobs into bdrv directly, so I had to resort to
> >>sending a message to the main QEMU thread to write the data to the
> >>bdrv. How else could we do this?
> >
> >How else: use asynchronous APIs like bdrv_aio_writev() or the coroutine
> >versions (which eliminate the need for callbacks) like bdrv_co_writev().
> >
> >I'm preparing patches that allow the QEMU block layer to be used safely
> >outside the QEMU global mutex.  Once this is possible it would be okay
> >to use synchronous methods.
> 
> Ok thanks.  I'll use aio APIs next time around.  Just to be clear,
> does "eliminating the callback" mean I don't have to use a
> bottom-half if I use coroutine reads/writes?

I've only skimmed the patches but I think vTPM runs in its own thread
and uses a BH to kick off I/O requests since the block layer must be
called with the QEMU global mutex held.

In this case you still need the BH since its purpose is to run block
layer code in a thread that holds the QEMU global mutex.
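
In other words the overall shape stays something like the snippet below; the callback name is made up, only qemu_bh_new() and qemu_bh_schedule() are the real APIs:

/* main thread, at setup time: the callback will run in the main loop,
 * which holds the QEMU global mutex */
QEMUBH *bh = qemu_bh_new(vnvram_rwrequest_bh_cb, vnvram);

/* vTPM thread: after queuing a VNVRAMRWRequest, kick the main loop;
 * the BH callback then issues the actual block-layer calls */
qemu_bh_schedule(bh);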

Stefan

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
  2013-05-24 15:27       ` Corey Bryant
@ 2013-05-29 13:34         ` Anthony Liguori
  0 siblings, 0 replies; 26+ messages in thread
From: Anthony Liguori @ 2013-05-29 13:34 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, stefanb, mdroth, qemu-devel, jschopp, stefanha, lcapitulino

Corey Bryant <coreyb@linux.vnet.ibm.com> writes:

> On 05/23/2013 03:15 PM, Anthony Liguori wrote:
>> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>>
>>> On 05/23/2013 02:03 PM, Anthony Liguori wrote:
>>>> Corey Bryant <coreyb@linux.vnet.ibm.com> writes:
>>>>
>>> One of the difficulties in virtualizing a TPM is that it doesn't support
>>> SR-IOV.  So the existing passthrough vTPM can only be used by one guest.
>>>    We're planning to provide a software emulated vTPM that uses libtpms
>>> and it needs to store blobs somewhere that is persistent.  We can't
>>> store blobs in the host TPM's hardware NVRAM.  So we have to virtualize
>>> it in software.  And we figured we'd provide a persistent storage
>>> mechanism that other parts of QEMU could use rather than limit it to
>>> just the vTPM's use.
>>
>> I think you are misunderstanding my feedback.
>>
>> See http://mid.gmane.org/87ehf03dgw.fsf@codemonkey.ws
>>
>
> It looks like we'll be able to follow what you said in that thread, 
> specifically:
>
> "Just make the TPM have a DRIVE property, drop all notion of
> NVRAM/blobstore, and use fixed offsets into the BlockDriverState for
> each blob."
>
> This will limit the functionality to only the vTPM, but it sounds like 
> that's desired.

Ack.

> Also it looks like vTPM 1.2 will only have 4 blobs and 
> we'll know their max sizes, so we should be able to use fixed offsets 
> for them.  This will simplify the code quite a bit.

Ack.

> I assume we'll still need to use a bottom-half to send read/write 
> requests to the main thread.  And from the sounds of it the reads/writes 
> will need to be asynchronous.

Yes.

>
> Does this sound ok?

Yup.

Regards,

Anthony Liguori

>
> -- 
> Regards,
> Corey Bryant
>
>
>
>>>>>
>>>>> VNVRAM *vnvram;
>>>>> int errcode
>>>>> const VNVRAMEntryName entry_name;
>>>>> const char *blob_w = "blob data";
>>>>> char *blob_r;
>>>>> uint32_t blob_r_size;
>>>>>
>>>>> vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
>>>>> strcpy((char *)entry_name, "first-entry");
>>>>> vnvram_register_entry(vnvram, &entry_name, 1024);
>>>>> vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
>>>>> vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
>>>>> vnvram_delete(vnvram);
>>>>>
>>>>> Thanks,
>>>>> Corey
>>>>>
>>>>> Corey Bryant (7):
>>>>>     vnvram: VNVRAM bdrv support
>>>>>     vnvram: VNVRAM in-memory support
>>>>>     vnvram: VNVRAM bottom-half r/w scheduling support
>>>>>     vnvram: VNVRAM internal APIs
>>>>>     vnvram: VNVRAM additional debug support
>>>>>     main: Initialize VNVRAM
>>>>>     monitor: QMP/HMP support for retrieving VNVRAM details
>>>>>
>>>>>    Makefile.objs    |    2 +
>>>>>    hmp.c            |   32 ++
>>>>>    hmp.h            |    1 +
>>>>>    monitor.c        |    7 +
>>>>>    qapi-schema.json |   47 ++
>>>>>    qmp-commands.hx  |   41 ++
>>>>>    vl.c             |    6 +
>>>>>    vnvram.c         | 1254 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>    vnvram.h         |   36 ++
>>>>>    9 files changed, 1426 insertions(+), 0 deletions(-)
>>>>>    create mode 100644 vnvram.c
>>>>>    create mode 100644 vnvram.h
>>>>
>>>>
>>
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details
  2013-05-23 17:44 ` [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details Corey Bryant
  2013-05-23 17:59   ` Eric Blake
@ 2013-05-29 17:15   ` Luiz Capitulino
  2013-05-29 17:34     ` Corey Bryant
  1 sibling, 1 reply; 26+ messages in thread
From: Luiz Capitulino @ 2013-05-29 17:15 UTC (permalink / raw)
  To: Corey Bryant
  Cc: kwolf, aliguori, stefanb, qemu-devel, mdroth, jschopp, stefanha

On Thu, 23 May 2013 13:44:47 -0400
Corey Bryant <coreyb@linux.vnet.ibm.com> wrote:

> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>

Looks good to me, only one small nit below.

> ---
>  hmp.c            |   32 ++++++++++++++++++++++++
>  hmp.h            |    1 +
>  monitor.c        |    7 +++++
>  qapi-schema.json |   47 +++++++++++++++++++++++++++++++++++
>  qmp-commands.hx  |   41 +++++++++++++++++++++++++++++++
>  vnvram.c         |   71 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  6 files changed, 199 insertions(+), 0 deletions(-)
> 
> diff --git a/hmp.c b/hmp.c
> index 4fb76ec..a144f73 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -653,6 +653,38 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
>      qapi_free_TPMInfoList(info_list);
>  }
>  
> +void hmp_info_vnvram(Monitor *mon, const QDict *dict)
> +{
> +    VNVRAMInfoList *info_list, *info;
> +    Error *err = NULL;
> +    unsigned int c = 0;
> +
> +    info_list = qmp_query_vnvram(&err);
> +    if (err) {
> +        monitor_printf(mon, "VNVRAM not found\n");
> +        error_free(err);
> +        return;
> +    }
> +
> +    for (info = info_list; info; info = info->next) {
> +        VNVRAMInfo *ni = info->value;
> +        VNVRAMEntryInfoList *einfo_list = ni->entries, *einfo;
> +        unsigned int d = 0;
> +        monitor_printf(mon, "vnvram%u: drive-id=%s "
> +                       "virtual-disk-size=%"PRId64" vnvram-size=%"PRIu64"\n",
> +                       c++, ni->drive_id, ni->virtual_disk_size,
> +                       ni->vnvram_size);
> +
> +        for (einfo = einfo_list; einfo; einfo = einfo->next) {
> +            VNVRAMEntryInfo *nei = einfo->value;
> +            monitor_printf(mon, "  entry%u: name=%s cur-size=%"PRIu64" "
> +                           "max-size=%"PRIu64"\n",
> +                           d++, nei->name, nei->cur_size, nei->max_size);
> +        }
> +    }
> +    qapi_free_VNVRAMInfoList(info_list);
> +}
> +
>  void hmp_quit(Monitor *mon, const QDict *qdict)
>  {
>      monitor_suspend(mon);
> diff --git a/hmp.h b/hmp.h
> index 95fe76e..e26daf2 100644
> --- a/hmp.h
> +++ b/hmp.h
> @@ -37,6 +37,7 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict);
>  void hmp_info_pci(Monitor *mon, const QDict *qdict);
>  void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
>  void hmp_info_tpm(Monitor *mon, const QDict *qdict);
> +void hmp_info_vnvram(Monitor *mon, const QDict *dict);
>  void hmp_quit(Monitor *mon, const QDict *qdict);
>  void hmp_stop(Monitor *mon, const QDict *qdict);
>  void hmp_system_reset(Monitor *mon, const QDict *qdict);
> diff --git a/monitor.c b/monitor.c
> index 62aaebe..c10fe15 100644
> --- a/monitor.c
> +++ b/monitor.c
> @@ -2764,6 +2764,13 @@ static mon_cmd_t info_cmds[] = {
>          .mhandler.cmd = hmp_info_tpm,
>      },
>      {
> +        .name       = "vnvram",
> +        .args_type  = "",
> +        .params     = "",
> +        .help       = "show VNVRAM information",
> +        .mhandler.cmd = hmp_info_vnvram,
> +    },
> +    {
>          .name       = NULL,
>      },
>  };
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 9302e7d..73d42d6 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -3619,3 +3619,50 @@
>              '*cpuid-input-ecx': 'int',
>              'cpuid-register': 'X86CPURegister32',
>              'features': 'int' } }
> +
> +# @VNVRAMEntryInfo:
> +#
> +# Information about an entry in the VNVRAM.
> +#
> +# @name: name of the entry
> +#
> +# @cur-size: current size of the entry's blob in bytes

It's preferable not to abbreviate; you can have current-size.
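
i.e., if renamed (and with the trailing comma Eric pointed out dropped), the declaration would become:

{ 'type': 'VNVRAMEntryInfo',
  'data': {'name': 'str', 'current-size': 'int', 'max-size': 'int'} }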

> +#
> +# @max-size: max size of the entry's blob in bytes
> +#
> +# Since: 1.6
> +#
> +##
> +{ 'type': 'VNVRAMEntryInfo',
> +  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
> +
> +##
> +# @VNVRAMInfo:
> +#
> +# Information about the VNVRAM device.
> +#
> +# @drive-id: ID of the VNVRAM (and associated drive)
> +#
> +# @virtual-disk-size: Virtual size of the associated disk drive in bytes
> +#
> +# @vnvram-size: Size of the VNVRAM in bytes
> +#
> +# @entries: Array of @VNVRAMEntryInfo
> +#
> +# Since: 1.6
> +#
> +##
> +{ 'type': 'VNVRAMInfo',
> +  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
> +           'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
> +
> +##
> +# @query-vnvram:
> +#
> +# Return information about the VNVRAM devices.
> +#
> +# Returns: @VNVRAMInfo on success
> +#
> +# Since: 1.6
> +##
> +{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
> diff --git a/qmp-commands.hx b/qmp-commands.hx
> index ffd130e..56a57b7 100644
> --- a/qmp-commands.hx
> +++ b/qmp-commands.hx
> @@ -2932,3 +2932,44 @@ Example:
>  <- { "return": {} }
>  
>  EQMP
> +
> +    {
> +        .name       = "query-vnvram",
> +        .args_type  = "",
> +        .mhandler.cmd_new = qmp_marshal_input_query_vnvram,
> +    },
> +
> +SQMP
> +query-vnvram
> +------------
> +
> +Show VNVRAM info.
> +
> +Return a json-array of json-objects representing VNVRAMs.  Each VNVRAM
> +is described by a json-object with the following:
> +
> +- "drive-id": ID of the VNVRAM (json-string)
> +- "vitual-disk-size": Virtual size of associated disk drive in bytes (json-int)
> +- "vnvram-size": Size of the VNVRAM in bytes (json-int)
> +- "entries": json-array of json-objects representing entries
> +
> +Each entry is described by a json-object with the following:
> +
> +- "name": Name of the entry (json-string)
> +- "cur-size": Current size of the entry's blob in bytes (json-int)
> +- "max-size": Max size of the entry's blob in bytes (json-int)
> +
> +Example:
> +
> +-> { "execute": "query-vnvram" }
> +<- {"return": [
> +      { "vnvram-size": 2050, "virtual-disk-size": 2000896,
> +        "drive-id": "drive-ide0-0-0",
> +        "entries": [
> +         { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
> +         { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
> +         { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
> +      } ]
> +   }
> +
> +EQMP
> diff --git a/vnvram.c b/vnvram.c
> index 9c4f64f..a5fe101 100644
> --- a/vnvram.c
> +++ b/vnvram.c
> @@ -16,6 +16,7 @@
>  #include "monitor/monitor.h"
>  #include "qemu/thread.h"
>  #include "sysemu/sysemu.h"
> +#include "qmp-commands.h"
>  
>  /*
>  #define VNVRAM_DEBUG
> @@ -897,6 +898,76 @@ static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
>      return rc;
>  }
>  
> +/************************ VNVRAM monitor *****************************/
> +/* VNVRAM functions that support QMP and HMP commands                */
> +/*********************************************************************/
> +
> +/*
> + * Get VNVRAM entry details for an in-memory entry
> + */
> +static VNVRAMEntryInfo *vnvram_get_vnvram_entry_info(VNVRAMEntry *entry)
> +{
> +    VNVRAMEntryInfo *res = g_new0(VNVRAMEntryInfo, 1);
> +
> +    res->name = g_strndup(entry->name, sizeof(entry->name));
> +    res->cur_size = entry->cur_size;
> +    res->max_size = entry->max_size;
> +
> +    return res;
> +}
> +
> +/*
> + * Get VNVRAM details based on the VNVRAM struct
> + */
> +static VNVRAMInfo *vnvram_get_vnvram_info(VNVRAM *vnvram)
> +{
> +    VNVRAMEntry *entry;
> +    VNVRAMEntryInfoList *info, *head = NULL, *cur = NULL;
> +    VNVRAMInfo *res = g_new0(VNVRAMInfo, 1);
> +
> +    res->drive_id = g_strdup(vnvram->drv_id);
> +    res->virtual_disk_size = bdrv_getlength(vnvram->bds);
> +    res->vnvram_size = vnvram_get_size(vnvram);
> +
> +    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
> +        info = g_new0(VNVRAMEntryInfoList, 1);
> +        info->value = vnvram_get_vnvram_entry_info(entry);
> +
> +        if (!cur) {
> +            head = cur = info;
> +        } else {
> +            cur->next = info;
> +            cur = info;
> +        }
> +    }
> +    res->entries = head;
> +
> +    return res;
> +}
> +
> +/*
> + * Get VNVRAM data from the in-memory VNVRAM struct and entries
> + */
> +VNVRAMInfoList *qmp_query_vnvram(Error **errp)
> +{
> +    VNVRAM *vnvram;
> +    VNVRAMInfoList *info, *head = NULL, *cur = NULL;
> +
> +    QLIST_FOREACH(vnvram, &vnvrams, list) {
> +        info = g_new0(VNVRAMInfoList, 1);
> +        info->value = vnvram_get_vnvram_info(vnvram);
> +
> +        if (!cur) {
> +            head = cur = info;
> +        } else {
> +            cur->next = info;
> +            cur = info;
> +        }
> +    }
> +
> +    return head;
> +}
> +
>  /************************* VNVRAM APIs *******************************/
>  /* VNVRAM APIs that can be used by QEMU to provide persistent storage*/
>  /*********************************************************************/

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details
  2013-05-29 17:15   ` Luiz Capitulino
@ 2013-05-29 17:34     ` Corey Bryant
  0 siblings, 0 replies; 26+ messages in thread
From: Corey Bryant @ 2013-05-29 17:34 UTC (permalink / raw)
  To: Luiz Capitulino
  Cc: kwolf, aliguori, stefanb, mdroth, qemu-devel, jschopp, stefanha



On 05/29/2013 01:15 PM, Luiz Capitulino wrote:
> On Thu, 23 May 2013 13:44:47 -0400
> Corey Bryant <coreyb@linux.vnet.ibm.com> wrote:
>
>> Signed-off-by: Corey Bryant <coreyb@linux.vnet.ibm.com>
>
> Looks good to me, only one small nit below.
>

It looks like this series is going to get dropped, but thanks for the 
review!

-- 
Regards,
Corey Bryant

>> ---
>>   hmp.c            |   32 ++++++++++++++++++++++++
>>   hmp.h            |    1 +
>>   monitor.c        |    7 +++++
>>   qapi-schema.json |   47 +++++++++++++++++++++++++++++++++++
>>   qmp-commands.hx  |   41 +++++++++++++++++++++++++++++++
>>   vnvram.c         |   71 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>   6 files changed, 199 insertions(+), 0 deletions(-)
>>
>> diff --git a/hmp.c b/hmp.c
>> index 4fb76ec..a144f73 100644
>> --- a/hmp.c
>> +++ b/hmp.c
>> @@ -653,6 +653,38 @@ void hmp_info_tpm(Monitor *mon, const QDict *qdict)
>>       qapi_free_TPMInfoList(info_list);
>>   }
>>
>> +void hmp_info_vnvram(Monitor *mon, const QDict *dict)
>> +{
>> +    VNVRAMInfoList *info_list, *info;
>> +    Error *err = NULL;
>> +    unsigned int c = 0;
>> +
>> +    info_list = qmp_query_vnvram(&err);
>> +    if (err) {
>> +        monitor_printf(mon, "VNVRAM not found\n");
>> +        error_free(err);
>> +        return;
>> +    }
>> +
>> +    for (info = info_list; info; info = info->next) {
>> +        VNVRAMInfo *ni = info->value;
>> +        VNVRAMEntryInfoList *einfo_list = ni->entries, *einfo;
>> +        unsigned int d = 0;
>> +        monitor_printf(mon, "vnvram%u: drive-id=%s "
>> +                       "virtual-disk-size=%"PRId64" vnvram-size=%"PRIu64"\n",
>> +                       c++, ni->drive_id, ni->virtual_disk_size,
>> +                       ni->vnvram_size);
>> +
>> +        for (einfo = einfo_list; einfo; einfo = einfo->next) {
>> +            VNVRAMEntryInfo *nei = einfo->value;
>> +            monitor_printf(mon, "  entry%u: name=%s cur-size=%"PRIu64" "
>> +                           "max-size=%"PRIu64"\n",
>> +                           d++, nei->name, nei->cur_size, nei->max_size);
>> +        }
>> +    }
>> +    qapi_free_VNVRAMInfoList(info_list);
>> +}
>> +
>>   void hmp_quit(Monitor *mon, const QDict *qdict)
>>   {
>>       monitor_suspend(mon);
>> diff --git a/hmp.h b/hmp.h
>> index 95fe76e..e26daf2 100644
>> --- a/hmp.h
>> +++ b/hmp.h
>> @@ -37,6 +37,7 @@ void hmp_info_balloon(Monitor *mon, const QDict *qdict);
>>   void hmp_info_pci(Monitor *mon, const QDict *qdict);
>>   void hmp_info_block_jobs(Monitor *mon, const QDict *qdict);
>>   void hmp_info_tpm(Monitor *mon, const QDict *qdict);
>> +void hmp_info_vnvram(Monitor *mon, const QDict *dict);
>>   void hmp_quit(Monitor *mon, const QDict *qdict);
>>   void hmp_stop(Monitor *mon, const QDict *qdict);
>>   void hmp_system_reset(Monitor *mon, const QDict *qdict);
>> diff --git a/monitor.c b/monitor.c
>> index 62aaebe..c10fe15 100644
>> --- a/monitor.c
>> +++ b/monitor.c
>> @@ -2764,6 +2764,13 @@ static mon_cmd_t info_cmds[] = {
>>           .mhandler.cmd = hmp_info_tpm,
>>       },
>>       {
>> +        .name       = "vnvram",
>> +        .args_type  = "",
>> +        .params     = "",
>> +        .help       = "show VNVRAM information",
>> +        .mhandler.cmd = hmp_info_vnvram,
>> +    },
>> +    {
>>           .name       = NULL,
>>       },
>>   };
>> diff --git a/qapi-schema.json b/qapi-schema.json
>> index 9302e7d..73d42d6 100644
>> --- a/qapi-schema.json
>> +++ b/qapi-schema.json
>> @@ -3619,3 +3619,50 @@
>>               '*cpuid-input-ecx': 'int',
>>               'cpuid-register': 'X86CPURegister32',
>>               'features': 'int' } }
>> +
>> +# @VNVRAMEntryInfo:
>> +#
>> +# Information about an entry in the VNVRAM.
>> +#
>> +# @name: name of the entry
>> +#
>> +# @cur-size: current size of the entry's blob in bytes
>
> It's preferable not to abbreviate; you can have current-size.
>
>> +#
>> +# @max-size: max size of the entry's blob in bytes
>> +#
>> +# Since: 1.6
>> +#
>> +##
>> +{ 'type': 'VNVRAMEntryInfo',
>> +  'data': {'name': 'str', 'cur-size': 'int', 'max-size': 'int', } }
>> +
>> +##
>> +# @VNVRAMInfo:
>> +#
>> +# Information about the VNVRAM device.
>> +#
>> +# @drive-id: ID of the VNVRAM (and associated drive)
>> +#
>> +# @virtual-disk-size: Virtual size of the associated disk drive in bytes
>> +#
>> +# @vnvram-size: Size of the VNVRAM in bytes
>> +#
>> +# @entries: Array of @VNVRAMEntryInfo
>> +#
>> +# Since: 1.6
>> +#
>> +##
>> +{ 'type': 'VNVRAMInfo',
>> +  'data': {'drive-id': 'str', 'virtual-disk-size': 'int',
>> +           'vnvram-size': 'int', 'entries' : ['VNVRAMEntryInfo']} }
>> +
>> +##
>> +# @query-vnvram:
>> +#
>> +# Return information about the VNVRAM devices.
>> +#
>> +# Returns: @VNVRAMInfo on success
>> +#
>> +# Since: 1.6
>> +##
>> +{ 'command': 'query-vnvram', 'returns': ['VNVRAMInfo'] }
>> diff --git a/qmp-commands.hx b/qmp-commands.hx
>> index ffd130e..56a57b7 100644
>> --- a/qmp-commands.hx
>> +++ b/qmp-commands.hx
>> @@ -2932,3 +2932,44 @@ Example:
>>   <- { "return": {} }
>>
>>   EQMP
>> +
>> +    {
>> +        .name       = "query-vnvram",
>> +        .args_type  = "",
>> +        .mhandler.cmd_new = qmp_marshal_input_query_vnvram,
>> +    },
>> +
>> +SQMP
>> +query-vnvram
>> +------------
>> +
>> +Show VNVRAM info.
>> +
>> +Return a json-array of json-objects representing VNVRAMs.  Each VNVRAM
>> +is described by a json-object with the following:
>> +
>> +- "drive-id": ID of the VNVRAM (json-string)
>> +- "vitual-disk-size": Virtual size of associated disk drive in bytes (json-int)
>> +- "vnvram-size": Size of the VNVRAM in bytes (json-int)
>> +- "entries": json-array of json-objects representing entries
>> +
>> +Each entry is described by a json-object with the following:
>> +
>> +- "name": Name of the entry (json-string)
>> +- "cur-size": Current size of the entry's blob in bytes (json-int)
>> +- "max-size": Max size of the entry's blob in bytes (json-int)
>> +
>> +Example:
>> +
>> +-> { "execute": "query-vnvram" }
>> +<- {"return": [
>> +      { "vnvram-size": 2050, "virtual-disk-size": 2000896,
>> +        "drive-id": "drive-ide0-0-0",
>> +        "entries": [
>> +         { "name": "this-entry", "cur-size": 2048, "max-size": 21504 },
>> +         { "name": "that-entry", "cur-size": 1024, "max-size": 21504 },
>> +         { "name": "other-entry", "cur-size": 4096, "max-size": 41472 } ]
>> +      } ]
>> +   }
>> +
>> +EQMP
>> diff --git a/vnvram.c b/vnvram.c
>> index 9c4f64f..a5fe101 100644
>> --- a/vnvram.c
>> +++ b/vnvram.c
>> @@ -16,6 +16,7 @@
>>   #include "monitor/monitor.h"
>>   #include "qemu/thread.h"
>>   #include "sysemu/sysemu.h"
>> +#include "qmp-commands.h"
>>
>>   /*
>>   #define VNVRAM_DEBUG
>> @@ -897,6 +898,76 @@ static int vnvram_rwrequest_schedule(VNVRAMRWRequest *rwr)
>>       return rc;
>>   }
>>
>> +/************************ VNVRAM monitor *****************************/
>> +/* VNVRAM functions that support QMP and HMP commands                */
>> +/*********************************************************************/
>> +
>> +/*
>> + * Get VNVRAM entry details for an in-memory entry
>> + */
>> +static VNVRAMEntryInfo *vnvram_get_vnvram_entry_info(VNVRAMEntry *entry)
>> +{
>> +    VNVRAMEntryInfo *res = g_new0(VNVRAMEntryInfo, 1);
>> +
>> +    res->name = g_strndup(entry->name, sizeof(entry->name));
>> +    res->cur_size = entry->cur_size;
>> +    res->max_size = entry->max_size;
>> +
>> +    return res;
>> +}
>> +
>> +/*
>> + * Get VNVRAM details based on the VNVRAM struct
>> + */
>> +static VNVRAMInfo *vnvram_get_vnvram_info(VNVRAM *vnvram)
>> +{
>> +    VNVRAMEntry *entry;
>> +    VNVRAMEntryInfoList *info, *head = NULL, *cur = NULL;
>> +    VNVRAMInfo *res = g_new0(VNVRAMInfo, 1);
>> +
>> +    res->drive_id = g_strdup(vnvram->drv_id);
>> +    res->virtual_disk_size = bdrv_getlength(vnvram->bds);
>> +    res->vnvram_size = vnvram_get_size(vnvram);
>> +
>> +    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
>> +        info = g_new0(VNVRAMEntryInfoList, 1);
>> +        info->value = vnvram_get_vnvram_entry_info(entry);
>> +
>> +        if (!cur) {
>> +            head = cur = info;
>> +        } else {
>> +            cur->next = info;
>> +            cur = info;
>> +        }
>> +    }
>> +    res->entries = head;
>> +
>> +    return res;
>> +}
>> +
>> +/*
>> + * Get VNVRAM data from the in-memory VNVRAM struct and entries
>> + */
>> +VNVRAMInfoList *qmp_query_vnvram(Error **errp)
>> +{
>> +    VNVRAM *vnvram;
>> +    VNVRAMInfoList *info, *head = NULL, *cur = NULL;
>> +
>> +    QLIST_FOREACH(vnvram, &vnvrams, list) {
>> +        info = g_new0(VNVRAMInfoList, 1);
>> +        info->value = vnvram_get_vnvram_info(vnvram);
>> +
>> +        if (!cur) {
>> +            head = cur = info;
>> +        } else {
>> +            cur->next = info;
>> +            cur = info;
>> +        }
>> +    }
>> +
>> +    return head;
>> +}
>> +
>>   /************************* VNVRAM APIs *******************************/
>>   /* VNVRAM APIs that can be used by QEMU to provide persistent storage*/
>>   /*********************************************************************/
>
>
>
>
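
For reference, a sketch of the HMP counterpart's output, pieced together from the monitor_printf() format strings in hmp_info_vnvram() above and reusing the values from the query-vnvram example; the actual output naturally depends on the drive and the registered entries:

(qemu) info vnvram
vnvram0: drive-id=drive-ide0-0-0 virtual-disk-size=2000896 vnvram-size=2050
  entry0: name=this-entry cur-size=2048 max-size=21504
  entry1: name=that-entry cur-size=1024 max-size=21504
  entry2: name=other-entry cur-size=4096 max-size=41472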

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread

Thread overview: 26+ messages
2013-05-23 17:44 [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 1/7] vnvram: VNVRAM bdrv support Corey Bryant
2013-05-24 13:06   ` Kevin Wolf
2013-05-24 15:33     ` Corey Bryant
2013-05-24 15:37       ` Kevin Wolf
2013-05-24 15:47         ` Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 3/7] vnvram: VNVRAM bottom-half r/w scheduling support Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 4/7] vnvram: VNVRAM internal APIs Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 5/7] vnvram: VNVRAM additional debug support Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 6/7] main: Initialize VNVRAM Corey Bryant
2013-05-23 17:44 ` [Qemu-devel] [PATCH 7/7] monitor: QMP/HMP support for retrieving VNVRAM details Corey Bryant
2013-05-23 17:59   ` Eric Blake
2013-05-23 18:43     ` Corey Bryant
2013-05-29 17:15   ` Luiz Capitulino
2013-05-29 17:34     ` Corey Bryant
2013-05-23 18:03 ` [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage Anthony Liguori
2013-05-23 18:41   ` Corey Bryant
2013-05-23 19:15     ` Anthony Liguori
2013-05-24 15:27       ` Corey Bryant
2013-05-29 13:34         ` Anthony Liguori
2013-05-24  9:59 ` Stefan Hajnoczi
2013-05-24 12:13   ` Stefan Berger
2013-05-24 12:36     ` Stefan Hajnoczi
2013-05-24 15:39       ` Corey Bryant
2013-05-27  8:40         ` Stefan Hajnoczi
