* [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
@ 2010-10-22 18:45 Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs Michael Roth
                   ` (11 more replies)
  0 siblings, 12 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

This set of patches is meant to be applied on top of the Virtproxy v1 patchset.

OVERVIEW:

There is a wide range of use cases motivating the need for a guest agent of some sort to extend the functionality/usability/control offered by QEMU. Some examples include graceful guest shutdown/reboot and notifications thereof, copy/paste syncing between host/guest, guest statistics gathering, file access, etc.

Ideally these would all be served by a single, easily extensible agent that can be deployed in a wide range of guests. Virtagent is an XMLRPC server integrated into the Virtproxy guest daemon and aimed at providing this type of functionality.

This code is very rough, and I'll try to document most of the bugs/shortcomings we're aware of in this version of the patchset. The main goal of this RFC is to get feedback on the types of core functionality we would need in an agent of this sort, as well as feedback on the general approach/architecture implemented here. Any feedback is greatly appreciated, however.

To start off this discussion, there have been some recent posts about how much an agent of this sort overlaps with the goals of the Matahari project (https://fedorahosted.org/matahari/). While both of these approaches are at least *feasible*, our use cases require the ability to deploy to guests which may not support virtio-serial, which currently rules Matahari out. This support could be added, however: the virtproxy layer used by this agent actually lends itself to extending such support to other agents/services, or a more direct approach could be taken by adding support for isa-serial.

The question that remains, however, is one of scope. This agent is intended purely as a means to extend qemu's abilities to perform hypervisor-specific work, whereas Matahari aims to extend general system management capabilities to guests (please correct me if I'm oversimplifying). Virtagent cannot meet Matahari's goals, whereas Matahari technically can meet Virtagent's. My contention, however, is that the qemu-specific scope/API and shared code base with a more closely integrated agent will provide a more expedient route to functional improvements to qemu, while still allowing for the additional functionality/management capabilities provided by something like Matahari.

DESIGN:

There are actually 2 RPC servers:

1) a server in the guest, integrated into the Virtproxy guest daemon, which handles RPC requests from QEMU
2) a server in the host (integrated into the Virtproxy host daemon, which we plan to integrate directly into qemu) to handle RPC requests sent by the guest agent (mainly for handling asynchronous events reported by the agent).

At the Virtagent level, communication is done via standard RPCs (HTTP/TCP between host and guest). Virtproxy transparently handles transport over a network or isa/virtio serial channel, allowing the agent to be deployed on older guests which may not support virtio-serial.
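
To make this concrete, a getfile request encoded by this series would look roughly like the following on the wire (header format per write_hdr() in patch 01; the exact XML emitted by xmlrpc-c may differ slightly, and the body length is left as a placeholder):

    POST /RPC2 HTTP/1.1
    Content-Type: text/xml
    Content-Length: <body length>

    <?xml version="1.0" encoding="UTF-8"?>
    <methodCall>
      <methodName>getfile</methodName>
      <params>
        <param><value><string>/proc/meminfo</string></value></param>
      </params>
    </methodCall>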

Currently there are only 2 RPCs implemented for the guest server (getfile and getdmesg), and 0 for the host. Additional RPCs can be added fairly easily, but are dependent on feedback from here and elsewhere. ping/status, shutdown, and reboot are likely candidates (although the latter 2 will likely require asynchronous notifications to the host RPC server to implement reliably).
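
To give a sense of what adding an RPC involves, here is a minimal sketch of a ping handler using the registration scheme from the daemon patches (the handler itself is hypothetical and not part of this series):

    /* hypothetical ping RPC: a successful return acts as the "pong" */
    static xmlrpc_value *ping(xmlrpc_env *env,
                              xmlrpc_value *param,
                              void *user_data)
    {
        return xmlrpc_build_value(env, "s", "pong");
    }

    static RPCFunction guest_functions[] = {
        { .func = getfile,
          .func_name = "getfile" },
        { .func = getdmesg,
          .func_name = "getdmesg" },
        { .func = ping,
          .func_name = "ping" },
        { NULL, NULL }
    };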

EXAMPLE USAGE:

The command-line options are a little convoluted right now; this will be addressed in later revisions.

 - Configure guest agent to talk to host via virtio-serial
    # start guest with virtio-serial. for example (RHEL6s13):
    qemu \
    -device virtio-serial \
    -chardev socket,path=/tmp/test0-virtioconsole.sock,server,nowait,id=test0 \
    -device virtconsole,chardev=test0,name=test0 \
    -chardev socket,path=/tmp/test1-virtio-serial.sock,server,nowait,id=test1 \
    -device virtserialport,chardev=test1,name=test1 \
    -chardev socket,path=/tmp/test2-virtio-serial.sock,server,nowait,id=test2 \
    -device virtserialport,chardev=test2,name=test2 \
    -monitor stdio
    ...
    # in the host:
    ./qemu-vp -c unix-connect:/tmp/test2-virtio-serial.sock:- \
              -o virtagent:/tmp/virtagent-guest-client.sock:-
    # in the guest:
    ./qemu-vp -c virtserial-open:/dev/virtio-ports/test2:- -g
    ...
    # monitor commands
    (qemu) agent_viewdmesg
    [139311.710326] wlan0: deauthenticating from 00:30:bd:f7:12:d5 by local choice (reason=3)
    [139323.469857] wlan0: deauthenticating from 00:21:29:cd:41:ee by local choice (reason=3)
    ...
    [257683.375646] wlan0: authenticated
    [257683.375684] wlan0: associate with AP 00:30:bd:f7:12:d5 (try 1)
    [257683.377932] wlan0: RX AssocResp from 00:30:bd:f7:12:d5 (capab=0x411 status=0 aid=4)
    [257683.377940] wlan0: associated
    
    (qemu) agent_viewfile /proc/meminfo
    MemTotal:        3985488 kB
    MemFree:          400524 kB
    Buffers:          220556 kB
    Cached:          2073160 kB
    SwapCached:            0 kB
    ...
    Hugepagesize:       2048 kB
    DirectMap4k:        8896 kB
    DirectMap2M:     4110336 kB

KNOWN ISSUES/PLANS:
 - the client socket that qemu connects to for sending RPCs is a hardcoded filepath. This is unacceptable, as the socket is channel/process-specific and things will break when multiple guests are started.
 - capability negotiation will be needed to handle version/architecture differences.
 - proper channel negotiation is critical to avoiding hung monitors and such when a guest reboots or the guest agent is stopped for whatever reason. Additionally, a timeout may need to be imposed on the amount of time the http read handler can block the monitor.
 - additional host-to-guest RPCs, as well as asynchronous notifications via guest-to-host RPCs for events such as shutdown/reboot/agent up/agent down.
 - switch all RPC communication over to non-blocking/asynchronous read/write callbacks to avoid deadlocks (a rough sketch follows below).
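
As a rough idea of how the last item could look, writes could mirror the existing http_read_handler() pattern: buffer the serialized request and register a write callback via vp_set_fd_handler() (assuming it follows qemu_set_fd_handler()'s (fd, read_cb, write_cb, opaque) signature, as the current calls suggest). A minimal sketch, with error handling elided:

    typedef struct HttpWriteState {
        int fd;
        char *buf;                 /* serialized http header + xmlrpc body */
        int len;
        int pos;
        HttpReadState *read_state; /* response state, set up by the caller */
    } HttpWriteState;

    static void http_write_handler(void *opaque)
    {
        HttpWriteState *s = opaque;
        int ret;

        /* write as much as the socket accepts without blocking */
        while (s->pos < s->len) {
            ret = write(s->fd, s->buf + s->pos, s->len - s->pos);
            if (ret == -1) {
                if (errno == EAGAIN || errno == EWOULDBLOCK || errno == EINTR) {
                    return;        /* resume when the fd is writable again */
                }
                LOG("write error: %s", strerror(errno));
                break;             /* error path elided in this sketch */
            }
            s->pos += ret;
        }

        /* request sent: hand the fd over to the existing response reader,
         * mirroring what va_transport_rpc_call() sets up today */
        vp_set_fd_handler(s->fd, http_read_handler, NULL, s->read_state);
        qemu_free(s->buf);
        qemu_free(s);
    }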

 Makefile            |    2 +-
 Makefile.target     |    2 +-
 configure           |   25 ++++
 hmp-commands.hx     |   32 +++++
 monitor.c           |    1 +
 qemu-vp.c           |  119 +++++++++++++++--
 qerror.c            |    4 +
 qerror.h            |    3 +
 qmp-commands.hx     |   68 +++++++++
 virtagent-common.c  |  381 +++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent-common.h  |   63 +++++++++
 virtagent-daemon.c  |  216 +++++++++++++++++++++++++++++
 virtagent-daemon.h  |   21 +++
 virtagent.c         |  298 ++++++++++++++++++++++++++++++++++++++++
 virtagent.h         |   32 +++++
 virtproxy-builtin.c |   30 ++++
 16 files changed, 1286 insertions(+), 11 deletions(-)


* [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
@ 2010-10-22 18:45 ` Michael Roth
  2010-10-25 21:39   ` Anthony Liguori
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 02/10] virtagent: base definitions for host/guest RPC daemon Michael Roth
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Common code for sending/receiving RPCs via http over the virtproxy channel.
Eventually these will all be switched to asynchronous handlers to avoid
deadlocks between qemu and the guest. For now we can usually get away with
just doing asynchronous reads for http/RPC responses if we don't send
large RPC requests to the guest (since these will likely get buffered
rather than block or cause spinning on writes).

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 virtagent-common.c |  381 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent-common.h |   63 +++++++++
 2 files changed, 444 insertions(+), 0 deletions(-)
 create mode 100644 virtagent-common.c
 create mode 100644 virtagent-common.h

diff --git a/virtagent-common.c b/virtagent-common.c
new file mode 100644
index 0000000..51016a6
--- /dev/null
+++ b/virtagent-common.c
@@ -0,0 +1,381 @@
+/*
+ * virt-agent - common host/guest RPC functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <ctype.h>
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/client.h>
+#include <xmlrpc-c/server.h>
+#include "virtagent-common.h"
+
+static int write_all(int fd, const void *buf, int len1)
+{
+    int ret, len;
+
+    len = len1;
+    while (len > 0) {
+        ret = write(fd, buf, len);
+        if (ret < 0) {
+            if (errno != EINTR && errno != EAGAIN) {
+                LOG("write() failed");
+                return -1;
+            }
+        } else if (ret == 0) {
+            break;
+        } else {
+            buf += ret;
+            len -= ret;
+        }
+    }
+    return len1 - len;
+}
+
+static int read_all(int fd, void *buf, int len)
+{
+    int ret, remaining;
+
+    remaining = len;
+    while (remaining > 0) {
+        ret = read(fd, buf, remaining);
+        if (ret < 0) {
+            if (errno != EINTR && errno != EAGAIN) {
+                LOG("read failed");
+                return -1;
+            }
+        } else if (ret == 0) {
+            break;
+        } else {
+            buf += ret;
+            remaining -= ret;
+        }
+    }
+    return len - remaining;
+}
+
+static int read_line(int fd, void *buf, int len)
+{
+    int ret, remaining;
+    remaining = len;
+    while (remaining > 0) {
+        ret = read(fd, buf, 1);
+        if (ret < 0) {
+            if (errno != EINTR && errno != EAGAIN) {
+                LOG("read failed");
+                return -1;
+            }
+        } else if (ret == 0) {
+            break;
+        } else {
+            remaining--;
+            buf++;
+            if (*((char *)buf-1) == '\n')
+                break;
+        }
+    }
+    memset(buf, 0, remaining);
+    return len - remaining;
+}
+
+typedef struct va_http {
+    char *preamble;
+    int content_length;
+    int content_read;
+    char *content;
+} va_http;
+
+typedef struct HttpReadState {
+    int fd;
+    char hdr_buf[4096];
+    int hdr_pos;
+    va_http http;
+    enum {
+        IN_RESP_HEADER = 0,
+        IN_RESP_BODY,
+    } state;
+    RPCRequest *rpc_data;
+} HttpReadState;
+
+static int end_of_header(char *buf, int end_pos) {
+    return !strncmp(buf+(end_pos-2), "\n\r\n", 3);
+}
+
+static void parse_hdr(va_http *http, char *buf, int len) {
+    int i, line_pos=0;
+    char line_buf[4096];
+
+    for (i=0; i<len; ++i) {
+        if (buf[i] != '\n') {
+            /* read line */
+            line_buf[line_pos++] = buf[i];
+        } else {
+            /* process line */
+            if (strncasecmp(line_buf, "content-length: ", 16) == 0) {
+                http->content_length = atoi(&line_buf[16]);
+                return;
+            }
+            line_pos = 0;
+        }
+    }
+}
+
+static void http_read_handler(void *opaque) {
+    HttpReadState *s = opaque;
+    int ret;
+
+    TRACE("called with opaque: %p", opaque);
+    if (s->state == IN_RESP_HEADER) {
+        while((ret = read(s->fd, s->hdr_buf + s->hdr_pos, 1)) > 0) {
+            //TRACE("buf: %c", s->hdr_buf[0]);
+            s->hdr_pos += ret;
+            if (end_of_header(s->hdr_buf, s->hdr_pos - 1)) {
+                s->state = IN_RESP_BODY;
+                break;
+            }
+        }
+        if (ret == -1) {
+            if (errno == EAGAIN || errno == EWOULDBLOCK) {
+                return;
+            } else {
+                LOG("error reading connection: %s", strerror(errno));
+                goto out_bad;
+            }
+        } else if (ret == 0) {
+            LOG("connection closed unexpectedly");
+            goto out_bad;
+        } else {
+            s->http.content_length = -1;
+            parse_hdr(&s->http, s->hdr_buf, s->hdr_pos);
+            if (s->http.content_length == -1) {
+                LOG("malformed http header");
+                goto out_bad;
+            }
+            s->http.content = qemu_mallocz(s->http.content_length);
+            goto do_resp_body;
+        }
+    } else if (s->state == IN_RESP_BODY) {
+do_resp_body:
+        while(s->http.content_read < s->http.content_length) {
+            ret = read(s->fd, s->http.content + s->http.content_read, 4096);
+            if (ret == -1) {
+                if (errno == EAGAIN || errno == EWOULDBLOCK) {
+                    return;
+                } else {
+                    LOG("error reading connection: %s", strerror(errno));
+                    qemu_free(s->http.content);
+                    goto out;
+                }
+            } else if (ret == 0) {
+                LOG("connection closed unexpectedly, expected %d more bytes",
+                    s->http.content_length - s->http.content_read);
+                goto out_bad;
+            }
+            s->http.content_read += ret;
+        }
+        s->rpc_data->resp_xml = s->http.content;
+        s->rpc_data->resp_xml_len = s->http.content_length;
+        goto out;
+    }
+out_bad:
+    s->rpc_data->resp_xml = NULL;
+out:
+    vp_set_fd_handler(s->fd, NULL, NULL, NULL);
+    s->rpc_data->cb(s->rpc_data);
+    qemu_free(s);
+}
+
+static int write_hdr(int fd, const va_http *http, bool request)
+{
+    int hdr_len;
+    char *hdr;
+    const char *preamble;
+
+    TRACE("called");
+    /* essentially ignored in the context of virtagent, but might as well */
+    preamble = request ? "POST /RPC2 HTTP/1.1" : "HTTP/1.1 200 OK";
+
+    hdr_len = asprintf(&hdr,
+                      "%s" EOL
+                      "Content-Type: text/xml" EOL
+                      "Content-Length: %u" EOL EOL,
+                      preamble,
+                      http->content_length);
+    write_all(fd, hdr, hdr_len);
+
+    return 0;
+}
+
+static int read_hdr(int fd, va_http *http)
+{
+    bool first = true;
+    char line[4096];
+    int ret;
+
+    http->preamble = NULL;
+    http->content_length = -1;
+
+    do {
+        ret = read_line(fd, &line, sizeof(line));
+        if (ret <= 0) {
+            LOG("error reading from connection");
+            ret = -EPIPE;
+            return ret;
+        }
+        TRACE("read line: %s", line);
+        if (first) {
+            http->preamble = line;
+            first = false;
+        }
+        if (strncasecmp(line, "content-length: ", 16) == 0) {
+            TRACE("hi");
+            http->content_length = atoi(&line[16]);
+        }
+    } while (strcmp(line, "\r\n") && strcmp(line, "\n"));
+
+    if (http->preamble == NULL) {
+        LOG("header missing preamble");
+        return -1;
+    } else if (http->content_length == -1) {
+        LOG("missing content length");
+        return -1;
+    }
+
+    return 0;
+}
+
+static int send_http(int fd, const va_http *http, bool request)
+{
+    int ret;
+
+    TRACE("called");
+    ret = write_hdr(fd, http, request);
+    if (ret != 0) {
+        LOG("error sending header");
+        return -1;
+    }
+
+    TRACE("sending body");
+    ret = write_all(fd, http->content, http->content_length);
+    TRACE("done sending body");
+    if (ret != http->content_length) {
+        LOG("error sending content");
+        return -1;
+    }
+
+    return 0;
+}
+
+static int send_http_request(int fd, const va_http *http)
+{
+    return send_http(fd, http, true);
+}
+
+static int send_http_response(int fd, const va_http *http)
+{
+    return send_http(fd, http, false);
+}
+
+static int get_http(int fd, va_http *http)
+{
+    int ret;
+
+    ret = read_hdr(fd, http);
+    if (ret != 0) {
+        LOG("error reading header");
+        return -1;
+    }
+
+    http->content = qemu_malloc(http->content_length);
+    ret = read_all(fd, http->content, http->content_length);
+    if (ret != http->content_length) {
+        LOG("error reading content");
+        return -1;
+    }
+
+    TRACE("http:\n%s", http->content);
+
+    return 0;
+}
+
+/* send http-encoded xmlrpc response */
+int va_send_rpc_response(int fd, const xmlrpc_mem_block *resp_xml)
+{
+    int ret;
+    va_http http_resp;
+
+    http_resp.content = XMLRPC_MEMBLOCK_CONTENTS(char, resp_xml);
+    http_resp.content_length = XMLRPC_MEMBLOCK_SIZE(char, resp_xml);
+
+    TRACE("sending rpc response");
+    ret = send_http_response(fd, &http_resp);
+    if (ret != 0) {
+        LOG("failed to send rpc response");
+        return -1;
+    }
+    TRACE("done sending rpc response");
+
+    return 0;
+}
+
+/* read xmlrpc payload from http request */
+int va_get_rpc_request(int fd, char **req_xml, int *req_len)
+{
+    int ret;
+    va_http http_req;
+
+    TRACE("getting rpc request");
+    ret = get_http(fd, &http_req);
+    if (ret != 0) {
+        LOG("failed to get RPC request");
+        return -1;
+    }
+
+    *req_xml = http_req.content;
+    *req_len = http_req.content_length;
+    TRACE("done getting rpc request");
+
+    return 0;
+}
+
+/* send an RPC request and retrieve the response */
+int va_transport_rpc_call(int fd, RPCRequest *rpc_data)
+{
+    int ret;
+    struct va_http http_req;
+    HttpReadState *read_state;
+
+    TRACE("called");
+    http_req.content = XMLRPC_MEMBLOCK_CONTENTS(char, rpc_data->req_xml);
+    http_req.content_length = XMLRPC_MEMBLOCK_SIZE(char, rpc_data->req_xml);
+
+    /* TODO: this should be done asynchronously */
+    TRACE("sending rpc request");
+    ret = send_http_request(fd, &http_req);
+    if (ret != 0) {
+        LOG("failed to send rpc request");
+        return -1;
+    }
+    TRACE("done sending rpc request");
+
+    TRACE("setting up rpc response handler");
+    read_state = qemu_mallocz(sizeof(HttpReadState));
+    read_state->fd = fd;
+    read_state->rpc_data = rpc_data;
+    read_state->state = IN_RESP_HEADER;
+    vp_set_fd_handler(fd, http_read_handler, NULL, read_state);
+
+    return 0;
+}
diff --git a/virtagent-common.h b/virtagent-common.h
new file mode 100644
index 0000000..0cedeb4
--- /dev/null
+++ b/virtagent-common.h
@@ -0,0 +1,63 @@
+/*
+ * virt-agent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#ifndef VIRTAGENT_COMMON_H
+#define VIRTAGENT_COMMON_H
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/client.h>
+#include <xmlrpc-c/server.h>
+#include "qemu-common.h"
+#include "monitor.h"
+#include "virtproxy.h"
+
+#define DEBUG_VA
+
+#ifdef DEBUG_VA
+#define TRACE(msg, ...) do { \
+    fprintf(stderr, "%s:%s():L%d: " msg "\n", \
+            __FILE__, __FUNCTION__, __LINE__, ## __VA_ARGS__); \
+} while(0)
+#else
+#define TRACE(msg, ...) \
+    do { } while (0)
+#endif
+
+#define LOG(msg, ...) do { \
+    fprintf(stderr, "%s:%s(): " msg "\n", \
+            __FILE__, __FUNCTION__, ## __VA_ARGS__); \
+} while(0)
+
+#define TADDR "127.0.0.1:8080"
+#define URL "http://localhost:8080/RPC2"
+#define NAME "QEMU virt-agent host client"
+#define VERSION "1.0"
+#define EOL "\r\n"
+
+typedef void (RPCRequestCallback)(void *rpc_data);
+typedef struct RPCRequest {
+    RPCRequestCallback *cb;
+    xmlrpc_mem_block *req_xml;
+    char *resp_xml;
+    int resp_xml_len;
+    Monitor *mon;
+    MonitorCompletion *mon_cb;
+    void *mon_data;
+} RPCRequest;
+
+int va_send_rpc_response(int fd, const xmlrpc_mem_block *resp_xml);
+int va_get_rpc_request(int fd, char **req_xml, int *req_len);
+int va_transport_rpc_call(int fd, RPCRequest *rpc_data);
+#endif /* VIRTAGENT_COMMON_H */
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 02/10] virtagent: base definitions for host/guest RPC daemon
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs Michael Roth
@ 2010-10-22 18:45 ` Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 03/10] virtagent: qemu-vp, integrate virtagent daemon Michael Roth
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Basic skeleton code for the RPC daemon loop. This is shared by both the
guest-side RPC server and the host-side one (the RPCs advertised by each
are defined in guest/host-specific arrays).

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 virtagent-daemon.c |  118 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent-daemon.h |   20 +++++++++
 2 files changed, 138 insertions(+), 0 deletions(-)
 create mode 100644 virtagent-daemon.c
 create mode 100644 virtagent-daemon.h

diff --git a/virtagent-daemon.c b/virtagent-daemon.c
new file mode 100644
index 0000000..998b025
--- /dev/null
+++ b/virtagent-daemon.c
@@ -0,0 +1,118 @@
+/*
+ * virt-agent - host/guest RPC daemon functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include <stdlib.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <xmlrpc-c/base.h>
+#include <xmlrpc-c/server.h>
+#include <xmlrpc-c/server_abyss.h>
+#include "qemu_socket.h"
+#include "virtagent-daemon.h"
+#include "virtagent-common.h"
+
+static int va_accept(int listen_fd) {
+    struct sockaddr_in saddr;
+    struct sockaddr *addr;
+    socklen_t len;
+    int fd;
+
+    while (1) {
+        len = sizeof(saddr);
+        addr = (struct sockaddr *)&saddr;
+        fd = qemu_accept(listen_fd, addr, &len);
+        if (fd < 0 && errno != EINTR) {
+            LOG("accept() failed");
+            break;
+        } else if (fd >= 0) {
+            TRACE("accepted connection");
+            break;
+        }
+    }
+    return fd;
+}
+
+typedef struct RPCFunction {
+    xmlrpc_value *(*func)(xmlrpc_env *env, xmlrpc_value *param, void *unused);
+    const char *func_name;
+} RPCFunction;
+
+static RPCFunction guest_functions[] = {
+    { NULL, NULL }
+};
+static RPCFunction host_functions[] = {
+    { NULL, NULL }
+};
+
+static void va_register_functions(xmlrpc_env *env, xmlrpc_registry *registry,
+                                  RPCFunction *list)
+{
+    int i;
+    for (i = 0; list[i].func != NULL; ++i) {
+        TRACE("adding func: %s", list[i].func_name);
+        xmlrpc_registry_add_method(env, registry, NULL, list[i].func_name,
+                                   list[i].func, NULL);
+    }
+}
+
+int va_server_loop(int listen_fd, bool is_host)
+{
+    xmlrpc_registry *registryP;
+    xmlrpc_env env;
+    int ret, fd;
+    char *rpc_request;
+    int rpc_request_len;
+    xmlrpc_mem_block *rpc_response;
+    RPCFunction *func_list = is_host ? host_functions : guest_functions;
+
+    xmlrpc_env_init(&env);
+    registryP = xmlrpc_registry_new(&env);
+    va_register_functions(&env, registryP, func_list);
+
+    while (1) {
+        TRACE("waiting for connection from RPC client");
+        fd = va_accept(listen_fd);
+        if (fd < 0) {
+            TRACE("connection error: %s", strerror(errno));
+            continue;
+        }
+        TRACE("RPC client connected, fetching RPC...");
+        ret = va_get_rpc_request(fd, &rpc_request, &rpc_request_len);
+        if (ret != 0 || rpc_request == NULL) {
+            LOG("error retrieving rpc request");
+            goto out;
+        }
+        TRACE("handling RPC request");
+        rpc_response = xmlrpc_registry_process_call(&env, registryP, NULL,
+                                                    rpc_request,
+                                                    rpc_request_len);
+        if (rpc_response == NULL) {
+            LOG("error handling rpc request");
+            goto out;
+        }
+
+        qemu_free(rpc_request);
+        TRACE("sending RPC response");
+        ret = va_send_rpc_response(fd, rpc_response);
+        if (ret != 0) {
+            LOG("error sending rpc response");
+            goto out;
+        }
+        TRACE("RPC completed");
+        XMLRPC_MEMBLOCK_FREE(char, rpc_response);
+out:
+        closesocket(fd);
+    }
+
+    return 0;
+}
diff --git a/virtagent-daemon.h b/virtagent-daemon.h
new file mode 100644
index 0000000..bb197d0
--- /dev/null
+++ b/virtagent-daemon.h
@@ -0,0 +1,20 @@
+/*
+ * virt-agent - host/guest RPC daemon functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#define GUEST_AGENT_SERVICE_ID "virtagent"
+#define GUEST_AGENT_PATH "/tmp/virtagent-guest.sock"
+#define HOST_AGENT_SERVICE_ID "virtagent-host"
+#define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
+#define VA_GETFILE_MAX 1 << 30
+#define VA_FILEBUF_LEN 16384
+
+int va_server_loop(int listen_fd, bool is_host);
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 03/10] virtagent: qemu-vp, integrate virtagent daemon
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 02/10] virtagent: base definitions for host/guest RPC daemon Michael Roth
@ 2010-10-22 18:45 ` Michael Roth
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 04/10] virtagent: base RPC client definitions Michael Roth
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

This allows the host or guest RPC server to be forked off from the
qemu-vp instance handling the communication between the host and guest.
Eventually this will be rolled into a virtagent init that can be used by
both qemu-vp and qemu (in anticipation of integrating qemu-vp/virtproxy
as a chardev).
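
Usage mirrors the cover letter: the guest-side invocation below is taken from the example there, and the host-side form is shown by analogy (it may need additional forwarding options depending on the setup):

    # guest: fork the guest RPC server off the guest-side proxy
    ./qemu-vp -c virtserial-open:/dev/virtio-ports/test2:- -g
    # host: fork the host RPC server off the host-side proxy
    ./qemu-vp -c unix-connect:/tmp/test2-virtio-serial.sock:- -p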

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 qemu-vp.c |  119 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 110 insertions(+), 9 deletions(-)

diff --git a/qemu-vp.c b/qemu-vp.c
index 0cc0e67..dba8c9a 100644
--- a/qemu-vp.c
+++ b/qemu-vp.c
@@ -37,6 +37,7 @@
 #include "qemu-option.h"
 #include "qemu_socket.h"
 #include "virtproxy.h"
+#include "virtagent-daemon.h"
 
 static bool verbose_enabled = 0;
 #define DEBUG_ENABLED
@@ -246,14 +247,18 @@ static void usage(const char *cmd)
 "[-o <oforward_opts> ...]\n"
 "QEMU virt-proxy communication channel\n"
 "\n"
-"  -c, --channel    channel options of the form:\n"
-"                   <method>:<addr>:<port>[:channel_id]\n"
-"  -o, --oforward   oforward options of the form:\n"
-"                   <service_id>:<addr>:<port>[:channel_id]\n"
-"  -i, --iforward   iforward options of the form:\n"
-"                   <service_id>:<addr>:<port>[:channel_id]\n"
-"  -v, --verbose    display extra debugging information\n"
-"  -h, --help       display this help and exit\n"
+"  -c, --channel     channel options of the form:\n"
+"                    <method>:<addr>:<port>[:channel_id]\n"
+"  -p, --host-agent  host rpc server, options of the form:\n"
+"                    [channel_id]\n"
+"  -g, --guest-agent guest rpc server, options of the form:\n"
+"                    [channel_id]\n"
+"  -o, --oforward    oforward options of the form:\n"
+"                    <service_id>:<addr>:<port>[:channel_id]\n"
+"  -i, --iforward    iforward options of the form:\n"
+"                    <service_id>:<addr>:<port>[:channel_id]\n"
+"  -v, --verbose     display extra debugging information\n"
+"  -h, --help        display this help and exit\n"
 "\n"
 "  channels are used to establish a data connection between 2 end-points in\n"
 "  the host or the guest (connection method specified by <method>).\n"
@@ -527,13 +532,48 @@ static int init_iforwards(void) {
     return 0;
 }
 
+static int init_agent(const VPData *agent_iforward, bool is_host) {
+    QemuOpts *opts = agent_iforward->opts;
+    int listen_fd, ret;
+
+    INFO("initializing agent...");
+    if (verbose_enabled) {
+        qemu_opts_print(opts, NULL);
+    }
+
+    /* create unix socket pair that agent http/rpc daemon will listen on */
+    listen_fd = unix_listen_opts(agent_iforward->opts);
+    if (listen_fd < 0) {
+        return -1;
+    }
+
+    /* fork off the agent RPC daemon */
+    ret = fork();
+
+    if (ret == -1) {
+        warn("fork() failed");
+        goto err;
+    } else if (ret == 0) {
+        /* child */
+        return va_server_loop(listen_fd, is_host);
+    }
+
+    return 0;
+
+err:
+    closesocket(listen_fd);
+    return -1;
+}
+
 int main(int argc, char **argv)
 {
-    const char *sopt = "hVvi:o:c:";
+    const char *sopt = "hVvi:o:c:g::p::";
     struct option lopt[] = {
         { "help", 0, NULL, 'h' },
         { "version", 0, NULL, 'V' },
         { "verbose", 0, NULL, 'v' },
+        { "host-agent", 0, NULL, 'p' },
+        { "guest-agent", 0, NULL, 'g' },
         { "iforward", 0, NULL, 'i' },
         { "oforward", 0, NULL, 'o' },
         { "channel", 0, NULL, 'c' },
@@ -543,10 +583,13 @@ int main(int argc, char **argv)
     QTAILQ_INIT(&iforwards);
     QTAILQ_INIT(&oforwards);
     QTAILQ_INIT(&channels);
+    VPData *guest_agent_iforward = NULL;
+    VPData *host_agent_iforward = NULL;
 
     while ((ch = getopt_long(argc, argv, sopt, lopt, &opt_ind)) != -1) {
         QemuOpts *opts;
         VPData *data;
+        char optarg_tmp[VP_ARG_LEN];
         switch (ch) {
         case 'i':
             opts = qemu_opts_create(&vp_opts, NULL, 0);
@@ -578,6 +621,50 @@ int main(int argc, char **argv)
             data->opts = opts;
             QTAILQ_INSERT_TAIL(&channels, data, next);
             break;
+        case 'g':
+            /* create pre-baked iforward for guest agent */
+            if (guest_agent_iforward) {
+                errx(EXIT_FAILURE, "only one --guest-agent argument allowed");
+            }
+            opts = qemu_opts_create(&vp_opts, NULL, 0);
+            if (optarg == 0) {
+                sprintf(optarg_tmp, "%s:%s:-", GUEST_AGENT_SERVICE_ID,
+                                     GUEST_AGENT_PATH);
+            } else {
+                sprintf(optarg_tmp, "%s:%s:-:%d", GUEST_AGENT_SERVICE_ID,
+                                     GUEST_AGENT_PATH, atoi(optarg));
+            }
+            ret = vp_parse(opts, optarg_tmp, 0);
+            if (ret) {
+                errx(EXIT_FAILURE, "error parsing arg: %s", optarg);
+            }
+            data = qemu_mallocz(sizeof(VPData));
+            data->opts = opts;
+            QTAILQ_INSERT_TAIL(&iforwards, data, next);
+            guest_agent_iforward = data;
+            break;
+        case 'p':
+            /* create pre-baked iforward for host agent */
+            if (host_agent_iforward) {
+                errx(EXIT_FAILURE, "only one --host-agent argument allowed");
+            }
+            opts = qemu_opts_create(&vp_opts, NULL, 0);
+            if (optarg == 0) {
+                sprintf(optarg_tmp, "%s:%s:-", HOST_AGENT_SERVICE_ID,
+                                     HOST_AGENT_PATH);
+            } else {
+                sprintf(optarg_tmp, "%s:%s:-:%d", HOST_AGENT_SERVICE_ID,
+                                     HOST_AGENT_PATH, atoi(optarg));
+            }
+            ret = vp_parse(opts, optarg_tmp, 0);
+            if (ret) {
+                errx(EXIT_FAILURE, "error parsing arg: %s", optarg);
+            }
+            data = qemu_mallocz(sizeof(VPData));
+            data->opts = opts;
+            QTAILQ_INSERT_TAIL(&iforwards, data, next);
+            host_agent_iforward = data;
+            break;
         case 'v':
             verbose_enabled = 1;
             break;
@@ -607,6 +694,20 @@ int main(int argc, char **argv)
              "error initializing service mappings for incoming connections");
     }
 
+    if (guest_agent_iforward) {
+        ret = init_agent(guest_agent_iforward, false);
+        if (ret) {
+            errx(EXIT_FAILURE,
+                 "error initializing guest agent");
+        }
+    } else if (host_agent_iforward) {
+        ret = init_agent(host_agent_iforward, true);
+        if (ret) {
+            errx(EXIT_FAILURE,
+                 "error initializing host agent");
+        }
+    }
+
     /* main i/o loop */
     for (;;) {
         DEBUG("entering main_loop_wait()");
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 04/10] virtagent: base RPC client definitions
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (2 preceding siblings ...)
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 03/10] virtagent: qemu-vp, integrate virtagent daemon Michael Roth
@ 2010-10-22 18:45 ` Michael Roth
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 05/10] virtagent: add getfile RPC Michael Roth
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:45 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Base skeleton and helpers for executing RPC commands. Monitor commands
result in a connect() being issued to the virtagent service socket,
which is then transported to the listening RPC server in the guest via
the virtproxy layer; RPC requests are then sent/received via http over
the resulting connection.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 monitor.c   |    1 +
 qerror.c    |    4 ++
 qerror.h    |    3 ++
 virtagent.c |   96 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent.h |   25 +++++++++++++++
 5 files changed, 129 insertions(+), 0 deletions(-)
 create mode 100644 virtagent.c
 create mode 100644 virtagent.h

diff --git a/monitor.c b/monitor.c
index 260cc02..6c99206 100644
--- a/monitor.c
+++ b/monitor.c
@@ -42,6 +42,7 @@
 #include "audio/audio.h"
 #include "disas.h"
 #include "balloon.h"
+#include "virtagent.h"
 #include "qemu-timer.h"
 #include "migration.h"
 #include "kvm.h"
diff --git a/qerror.c b/qerror.c
index ac2cdaf..2f111a9 100644
--- a/qerror.c
+++ b/qerror.c
@@ -200,6 +200,10 @@ static const QErrorStringTable qerror_table[] = {
         .error_fmt = QERR_VNC_SERVER_FAILED,
         .desc      = "Could not start VNC server on %(target)",
     },
+    {
+        .error_fmt = QERR_RPC_FAILED,
+        .desc      = "An RPC error has occurred",
+    },
     {}
 };
 
diff --git a/qerror.h b/qerror.h
index 943a24b..43cce4a 100644
--- a/qerror.h
+++ b/qerror.h
@@ -165,4 +165,7 @@ QError *qobject_to_qerror(const QObject *obj);
 #define QERR_VNC_SERVER_FAILED \
     "{ 'class': 'VNCServerFailed', 'data': { 'target': %s } }"
 
+#define QERR_RPC_FAILED \
+    "{ 'class': 'RPCFailed', 'data': { 'code': %i, 'message': %s } }"
+
 #endif /* QERROR_H */
diff --git a/virtagent.c b/virtagent.c
new file mode 100644
index 0000000..2234bef
--- /dev/null
+++ b/virtagent.c
@@ -0,0 +1,96 @@
+/*
+ * virt-agent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "sysemu.h"
+#include "monitor.h"
+#include "qjson.h"
+#include "qint.h"
+#include "cpu-common.h"
+#include "kvm.h"
+#include "trace.h"
+#include "qemu_socket.h"
+#include "xmlrpc.h"
+#include "xmlrpc_client.h"
+#include "virtagent-daemon.h"
+#include "virtagent-common.h"
+#include "virtagent.h"
+
+static int rpc_has_error(xmlrpc_env *env)
+{
+    if (env->fault_occurred) {
+        LOG("An RPC error has occurred (%i): %s\n", env->fault_code, env->fault_string);
+        //qerror_report(QERR_RPC_FAILED, env->fault_code, env->fault_string);
+        return -1;
+    }
+    return 0;
+}
+
+/*
+ * Get a connected socket that can be used to make an RPC call
+ * This interface will eventually return the connected virtproxy socket for the
+ * virt-agent channel
+ */
+static int get_transport_fd(void)
+{
+    /* TODO: eventually this will need a path that is unique to other
+     * instances of qemu-vp/qemu. for the integrated qemu-vp we should
+     * explore the possiblity of not requiring a unix socket under the
+     * covers, as well as having client init code set up the oforward
+     * for the service rather than qemu-vp
+     */
+    int ret;
+    int fd = unix_connect(GUEST_AGENT_PATH_CLIENT);
+    if (fd < 0) {
+        LOG("failed to connect to virtagent service");
+    }
+    ret = fcntl(fd, F_GETFL);
+    ret = fcntl(fd, F_SETFL, ret | O_NONBLOCK);
+    return fd;
+}
+
+static int rpc_execute(xmlrpc_env *const env, const char *function,
+                       xmlrpc_value *params, RPCRequest *rpc_data)
+{
+    xmlrpc_mem_block *call_xml;
+    int fd, ret;
+
+    fd = get_transport_fd();
+    if (fd < 0) {
+        LOG("invalid fd");
+        ret = -1;
+        goto out;
+    }
+
+    call_xml = XMLRPC_MEMBLOCK_NEW(char, env, 0);
+    xmlrpc_serialize_call(env, call_xml, function, params);
+    if (rpc_has_error(env)) {
+        ret = -EREMOTE;
+        goto out_callxml;
+    }
+
+    rpc_data->req_xml = call_xml;
+    ret = va_transport_rpc_call(fd, rpc_data);
+    if (ret != 0) {
+        ret = -1;
+        goto out_callxml;
+    } else {
+        ret = 0;
+        goto out;
+    }
+
+out_callxml:
+    XMLRPC_MEMBLOCK_FREE(char, call_xml);
+out:
+    return ret;
+}
diff --git a/virtagent.h b/virtagent.h
new file mode 100644
index 0000000..abdb32a
--- /dev/null
+++ b/virtagent.h
@@ -0,0 +1,25 @@
+/*
+ * virt-agent - host/guest RPC client functions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef VIRTAGENT_H
+#define VIRTAGENT_H
+
+#include "monitor.h"
+#include "virtagent-common.h"
+
+#define GUEST_AGENT_PATH_CLIENT "/tmp/virtagent-guest-client.sock"
+#define HOST_AGENT_PATH_CLIENT "/tmp/virtagent-host-client.sock"
+#define VA_MAX_CHUNK_SIZE 4096 /* max bytes at a time for get/send file */
+
+#endif /* VIRTAGENT_H */
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 05/10] virtagent: add getfile RPC
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (3 preceding siblings ...)
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 04/10] virtagent: base RPC client definitions Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 06/10] virtagent: add agent_viewfile command Michael Roth
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Add an RPC to retrieve a guest file. A size limit of some sort will
eventually be needed; otherwise we can block the monitor for arbitrarily
long periods of time. This interface is intended for smaller reads, like
peeking at logs and /proc and such.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 virtagent-daemon.c |   53 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 53 insertions(+), 0 deletions(-)

diff --git a/virtagent-daemon.c b/virtagent-daemon.c
index 998b025..fad0f64 100644
--- a/virtagent-daemon.c
+++ b/virtagent-daemon.c
@@ -21,6 +21,57 @@
 #include "virtagent-daemon.h"
 #include "virtagent-common.h"
 
+/* RPC functions common to guest/host daemons */
+
+static xmlrpc_value *getfile(xmlrpc_env *env,
+                                xmlrpc_value *param,
+                                void *user_data)
+{
+    const char *path;
+    char *file_contents = NULL;
+    char buf[VA_FILEBUF_LEN];
+    int fd, ret, count = 0;
+    xmlrpc_value *result = NULL;
+
+    /* parse argument array */
+    xmlrpc_decompose_value(env, param, "(s)", &path);
+    if (env->fault_occurred) {
+        return NULL;
+    }
+
+    fd = open(path, O_RDONLY);
+    if (fd == -1) {
+        LOG("open failed: %s", strerror(errno));
+        xmlrpc_faultf(env, "open failed: %s", strerror(errno));
+        return NULL;
+    }
+
+    while ((ret = read(fd, buf, VA_FILEBUF_LEN)) > 0) {
+        file_contents = qemu_realloc(file_contents, count + VA_FILEBUF_LEN);
+        memcpy(file_contents + count, buf, ret);
+        count += ret;
+        if (count > VA_GETFILE_MAX) {
+            xmlrpc_faultf(env, "max file size (%d bytes) exceeded",
+                          VA_GETFILE_MAX);
+            goto EXIT_CLOSE_BAD;
+        }
+    }
+    if (ret == -1) {
+        LOG("read failed: %s", strerror(errno));
+        xmlrpc_faultf(env, "read failed: %s", strerror(errno));
+        goto EXIT_CLOSE_BAD;
+    }
+
+    result = xmlrpc_build_value(env, "6", file_contents, count);
+
+EXIT_CLOSE_BAD:
+    if (file_contents) {
+        qemu_free(file_contents);
+    }
+    close(fd);
+    return result;
+}
+
 static int va_accept(int listen_fd) {
     struct sockaddr_in saddr;
     struct sockaddr *addr;
@@ -48,6 +99,8 @@ typedef struct RPCFunction {
 } RPCFunction;
 
 static RPCFunction guest_functions[] = {
+    { .func = getfile,
+      .func_name = "getfile" },
     { NULL, NULL }
 };
 static RPCFunction host_functions[] = {
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 06/10] virtagent: add agent_viewfile command
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (4 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 05/10] virtagent: add getfile RPC Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC Michael Roth
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Utilize the getfile RPC to provide a means to view text files in the
guest. Getfile can handle binary files as well, but we don't advertise
that here due to the special handling required to store the contents and
provide them back to the user (base64-encoding them, for instance). Hence
the potentially confusing "viewfile" as opposed to "getfile".

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 hmp-commands.hx |   16 +++++++++
 qmp-commands.hx |   33 ++++++++++++++++++
 virtagent.c     |  102 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent.h     |    4 ++
 4 files changed, 155 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index 81999aa..e9a7f4a 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1211,6 +1211,22 @@ show available trace events and their state
 ETEXI
 #endif
 
+    {
+        .name       = "agent_viewfile",
+        .args_type  = "filepath:s",
+        .params     = "filepath",
+        .help       = "Echo a file from the guest filesystem",
+        .user_print = do_agent_viewfile_print,
+        .mhandler.cmd_async = do_agent_viewfile,
+        .flags      = MONITOR_CMD_ASYNC,
+    },
+
+STEXI
+@item agent_viewfile @var{filepath}
+@findex agent_viewfile
+Echo the file identified by @var{filepath} on the guest filesystem
+ETEXI
+
 STEXI
 @end table
 ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index 793cf1c..efa2137 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -738,6 +738,39 @@ Example:
 EQMP
 
     {
+        .name       = "agent_viewfile",
+        .args_type  = "filepath:s",
+        .params     = "filepath",
+        .help       = "Echo a file from the guest filesystem",
+        .user_print = monitor_user_noop,
+        .mhandler.cmd_async = do_agent_viewfile,
+        .flags      = MONITOR_CMD_ASYNC,
+    },
+
+STEXI
+@item agent_viewfile @var{filepath}
+@findex agent_viewfile
+Echo the file identified by @var{filepath} on the guest filesystem
+ETEXI
+SQMP
+agent_viewfile
+--------
+
+Echo the file identified by @var{filepath} from the guest filesystem.
+
+Arguments:
+
+- "filepath": Full guest path of the desired file
+
+Example:
+
+-> { "execute": "agent_viewfile",
+                "arguments": { "filepath": "/sys/kernel/kexec_loaded" } }
+<- { "return": { "contents": "0" } }
+
+EQMP
+
+    {
         .name       = "qmp_capabilities",
         .args_type  = "",
         .params     = "",
diff --git a/virtagent.c b/virtagent.c
index 2234bef..8059d00 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -94,3 +94,105 @@ out_callxml:
 out:
     return ret;
 }
+
+void do_agent_viewfile_print(Monitor *mon, const QObject *data)
+{
+    QDict *qdict;
+    const char *contents = NULL;
+    int i;
+
+    qdict = qobject_to_qdict(data);
+    if (!qdict_haskey(qdict, "contents")) {
+        return;
+    }
+
+    contents = qdict_get_str(qdict, "contents");
+    if (contents != NULL) {
+         /* monitor_printf truncates so do it in chunks. also, file_contents
+          * may not be null-termed at proper location so explicitly calc
+          * last chunk sizes */
+        for (i = 0; i < strlen(contents); i += 1024) {
+            monitor_printf(mon, "%.1024s", contents + i);
+        }
+    }
+    monitor_printf(mon, "\n");
+}
+
+static void do_agent_viewfile_cb(void *opaque)
+{
+    RPCRequest *rpc_data = opaque;
+    xmlrpc_value *resp = NULL;
+    char *file_contents = NULL;
+    int file_size, ret;
+    xmlrpc_env env;
+    QDict *qdict = qdict_new();
+
+    if (rpc_data->resp_xml == NULL) {
+        monitor_printf(rpc_data->mon, "error handling RPC request");
+        goto out_no_resp;
+    }
+
+    xmlrpc_env_init(&env);
+    resp = xmlrpc_parse_response(&env, rpc_data->resp_xml,
+                                 rpc_data->resp_xml_len);
+    if (rpc_has_error(&env)) {
+        ret = -1;
+        goto out_no_resp;
+    }
+
+    xmlrpc_parse_value(&env, resp, "6", &file_contents, &file_size);
+    if (rpc_has_error(&env)) {
+        ret = -1;
+        goto out;
+    }
+
+    if (file_contents != NULL) {
+        qdict_put(qdict, "contents",
+                  qstring_from_substr(file_contents, 0, file_size-1));
+    }
+
+out:
+    qemu_free(rpc_data->resp_xml);
+    xmlrpc_DECREF(resp);
+out_no_resp:
+    rpc_data->mon_cb(rpc_data->mon_data, QOBJECT(qdict));
+    qemu_free(rpc_data);
+}
+
+/*
+ * do_agent_viewfile(): View a text file in the guest
+ */
+int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
+                      MonitorCompletion cb, void *opaque)
+{
+    xmlrpc_env env;
+    xmlrpc_value *params;
+    RPCRequest *rpc_data;
+    const char *filepath;
+    int ret;
+
+    filepath = qdict_get_str(mon_params, "filepath");
+    xmlrpc_env_init(&env);
+    params = xmlrpc_build_value(&env, "(s)", filepath);
+    if (rpc_has_error(&env)) {
+        return -1;
+    }
+
+    rpc_data = qemu_mallocz(sizeof(RPCRequest));
+    rpc_data->cb = do_agent_viewfile_cb;
+    rpc_data->mon = mon;
+    rpc_data->mon_cb = cb;
+    rpc_data->mon_data = opaque;
+
+    ret = rpc_execute(&env, "getfile", params, rpc_data);
+    if (ret == -EREMOTE) {
+        monitor_printf(mon, "RPC Failed (%i): %s\n", env.fault_code,
+                       env.fault_string);
+        return -1;
+    } else if (ret == -1) {
+        monitor_printf(mon, "RPC communication error\n");
+        return -1;
+    }
+
+    return 0;
+}
diff --git a/virtagent.h b/virtagent.h
index abdb32a..d027eac 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -22,4 +22,8 @@
 #define HOST_AGENT_PATH_CLIENT "/tmp/virtagent-host-client.sock"
 #define VA_MAX_CHUNK_SIZE 4096 /* max bytes at a time for get/send file */
 
+void do_agent_viewfile_print(Monitor *mon, const QObject *qobject);
+int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
+                      MonitorCompletion cb, void *opaque);
+
 #endif /* VIRTAGENT_H */
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (5 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 06/10] virtagent: add agent_viewfile command Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-25 21:42   ` Anthony Liguori
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 08/10] virtagent: add agent_viewdmesg command Michael Roth
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Add an RPC to view guest dmesg output.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 virtagent-daemon.c |   45 +++++++++++++++++++++++++++++++++++++++++++++
 virtagent-daemon.h |    1 +
 2 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/virtagent-daemon.c b/virtagent-daemon.c
index fad0f64..6aaa987 100644
--- a/virtagent-daemon.c
+++ b/virtagent-daemon.c
@@ -72,6 +72,48 @@ EXIT_CLOSE_BAD:
     return result;
 }
 
+/* getdmesg(): return dmesg output
+ * rpc return values:
+ *   - dmesg output as a string
+ */
+static xmlrpc_value *getdmesg(xmlrpc_env *env,
+                              xmlrpc_value *param,
+                              void *user_data)
+{
+    char *dmesg_buf = NULL, cmd[256];
+    char c;
+    int pos = 0;
+    xmlrpc_value *result = NULL;
+    FILE *pipe;
+
+    dmesg_buf = qemu_mallocz(VA_DMESG_LEN + 2048);
+    sprintf(cmd, "dmesg -s %d", VA_DMESG_LEN);
+
+    //pipe = popen(cmd, "r");
+    pipe = popen("dmesg -s 16000", "r");
+    if (pipe == NULL) {
+        LOG("popen failed: %s", strerror(errno));
+        xmlrpc_faultf(env, "popen failed: %s", strerror(errno));
+        goto EXIT_NOCLOSE;
+    }
+
+    while ((c = fgetc(pipe)) != EOF && pos < VA_DMESG_LEN) {
+        dmesg_buf[pos] = c;
+        pos++;
+    }
+    dmesg_buf[pos++] = '\0';
+    TRACE("dmesg:\n%s", dmesg_buf);
+
+    result = xmlrpc_build_value(env, "s", dmesg_buf);
+    pclose(pipe);
+EXIT_NOCLOSE:
+    if (dmesg_buf) {
+        qemu_free(dmesg_buf);
+    }
+
+    return result;
+}
+
 static int va_accept(int listen_fd) {
     struct sockaddr_in saddr;
     struct sockaddr *addr;
@@ -101,6 +143,8 @@ typedef struct RPCFunction {
 static RPCFunction guest_functions[] = {
     { .func = getfile,
       .func_name = "getfile" },
+    { .func = getdmesg,
+      .func_name = "getdmesg" },
     { NULL, NULL }
 };
 static RPCFunction host_functions[] = {
@@ -165,6 +209,7 @@ int va_server_loop(int listen_fd, bool is_host)
         XMLRPC_MEMBLOCK_FREE(char, rpc_response);
 out:
         closesocket(fd);
+        xmlrpc_env_clean(&env);
     }
 
     return 0;
diff --git a/virtagent-daemon.h b/virtagent-daemon.h
index bb197d0..adcbc9a 100644
--- a/virtagent-daemon.h
+++ b/virtagent-daemon.h
@@ -16,5 +16,6 @@
 #define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
 #define VA_GETFILE_MAX 1 << 30
 #define VA_FILEBUF_LEN 16384
+#define VA_DMESG_LEN 16384
 
 int va_server_loop(int listen_fd, bool is_host);
-- 
1.7.0.4


* [Qemu-devel] [RFC][PATCH 08/10] virtagent: add agent_viewdmesg command
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (6 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 09/10] virtagent: Makefile/configure changes to build virtagent bits Michael Roth
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Add commands to view guest dmesg output. Currently the output is limited to a 16K buffer.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 hmp-commands.hx |   16 +++++++++
 qmp-commands.hx |   35 +++++++++++++++++++
 virtagent.c     |  100 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 virtagent.h     |    3 ++
 4 files changed, 154 insertions(+), 0 deletions(-)

diff --git a/hmp-commands.hx b/hmp-commands.hx
index e9a7f4a..0e7a6c9 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1227,6 +1227,22 @@ STEXI
 Echo the file identified by @var{filepath} on the guest filesystem
 ETEXI
 
+    {
+        .name       = "agent_viewdmesg",
+        .args_type  = "",
+        .params     = "",
+        .help       = "View guest dmesg output",
+        .user_print = do_agent_viewdmesg_print,
+        .mhandler.cmd_async = do_agent_viewdmesg,
+        .flags      = MONITOR_CMD_ASYNC,
+    },
+
+STEXI
+@item agent_viewdmesg
+@findex agent_viewdmesg
+View guest dmesg output
+ETEXI
+
 STEXI
 @end table
 ETEXI
diff --git a/qmp-commands.hx b/qmp-commands.hx
index efa2137..dc319b7 100644
--- a/qmp-commands.hx
+++ b/qmp-commands.hx
@@ -771,6 +771,41 @@ Example:
 EQMP
 
     {
+        .name       = "agent_viewdmesg",
+        .args_type  = "",
+        .params     = "",
+        .help       = "View guest dmesg output",
+        .user_print = do_agent_viewdmesg_print,
+        .mhandler.cmd_async = do_agent_viewdmesg,
+        .flags      = MONITOR_CMD_ASYNC,
+    },
+
+STEXI
+@item agent_viewdmesg
+@findex agent_viewdmesg
+View guest dmesg output
+ETEXI
+SQMP
+agent_viewdmesg
+--------
+
+View guest dmesg output
+
+Arguments:
+
+(none)
+
+Example:
+
+-> { "execute": "agent_viewdmesg" }
+<- { "return": {
+       "contents": "[353487.942215] usb 1-4: USB disconnect, address 9\n..."
+     }
+   }
+
+EQMP
+
+    {
         .name       = "qmp_capabilities",
         .args_type  = "",
         .params     = "",
diff --git a/virtagent.c b/virtagent.c
index 8059d00..cae9a85 100644
--- a/virtagent.c
+++ b/virtagent.c
@@ -196,3 +196,103 @@ int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
 
     return 0;
 }
+
+void do_agent_viewdmesg_print(Monitor *mon, const QObject *data)
+{
+    QDict *qdict;
+    const char *contents = NULL;
+    int i;
+
+    qdict = qobject_to_qdict(data);
+    if (!qdict_haskey(qdict, "contents")) {
+        return;
+    }
+
+    contents = qdict_get_str(qdict, "contents");
+    if (contents != NULL) {
+         /* monitor_printf truncates so do it in chunks. also, file_contents
+          * may not be null-termed at proper location so explicitly calc
+          * last chunk sizes */
+        for (i = 0; i < strlen(contents); i += 1024) {
+            monitor_printf(mon, "%.1024s", contents + i);
+        }
+    }
+    monitor_printf(mon, "\n");
+}
+
+static void do_agent_viewdmesg_cb(void *opaque)
+{
+    RPCRequest *rpc_data = opaque;
+    xmlrpc_value *resp = NULL;
+    char *dmesg = NULL;
+    int ret;
+    xmlrpc_env env;
+    QDict *qdict = qdict_new();
+
+    if (rpc_data->resp_xml == NULL) {
+        monitor_printf(rpc_data->mon, "error handling RPC request");
+        goto out_no_resp;
+    }
+
+    xmlrpc_env_init(&env);
+    resp = xmlrpc_parse_response(&env, rpc_data->resp_xml,
+                                 rpc_data->resp_xml_len);
+    if (rpc_has_error(&env)) {
+        ret = -1;
+        goto out_no_resp;
+    }
+
+    xmlrpc_parse_value(&env, resp, "s", &dmesg);
+    if (rpc_has_error(&env)) {
+        ret = -1;
+        goto out;
+    }
+
+    if (dmesg != NULL) {
+        qdict_put(qdict, "contents", qstring_from_str(dmesg));
+    }
+
+out:
+    qemu_free(rpc_data->resp_xml);
+    xmlrpc_DECREF(resp);
+out_no_resp:
+    rpc_data->mon_cb(rpc_data->mon_data, QOBJECT(qdict));
+    qemu_free(rpc_data);
+}
+
+/*
+ * do_agent_viewdmesg(): View guest dmesg output
+ */
+int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
+                      MonitorCompletion cb, void *opaque)
+{
+    xmlrpc_env env;
+    xmlrpc_value *params;
+    RPCRequest *rpc_data;
+    int ret;
+
+    xmlrpc_env_init(&env);
+
+    params = xmlrpc_build_value(&env, "(n)");
+    if (rpc_has_error(&env)) {
+        return -1;
+    }
+
+    rpc_data = qemu_mallocz(sizeof(RPCRequest));
+    rpc_data->cb = do_agent_viewdmesg_cb;
+    rpc_data->mon = mon;
+    rpc_data->mon_cb = cb;
+    rpc_data->mon_data = opaque;
+
+    ret = rpc_execute(&env, "getdmesg", params, rpc_data);
+    if (ret == -EREMOTE) {
+        monitor_printf(mon, "RPC Failed (%i): %s\n", env.fault_code,
+                       env.fault_string);
+        return -1;
+    } else if (ret == -1) {
+        monitor_printf(mon, "RPC communication error\n");
+        return -1;
+    }
+
+    return 0;
+}
diff --git a/virtagent.h b/virtagent.h
index d027eac..82a6924 100644
--- a/virtagent.h
+++ b/virtagent.h
@@ -25,5 +25,8 @@
 void do_agent_viewfile_print(Monitor *mon, const QObject *qobject);
 int do_agent_viewfile(Monitor *mon, const QDict *mon_params,
                       MonitorCompletion cb, void *opaque);
+void do_agent_viewdmesg_print(Monitor *mon, const QObject *qobject);
+int do_agent_viewdmesg(Monitor *mon, const QDict *mon_params,
+                      MonitorCompletion cb, void *opaque);
 
 #endif /* VIRTAGENT_H */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [RFC][PATCH 09/10] virtagent: Makefile/configure changes to build virtagent bits
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (7 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 08/10] virtagent: add agent_viewdmesg command Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 10/10] virtproxy: add compat defs for linking against vl.c Michael Roth
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof


Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 Makefile        |    2 +-
 Makefile.target |    2 +-
 configure       |   25 +++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index 2dd64a3..b513d33 100644
--- a/Makefile
+++ b/Makefile
@@ -135,7 +135,7 @@ qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(trace-obj-y) $(block-ob
 
 qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y)
 
-qemu-vp$(EXESUF): qemu-vp.o virtproxy.o qemu-tool.o qemu-error.o qemu-sockets.c $(block-obj-y) $(qobject-obj-y)
+qemu-vp$(EXESUF): qemu-vp.o virtproxy.o virtagent-daemon.o virtagent-common.o qemu-tool.o qemu-error.o qemu-sockets.c $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y)
 
 qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
 	$(call quiet-command,sh $(SRC_PATH)/hxtool -h < $< > $@,"  GEN   $@")
diff --git a/Makefile.target b/Makefile.target
index c48cbcc..f8f4c13 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -164,7 +164,7 @@ endif #CONFIG_BSD_USER
 # System emulator target
 ifdef CONFIG_SOFTMMU
 
-obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o
+obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o virtagent.o virtagent-common.o
 # virtio has to be here due to weird dependency between PCI and virtio-net.
 # need to fix this properly
 obj-y += virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
diff --git a/configure b/configure
index 27f92e0..cfddad7 100755
--- a/configure
+++ b/configure
@@ -1261,6 +1261,31 @@ EOF
 fi
 
 ##########################################
+# xmlrpc-c probe
+
+# Look for the xmlrpc-c config program
+if test -n "$cross_prefix" && has ${cross_prefix}xmlrpc-c-config; then
+  xmlrpccconfig=${cross_prefix}xmlrpc-c-config
+elif has xmlrpc-c-config; then
+  xmlrpccconfig=xmlrpc-c-config
+else
+  feature_not_found "xmlrpc-c"
+fi
+
+cat > $TMPC << EOF
+#include <xmlrpc.h>
+int main(void) { xmlrpc_env env; xmlrpc_env_init(&env); return 0; }
+EOF
+xmlrpc_cflags=`$xmlrpccconfig --cflags 2> /dev/null`
+xmlrpc_libs=`$xmlrpccconfig client server-util --libs 2> /dev/null`
+if compile_prog "$xmlrpc_cflags" "$xmlrpc_libs"; then
+  libs_softmmu="$xmlrpc_libs $libs_softmmu"
+  libs_tools="$xmlrpc_libs $libs_tools"
+else
+  feature_not_found "xmlrpc-c"
+fi
+
+##########################################
 # VNC TLS detection
 if test "$vnc_tls" != "no" ; then
   cat > $TMPC <<EOF
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [RFC][PATCH 10/10] virtproxy: add compat defs for linking against vl.c
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (8 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 09/10] virtagent: Makefile/configure changes to build virtagent bits Michael Roth
@ 2010-10-22 18:46 ` Michael Roth
  2010-10-23 11:31 ` [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Andrew Beekhof
  2010-10-25 10:30 ` Andrew Beekhof
  11 siblings, 0 replies; 24+ messages in thread
From: Michael Roth @ 2010-10-22 18:46 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, ryanh, agl, mdroth, abeekhof

Virtagent depends on basic qemu functions that are re-implemented by
qemu-vp so it can be used outside of qemu (in particular, so the client
code can be used for guest-to-host RPC calls). To build virtagent into
qemu we need to define these in terms of the built-in qemu functions.

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
 Makefile.target     |    2 +-
 virtproxy-builtin.c |   30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 1 deletions(-)
 create mode 100644 virtproxy-builtin.c

diff --git a/Makefile.target b/Makefile.target
index f8f4c13..d72b2be 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -164,7 +164,7 @@ endif #CONFIG_BSD_USER
 # System emulator target
 ifdef CONFIG_SOFTMMU
 
-obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o virtagent.o virtagent-common.o
+obj-y = arch_init.o cpus.o monitor.o machine.o gdbstub.o balloon.o virtagent.o virtagent-common.o virtproxy-builtin.o
 # virtio has to be here due to weird dependency between PCI and virtio-net.
 # need to fix this properly
 obj-y += virtio-blk.o virtio-balloon.o virtio-net.o virtio-serial-bus.o
diff --git a/virtproxy-builtin.c b/virtproxy-builtin.c
new file mode 100644
index 0000000..c3416b3
--- /dev/null
+++ b/virtproxy-builtin.c
@@ -0,0 +1,30 @@
+/*
+ * virt-proxy - host/guest communication layer builtin definitions
+ *
+ * Copyright IBM Corp. 2010
+ *
+ * Authors:
+ *  Adam Litke        <aglitke@linux.vnet.ibm.com>
+ *  Michael Roth      <mdroth@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+/* the following are functions we define in terms of qemu when linked
+ * against qemu/vl.c. these will be added on an as-needed basis
+ */
+
+#include "qemu-char.h"
+#include "virtproxy.h"
+
+int vp_set_fd_handler(int fd,
+                        IOHandler *fd_read,
+                        IOHandler *fd_write,
+                        void *opaque)
+{
+    return qemu_set_fd_handler(fd, fd_read, fd_write, opaque);
+}
+
+
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (9 preceding siblings ...)
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 10/10] virtproxy: add compat defs for linking against vl.c Michael Roth
@ 2010-10-23 11:31 ` Andrew Beekhof
  2010-10-23 14:41   ` Michael Roth
  2010-10-25 10:30 ` Andrew Beekhof
  11 siblings, 1 reply; 24+ messages in thread
From: Andrew Beekhof @ 2010-10-23 11:31 UTC (permalink / raw)
  To: Michael Roth; +Cc: aliguori, ryanh, agl, qemu-devel

On 10/22/2010 08:45 PM, Michael Roth wrote:
> This set of patches is meant to be applied on top of the Virtproxy v1 patchset.
>
> OVERVIEW:
>
> There are a wide range of use cases motivating the need for a guest agent of some sort to extend the functionality/usability/control offered by QEMU. Some examples include graceful guest shutdown/reboot and notifications thereof, copy/paste syncing between host/guest, guest statistics gathering, file access, etc.
>
> Ideally these would all be served by a single, easilly extensible agent that can be deployed in a wide range of guests. Virtagent is an XMLRPC server integrated into the Virtproxy guest daemon and aimed at providing this type of functionality.
>
> This code is very rough, and I'll to document most of the bugs/shortcomings we're aware of in this version of the patchset. The main goal of this RFC to get feedback on the types of core functionality we would need in an agent of this sort, as well as feedback on the general approach/architecture implemented here. Any feedback is greatly appreciated however.
>
> To start off this discussion, there have been some recent posts about how much an agent of this sort overlaps with the goals of the Matahari project (https://fedorahosted.org/matahari/). While both of these approaches are at least *feasible*, our use cases require


> the ability to deploy to guests which may not support virtio-serial, which currently rules Matahari out.

Possibly there has been some miscommunication, Matahari does not require 
virtio-serial.

It will be possible to configure Matahari to use virtio-serial if it 
exists, but like Virtproxy it can also be used over a regular IP network 
(yes, it's transparent to clients too).

-- Andrew

> This support could be added however: the virtproxy layer used by this agent actually lends itself to extending such support to other agents/services, or a more direct approach could be taken in adding support for isa-serial.
>
> The question that remains however is one of scope. This agent is intended purely as a means to extend qemu's abilities to perform hypervisor-specific work, whereas Matahari aims to extend general system management capabilities to guests (please correct me if I'm oversimplifying). Virtagent cannot meet Matahari's goals, whereas Matahari technically can meet Virtagent's. My contention however is that the qemu-specific scope/API and shared code base with a more closely integrated agent will provide a more expedient route to functional improvements to qemu, while still allowing for the additional functionality/management capabilities provided by something like Matahari.
>
> DESIGN:
>
> There are actually 2 RPC servers:
>
> 1) a server in the guest integrated into the Virtproxy guest daemon which handles RPC requests from QEMU
> 2) a server in the host (integrated into the Virtproxy host daemon, which we plan to integrate directly into qemu) to handle RPC requests sent by the guest agent (mainly for handling asynchronous events reported by the agent).
>
> At the Virtagent level, communication is done via standard RPCs (HTTP/TCP between host and guest). Virtproxy transparently handles transport over a network or isa/virtio serial channel, allowing the agent to be deployed on older guests which may not support virtio-serial.
>
> Currently there are only 2 RPCs implemented for the guest server (getfile and getdmesg), and 0 for the host. Additional RPCs can be added fairly easily, but are dependent on feedback from here and elsewhere. ping/status, shutdown, and reboot are likely candidates (although the latter 2 will likely require asynchronous notifications to the host RPC server to implement reliably).
>
> EXAMPLE USAGE:
>
> The commandline options are a little convoluted right now; this will addressed in later revisions.
>
>   - Configure guest agent to talk to host via virtio-serial
>      # start guest with virtio-serial. for example (RHEL6s13):
>      qemu \
>      -device virtio-serial \
>      -chardev socket,path=/tmp/test0-virtioconsole.sock,server,nowait,id=test0 \
>      -device virtconsole,chardev=test0,name=test0 \
>      -chardev socket,path=/tmp/test1-virtio-serial.sock,server,nowait,id=test1 \
>      -device virtserialport,chardev=test1,name=test1 \
>      -chardev socket,path=/tmp/test2-virtio-serial.sock,server,nowait,id=test2 \
>      -device virtserialport,chardev=test2,name=test2 \
>      -monitor stdio
>      ...
>      # in the host:
>      ./qemu-vp -c unix-connect:/tmp/test2-virtio-serial.sock:- \
>                -o virtagent:/tmp/virtagent-guest-client.sock:-
>      # in the guest:
>      ./qemu-vp -c virtserial-open:/dev/virtio-ports/test2:- -g
>      ...
>      # monitor commands
>      (qemu) agent_viewdmesg
>      [139311.710326] wlan0: deauthenticating from 00:30:bd:f7:12:d5 by local choice (reason=3)
>      [139323.469857] wlan0: deauthenticating from 00:21:29:cd:41:ee by local choice (reason=3)
>      ...
>      [257683.375646] wlan0: authenticated
>      [257683.375684] wlan0: associate with AP 00:30:bd:f7:12:d5 (try 1)
>      [257683.377932] wlan0: RX AssocResp from 00:30:bd:f7:12:d5 (capab=0x411 status=0 aid=4)
>      [257683.377940] wlan0: associated
>
>      (qemu) agent_viewfile /proc/meminfo
>      MemTotal:        3985488 kB
>      MemFree:          400524 kB
>      Buffers:          220556 kB
>      Cached:          2073160 kB
>      SwapCached:            0 kB
>      ...
>      Hugepagesize:       2048 kB
>      DirectMap4k:        8896 kB
>      DirectMap2M:     4110336 kB
>
> KNOWN ISSUES/PLANS:
>   - the client socket that qemu connects to send RPCs is a hardcoded filepath. This is unacceptable as the socket is channel/process specific and things will break when multiple guests are started.
>   - capability negotiation will be needed to handle version/architecture differences.
>   - proper channel negotiation is critical to avoiding hung monitors and such when a guest reboots or the guest agent is stopped for whatever reason. additionally, a timeout may need to be imposed on the amount of time the http read handler can block the monitor.
>   - additional host-to-guest RPCs as well as asynchronous notifications via guest-to-host RPCs for events such as shutdown/reboot/agent up/agent down
>   - switch all RPC communication over to non-blocking/asynchronous read/write callbacks to avoid deadlocks.
>
>   Makefile            |    2 +-
>   Makefile.target     |    2 +-
>   configure           |   25 ++++
>   hmp-commands.hx     |   32 +++++
>   monitor.c           |    1 +
>   qemu-vp.c           |  119 +++++++++++++++--
>   qerror.c            |    4 +
>   qerror.h            |    3 +
>   qmp-commands.hx     |   68 +++++++++
>   virtagent-common.c  |  381 +++++++++++++++++++++++++++++++++++++++++++++++++++
>   virtagent-common.h  |   63 +++++++++
>   virtagent-daemon.c  |  216 +++++++++++++++++++++++++++++
>   virtagent-daemon.h  |   21 +++
>   virtagent.c         |  298 ++++++++++++++++++++++++++++++++++++++++
>   virtagent.h         |   32 +++++
>   virtproxy-builtin.c |   30 ++++
>   16 files changed, 1286 insertions(+), 11 deletions(-)
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-23 11:31 ` [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Andrew Beekhof
@ 2010-10-23 14:41   ` Michael Roth
  2010-10-25  9:46     ` Andrew Beekhof
  0 siblings, 1 reply; 24+ messages in thread
From: Michael Roth @ 2010-10-23 14:41 UTC (permalink / raw)
  To: Andrew Beekhof; +Cc: aliguori, ryanh, agl, qemu-devel

On 10/23/2010 06:31 AM, Andrew Beekhof wrote:
> On 10/22/2010 08:45 PM, Michael Roth wrote:
>> This set of patches is meant to be applied on top of the Virtproxy v1
>> patchset.
>>
>> OVERVIEW:
>>
>> There are a wide range of use cases motivating the need for a guest
>> agent of some sort to extend the functionality/usability/control
>> offered by QEMU. Some examples include graceful guest shutdown/reboot
>> and notifications thereof, copy/paste syncing between host/guest,
>> guest statistics gathering, file access, etc.
>>
>> Ideally these would all be served by a single, easilly extensible
>> agent that can be deployed in a wide range of guests. Virtagent is an
>> XMLRPC server integrated into the Virtproxy guest daemon and aimed at
>> providing this type of functionality.
>>
>> This code is very rough, and I'll to document most of the
>> bugs/shortcomings we're aware of in this version of the patchset. The
>> main goal of this RFC to get feedback on the types of core
>> functionality we would need in an agent of this sort, as well as
>> feedback on the general approach/architecture implemented here. Any
>> feedback is greatly appreciated however.
>>
>> To start off this discussion, there have been some recent posts about
>> how much an agent of this sort overlaps with the goals of the Matahari
>> project (https://fedorahosted.org/matahari/). While both of these
>> approaches are at least *feasible*, our use cases require
>
>
>> the ability to deploy to guests which may not support virtio-serial,
>> which currently rules Matahari out.
>
> Possibly there has been some miscommunication, Matahari does not require
> virtio-serial.
>
> It will be possible to configure Matahari to use virtio-serial if it
> exists, but like Virtproxy it can also be used over a regular IP network
> (yes, it's transparent to clients too).
>

Sorry, I'm aware of this but wasn't very clear here. I was speaking in 
the context of our cloud, or other environments where there isn't any 
network access between the host and the guest. This essentially leaves 
virtio-serial and isa-serial, and a large number of our guests don't 
support the former.

> -- Andrew
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-23 14:41   ` Michael Roth
@ 2010-10-25  9:46     ` Andrew Beekhof
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Beekhof @ 2010-10-25  9:46 UTC (permalink / raw)
  To: Michael Roth; +Cc: aliguori, ryanh, agl, qemu-devel

On 10/23/2010 04:41 PM, Michael Roth wrote:
> On 10/23/2010 06:31 AM, Andrew Beekhof wrote:
>> On 10/22/2010 08:45 PM, Michael Roth wrote:

>>> the ability to deploy to guests which may not support virtio-serial,
>>> which currently rules Matahari out.
>>
>> Possibly there has been some miscommunication, Matahari does not require
>> virtio-serial.
>>
>> It will be possible to configure Matahari to use virtio-serial if it
>> exists, but like Virtproxy it can also be used over a regular IP network
>> (yes, it's transparent to clients too).
>>
>
> Sorry, I'm aware of this but wasn't very clear here. I was speaking in
> the context of our cloud, or other environments where there isn't any
> network access between the host and the guest. This essentially leaves
> virtio-serial and isa-serial, and a large number of our guests don't
> support the former.

Well again, Matahari has no concept of what transport is being used. 
That's completely hidden from us by Qpid behind a layer of abstraction.

So what we're really talking about is the difficulty in adding a new 
transport to Qpid... and if the interface is anything like virtio-serial 
or a standard serial port, we're talking about a few hundred lines and 
"a couple of days".

Not exactly a major barrier considering the pay-off would be a single 
set of agents.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
                   ` (10 preceding siblings ...)
  2010-10-23 11:31 ` [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Andrew Beekhof
@ 2010-10-25 10:30 ` Andrew Beekhof
  2010-10-25 17:06   ` Michael Roth
  2010-10-25 21:28   ` Anthony Liguori
  11 siblings, 2 replies; 24+ messages in thread
From: Andrew Beekhof @ 2010-10-25 10:30 UTC (permalink / raw)
  To: Michael Roth; +Cc: aliguori, ryanh, agl, qemu-devel

On 10/22/2010 08:45 PM, Michael Roth wrote:
> This set of patches is meant to be applied on top of the Virtproxy v1 patchset.
>
> OVERVIEW:
>
> There are a wide range of use cases motivating the need for a guest agent of some sort to extend the functionality/usability/control offered by QEMU.
> Some examples include graceful guest shutdown/reboot and notifications thereof, copy/paste syncing between host/guest, guest statistics gathering, file access, etc.
>
> Ideally these would all be served by a single, easilly extensible agent that can be deployed in a wide range of guests.
> Virtagent is an XMLRPC server integrated into the Virtproxy guest daemon and aimed at providing this type of functionality.
>
> This code is very rough, and I'll to document most of the bugs/shortcomings we're aware of in this version of the patchset.
> The main goal of this RFC to get feedback on the types of core functionality we would need in an agent of this sort, as well as feedback on the general approach/architecture implemented here.
> Any feedback is greatly appreciated however.
>
> To start off this discussion, there have been some recent posts about how much an agent of this sort overlaps with the goals of the Matahari project (https://fedorahosted.org/matahari/).
> While both of these approaches are at least *feasible*, our use cases require the ability to deploy to guests which may not support virtio-serial, which currently rules Matahari out.
> This support could be added however: the virtproxy layer used by this agent actually lends itself to extending such support to other agents/services, or a more direct approach could be taken in adding support for isa-serial.
>
> The question that remains however is one of scope.
> This agent is intended purely as a means to extend qemu's abilities to perform hypervisor-specific work,

"shutdown/reboot", "statistics", "file gathering"... none of those sound 
very "hypervisor-specific" to me ;-)

> whereas Matahari aims to extend general system management capabilities to guests (please correct me if I'm oversimplifying).

As I replied elsewhere, Matahari is both an architecture and a 
collection of independent but commonly useful agents.

So while there will be a bunch of other agents doing a bunch of things 
you don't care about, you don't have to care that they exist either :-)

A hypothetical QEMU agent would be an independent entity, with both the 
daemon and source code completely isolated from any other agents.

It doesn't even need to live in the Matahari project.

> Virtagent cannot meet Matahari's goals, whereas Matahari technically can meet Virtagent's.
> My contention however is that the qemu-specific scope/API and shared code base with a more closely integrated agent will provide a more expedient route to functional improvements to qemu,

See above. Would leveraging the Matahari architecture but keeping the 
agent in the QEMU project address this concern?

> while still allowing for the additional functionality/management capabilities provided by something like Matahari.
>
> DESIGN:
>
> There are actually 2 RPC servers:
>
> 1) a server in the guest integrated into the Virtproxy guest daemon which handles RPC requests from QEMU

Question: Is the scope here purely between a host and its guest?  Or is 
it intended that one could access the guest daemon from other hosts/guests?

> 2) a server in the host (integrated into the Virtproxy host daemon, which we plan to integrate directly into qemu) to handle RPC requests sent by the guest agent (mainly for handling asynchronous events reported by the agent).
>
> At the Virtagent level, communication is done via standard RPCs (HTTP/TCP between host and guest). Virtproxy transparently handles transport over a network or isa/virtio serial channel, allowing the agent to be deployed on older guests which may not support virtio-serial.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-25 10:30 ` Andrew Beekhof
@ 2010-10-25 17:06   ` Michael Roth
  2010-10-26  7:14     ` Andrew Beekhof
  2010-10-25 21:28   ` Anthony Liguori
  1 sibling, 1 reply; 24+ messages in thread
From: Michael Roth @ 2010-10-25 17:06 UTC (permalink / raw)
  To: Andrew Beekhof; +Cc: aliguori, ryanh, agl, qemu-devel

On 10/25/2010 05:30 AM, Andrew Beekhof wrote:
> On 10/22/2010 08:45 PM, Michael Roth wrote:
>> This set of patches is meant to be applied on top of the Virtproxy v1
>> patchset.
>>
>> OVERVIEW:
>>
>> There are a wide range of use cases motivating the need for a guest
>> agent of some sort to extend the functionality/usability/control
>> offered by QEMU.
>> Some examples include graceful guest shutdown/reboot and notifications
>> thereof, copy/paste syncing between host/guest, guest statistics
>> gathering, file access, etc.
>>
>> Ideally these would all be served by a single, easilly extensible
>> agent that can be deployed in a wide range of guests.
>> Virtagent is an XMLRPC server integrated into the Virtproxy guest
>> daemon and aimed at providing this type of functionality.
>>
>> This code is very rough, and I'll to document most of the
>> bugs/shortcomings we're aware of in this version of the patchset.
>> The main goal of this RFC to get feedback on the types of core
>> functionality we would need in an agent of this sort, as well as
>> feedback on the general approach/architecture implemented here.
>> Any feedback is greatly appreciated however.
>>
>> To start off this discussion, there have been some recent posts about
>> how much an agent of this sort overlaps with the goals of the Matahari
>> project (https://fedorahosted.org/matahari/).
>> While both of these approaches are at least *feasible*, our use cases
>> require the ability to deploy to guests which may not support
>> virtio-serial, which currently rules Matahari out.
>> This support could be added however: the virtproxy layer used by this
>> agent actually lends itself to extending such support to other
>> agents/services, or a more direct approach could be taken in adding
>> support for isa-serial.
>>
>> The question that remains however is one of scope.
>> This agent is intended purely as a means to extend qemu's abilities to
>> perform hypervisor-specific work,
>
> "shutdown/reboot", "statistics", "file gathering"... none of those sound
> very "hypervisor-specific" to me ;-)
>

QEMU-specific is probably a better word, in the sense that the API (and 
the code dependencies) terminates at the QEMU level. Though I think one 
can make the argument that shutdown/reboot are hypervisor-related, 
especially given that there are situations where we currently cannot 
induce a guest reboot from qemu. Also, file access is actually a horrible 
use case for RPC. It does, however, provide a pretty powerful primitive 
for exposing data/statistics made available in /proc or /sys, which do 
extend a hypervisor's ability to make better decisions about things like 
memory ballooning.

But those are just some examples of the functionality that could be 
implemented by the agent, and the RPCs that make sense to implement are 
TBD based on input here and elsewhere.

>> whereas Matahari aims to extend general system management capabilities
>> to guests (please correct me if I'm oversimplifying).
>
> As I replied elsewhere, Matahari is both an architecture and a
> collection of independent but commonly useful agents.
>
> So while there will be a bunch of other agents doing a bunch of things
> you don't care about, you don't have to care that they exist either :-)
>
> A hypothetical QEMU agent would be an independent entity, with both the
> daemon and source code completely isolated from any other agents.
>
> It doesn't even need to live in the Matahari project.
>
 >> Virtagent cannot meet Matahari's goals, whereas Matahari technically
 >> can meet Virtagent's.
 >> My contention however is that the qemu-specific scope/API and shared
 >> code base with a more closely integrated agent will provide a more
 >> expedient route to functional improvements to qemu,
 >
 > See above. Would leveraging the Matahari architecture but keeping the
 > agent in the QEMU project address this concern?
 >

To some extent. But what is gained here, though, except the bus? If we 
aren't using the functions provided by the common agents we still need 
to re-implement them, and if we do use them then we are dependent on 
their implementations as well as their deployment overhead. And even if 
we do choose to ignore the existing agents, we inherit the guest and 
host-side dependencies of Matahari nonetheless. Maybe these dependencies 
are fairly minimal, but it makes it that much more difficult to support 
other/older guests.

Beyond deployment, there's also just the basic end-user experience we 
need to consider. In the context of general systems management it makes 
total sense that you'd install Matahari and then whatever agents you 
needed in your guests/hosts. In the context of qemu it doesn't. VMWare 
has a vmware package that interacts with vmware, virtualbox has a 
virtualbox utilities package that interacts with virtualbox. 
Simple/intuitive and what users are used to. But qemu has an agent that 
communicates with matahari that qemu talks to? And do we make it clear 
that they need to go back to the host and install matahari?

The single dependency with the agent proposed here is xmlrpc-c, and that 
can be removed with a bit of work.

I agree there's a lot of overlap, and it looks like Matahari is a 
completely reasonable architecture for the functionality it implements, 
but it still seems like more buy-in/overhead than what is necessary for 
the simple qemu guest agent we're looking to implement. And in the 
context of this guest agent, it doesn't seem like we stand to gain 
much by doing so.

>> There are actually 2 RPC servers:
>>
>> 1) a server in the guest integrated into the Virtproxy guest daemon
>> which handles RPC requests from QEMU
>
> Question: Is the scope here purely between a host and its guest? Or is
> it intended that one could access the guest daemon from other hosts/guests?
>

Purely between a host and its guest. If other hosts/guests needed to 
access it, it would need to be via the qemu interfaces or libvirt wrappers.

>> 2) a server in the host (integrated into the Virtproxy host daemon,
>> which we plan to integrate directly into qemu) to handle RPC requests
>> sent by the guest agent (mainly for handling asynchronous events
>> reported by the agent).
>>
>> At the Virtagent level, communication is done via standard RPCs
>> (HTTP/TCP between host and guest). Virtproxy transparently handles
>> transport over a network or isa/virtio serial channel, allowing the
>> agent to be deployed on older guests which may not support virtio-serial.
>
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-25 10:30 ` Andrew Beekhof
  2010-10-25 17:06   ` Michael Roth
@ 2010-10-25 21:28   ` Anthony Liguori
  2010-10-26  7:27     ` Andrew Beekhof
  1 sibling, 1 reply; 24+ messages in thread
From: Anthony Liguori @ 2010-10-25 21:28 UTC (permalink / raw)
  To: Andrew Beekhof; +Cc: ryanh, agl, Michael Roth, qemu-devel

On 10/25/2010 05:30 AM, Andrew Beekhof wrote:
> On 10/22/2010 08:45 PM, Michael Roth wrote:
>> This set of patches is meant to be applied on top of the Virtproxy v1 
>> patchset.
>>
>> OVERVIEW:
>>
>> There are a wide range of use cases motivating the need for a guest 
>> agent of some sort to extend the functionality/usability/control 
>> offered by QEMU.
>> Some examples include graceful guest shutdown/reboot and 
>> notifications thereof, copy/paste syncing between host/guest, guest 
>> statistics gathering, file access, etc.
>>
>> Ideally these would all be served by a single, easilly extensible 
>> agent that can be deployed in a wide range of guests.
>> Virtagent is an XMLRPC server integrated into the Virtproxy guest 
>> daemon and aimed at providing this type of functionality.
>>
>> This code is very rough, and I'll to document most of the 
>> bugs/shortcomings we're aware of in this version of the patchset.
>> The main goal of this RFC to get feedback on the types of core 
>> functionality we would need in an agent of this sort, as well as 
>> feedback on the general approach/architecture implemented here.
>> Any feedback is greatly appreciated however.
>>
>> To start off this discussion, there have been some recent posts about 
>> how much an agent of this sort overlaps with the goals of the 
>> Matahari project (https://fedorahosted.org/matahari/).
>> While both of these approaches are at least *feasible*, our use cases 
>> require the ability to deploy to guests which may not support 
>> virtio-serial, which currently rules Matahari out.
>> This support could be added however: the virtproxy layer used by this 
>> agent actually lends itself to extending such support to other 
>> agents/services, or a more direct approach could be taken in adding 
>> support for isa-serial.
>>
>> The question that remains however is one of scope.
>> This agent is intended purely as a means to extend qemu's abilities 
>> to perform hypervisor-specific work,
>
> "shutdown/reboot", "statistics", "file gathering"... none of those 
> sound very "hypervisor-specific" to me ;-)

A hypervisor-initiated shutdown is very different from a network-initiated 
shutdown.

>> whereas Matahari aims to extend general system management 
>> capabilities to guests (please correct me if I'm oversimplifying).
>
> As I replied elsewhere, Matahari is both an architecture and a 
> collection of independent but commonly useful agents.
>
> So while there will be a bunch of other agents doing a bunch of things 
> you don't care about, you don't have to care that they exist either :-)
>
> A hypothetical QEMU agent would be an independent entity, with both the 
> daemon and source code completely isolated from any other agents.
>
> It doesn't even need to live in the Matahari project.

I've taken a deeper look at Matahari.

The first thing I've noticed is that AMQP seems to be unfriendly to C.  
QPID and its friends are all implemented in C++, as is Matahari itself.  
The lack of a C client library is a deal breaker for QEMU because it 
makes it impossible to integrate into QEMU.

The second thing I've observed is that AMQP is less oriented toward 
point-to-point communication than, say, XML-RPC; it focuses instead on a 
pub/sub model.  That is not a bad thing in itself, but I wonder if there 
are any real cases where it makes sense for a guest agent.  It seems like 
a high complexity cost without a lot of return.

>> Virtagent cannot meet Matahari's goals, whereas Matahari technically 
>> can meet Virtagent's.
>> My contention however is that the qemu-specific scope/API and shared 
>> code base with a more closely integrated agent will provide a more 
>> expedient route to functional improvements to qemu,
>
> See above. Would leveraging the Matahari architecture but keeping the 
> agent in the QEMU project address this concern?

The biggest one is going to be the fact that it's not C-friendly.

>> while still allowing for the additional functionality/management 
>> capabilities provided by something like Matahari.
>>
>> DESIGN:
>>
>> There are actually 2 RPC servers:
>>
>> 1) a server in the guest integrated into the Virtproxy guest daemon 
>> which handles RPC requests from QEMU
>
> Question: Is the scope here purely between a host and its guest?  Or 
> is it intended that one could access the guest daemon from other 
> hosts/guests?

Just host and guest is the intended scope.

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs
  2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs Michael Roth
@ 2010-10-25 21:39   ` Anthony Liguori
  2010-10-25 21:54     ` malc
  0 siblings, 1 reply; 24+ messages in thread
From: Anthony Liguori @ 2010-10-25 21:39 UTC (permalink / raw)
  To: Michael Roth; +Cc: abeekhof, ryanh, agl, qemu-devel

On 10/22/2010 01:45 PM, Michael Roth wrote:
> Common code for sending/recieving RPCs via http over virtproxy channel.
> Eventually these will all be switched to asynchronous handlers to avoid
> deadlocks between qemu and the guest. For now we can usually get away with
> just doing asynchronous reads for http/RPC responses if we don't send
> large RPC requests to the guest (since these will likely get buffered
> rather than block or cause spinning on writes).
>
> Signed-off-by: Michael Roth<mdroth@linux.vnet.ibm.com>
> ---
>   virtagent-common.c |  381 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>   virtagent-common.h |   63 +++++++++
>   2 files changed, 444 insertions(+), 0 deletions(-)
>   create mode 100644 virtagent-common.c
>   create mode 100644 virtagent-common.h
>
> diff --git a/virtagent-common.c b/virtagent-common.c
> new file mode 100644
> index 0000000..51016a6
> --- /dev/null
> +++ b/virtagent-common.c
> @@ -0,0 +1,381 @@
> +/*
> + * virt-agent - common host/guest RPC functions
> + *
> + * Copyright IBM Corp. 2010
> + *
> + * Authors:
> + *  Adam Litke<aglitke@linux.vnet.ibm.com>
> + *  Michael Roth<mdroth@linux.vnet.ibm.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#include<stdlib.h>
> +#include<unistd.h>
> +#include<errno.h>
> +#include<string.h>
> +#include<ctype.h>
>    

QEMU code should never include system headers.  Instead, it should use 
qemu-common.h and friends.

> +#include<xmlrpc-c/base.h>
> +#include<xmlrpc-c/client.h>
> +#include<xmlrpc-c/server.h>
> +#include "virtagent-common.h"
> +
> +static int write_all(int fd, const void *buf, int len1)
> +{
> +    int ret, len;
> +
> +    len = len1;
> +    while (len>  0) {
> +        ret = write(fd, buf, len);
> +        if (ret<  0) {
> +            if (errno != EINTR&&  errno != EAGAIN) {
> +                LOG("write() failed");
> +                return -1;
> +            }
> +        } else if (ret == 0) {
> +            break;
> +        } else {
> +            buf += ret;
> +            len -= ret;
> +        }
> +    }
> +    return len1 - len;
> +}
> +
> +static int read_all(int fd, void *buf, int len)
> +{
> +    int ret, remaining;
> +
> +    remaining = len;
> +    while (remaining>  0) {
> +        ret = read(fd, buf, remaining);
> +        if (ret<  0) {
> +            if (errno != EINTR&&  errno != EAGAIN) {
> +                LOG("read failed");
> +                return -1;
> +            }
> +        } else if (ret == 0) {
> +            break;
> +        } else {
> +            buf += ret;
> +            remaining -= ret;
> +        }
> +    }
> +    return len - remaining;
> +}
> +
> +static int read_line(int fd, void *buf, int len)
> +{
> +    int ret, remaining;
> +    remaining = len;
> +    while (remaining>  0) {
> +        ret = read(fd, buf, 1);
> +        if (ret<  0) {
> +            if (errno != EINTR&&  errno != EAGAIN) {
> +                LOG("read failed");
> +                return -1;
> +            }
>    

EINTR should be handled differently from EAGAIN.  Looping on EAGAIN is 
usually wrong because it implies 100% CPU consumption, whereas EINTR is 
an expected (albeit lame) return from read.
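
Something along these lines is closer to what I'd expect (a rough, 
untested sketch; the helper name is made up):

    /* retry the read() only on EINTR; report EAGAIN back to the caller
     * so it can return to the event loop instead of spinning */
    static ssize_t read_once(int fd, void *buf, size_t len)
    {
        ssize_t ret;

        do {
            ret = read(fd, buf, len);
        } while (ret < 0 && errno == EINTR);

        return ret;  /* -1 with errno == EAGAIN means "try again later" */
    }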

> +        } else if (ret == 0) {
> +            break;
> +        } else {
> +            remaining--;
> +            buf++;
> +            if (*((char *)buf-1) == '\n')
> +                break;
> +        }
> +    }
> +    memset(buf, 0, remaining);
>    

Memsetting the whole thing seems a bit much, no?
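
i.e. something like this (rough sketch, using the existing variables) 
would be enough:

    /* NUL-terminate after the last byte read instead of clearing the
     * whole remainder of the buffer */
    if (remaining > 0) {
        *(char *)buf = '\0';  /* buf already points past the last byte read */
    }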

Should use size_t for len (and these functions already live in qemu).

> +    return len - remaining;
> +}
> +
> +typedef struct va_http {
> +    char *preamble;
> +    int content_length;
> +    int content_read;
> +    char *content;
> +} va_http;
>    

CODING_STYLE.

> +
> +typedef struct HttpReadState {
> +    int fd;
> +    char hdr_buf[4096];
> +    int hdr_pos;
> +    va_http http;
> +    enum {
> +        IN_RESP_HEADER = 0,
> +        IN_RESP_BODY,
> +    } state;
> +    RPCRequest *rpc_data;
> +} HttpReadState;
> +
> +static int end_of_header(char *buf, int end_pos) {
> +    return !strncmp(buf+(end_pos-2), "\n\r\n", 3);
> +}
>    

The { belongs on a new line.

> +static void parse_hdr(va_http *http, char *buf, int len) {
> +    int i, line_pos=0;
> +    char line_buf[4096];
> +
> +    for (i=0; i<len; ++i) {
> +        if (buf[i] != '\n') {
> +            /* read line */
> +            line_buf[line_pos++] = buf[i];
> +        } else {
> +            /* process line */
> +            if (strncasecmp(line_buf, "content-length: ", 16) == 0) {
> +                http->content_length = atoi(&line_buf[16]);
> +                return;
> +            }
> +            line_pos = 0;
> +        }
> +    }
> +}
>    

expressions need spaces.  IOW, i=0; i<len; ++i should be i = 0; i < len; 
i++.

Avoid C++ isms like ++i.

> +static void http_read_handler(void *opaque) {
> +    HttpReadState *s = opaque;
> +    int ret;
> +
> +    TRACE("called with opaque: %p", opaque);
> +    if (s->state == IN_RESP_HEADER) {
> +        while((ret = read(s->fd, s->hdr_buf + s->hdr_pos, 1))>  0) {
>    

Read goes on another line.

> +            //TRACE("buf: %c", s->hdr_buf[0]);
> +            s->hdr_pos += ret;
> +            if (end_of_header(s->hdr_buf, s->hdr_pos - 1)) {
> +                s->state = IN_RESP_BODY;
> +                break;
> +            }
> +        }
> +        if (ret == -1) {
> +            if (errno == EAGAIN || errno == EWOULDBLOCK) {
>    

EWOULDBLOCK won't happen.

> +                return;
> +            } else {
> +                LOG("error reading connection: %s", strerror(errno));
> +                goto out_bad;
> +            }
> +        } else if (ret == 0) {
> +            LOG("connection closed unexpectedly");
> +            goto out_bad;
> +        } else {
> +            s->http.content_length = -1;
> +            parse_hdr(&s->http, s->hdr_buf, s->hdr_pos);
> +            if (s->http.content_length == -1) {
> +                LOG("malformed http header");
> +                goto out_bad;
> +            }
> +            s->http.content = qemu_mallocz(s->http.content_length);
> +            goto do_resp_body;
> +        }
> +    } else if (s->state == IN_RESP_BODY) {
> +do_resp_body:
> +        while(s->http.content_read<  s->http.content_length) {
> +            ret = read(s->fd, s->http.content + s->http.content_read, 4096);
>    

You can overrun your buffer here.  Should be MIN(s->http.content_length 
- s->http.content_read, 4096) or, better yet, just 
s->http.content_length - s->http.content_read.
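
i.e. something like (untested):

    /* never read past what's left of the content buffer */
    ret = read(s->fd, s->http.content + s->http.content_read,
               s->http.content_length - s->http.content_read);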

> +            if (ret == -1) {
> +                if (errno == EAGAIN || errno == EWOULDBLOCK) {
> +                    return;
> +                } else {
> +                    LOG("error reading connection: %s", strerror(errno));
> +                    qemu_free(s->http.content);
> +                    goto out;
> +                }
> +            } else if (ret == 0) {
> +                LOG("connection closed unexpectedly, expected %d more bytes",
> +                    s->http.content_length - s->http.content_read);
> +                goto out_bad;
> +            }
> +            s->http.content_read += ret;
> +        }
> +        s->rpc_data->resp_xml = s->http.content;
> +        s->rpc_data->resp_xml_len = s->http.content_length;
> +        goto out;
> +    }
> +out_bad:
> +    s->rpc_data->resp_xml = NULL;
> +out:
> +    vp_set_fd_handler(s->fd, NULL, NULL, NULL);
> +    s->rpc_data->cb(s->rpc_data);
> +    qemu_free(s);
> +}
> +
> +static int write_hdr(int fd, const va_http *http, bool request)
> +{
> +    int hdr_len;
> +    char *hdr;
> +    const char *preamble;
> +
> +    TRACE("called");
>    

Not very descriptive :-)

> +    /* essentially ignored in the context of virtagent, but might as well */
> +    preamble = request ? "POST /RPC2 HTTP/1.1" : "HTTP/1.1 200 OK";
> +
> +    hdr_len = asprintf(&hdr,
> +                      "%s" EOL
> +                      "Content-Type: text/xml" EOL
> +                      "Content-Length: %u" EOL EOL,
> +                      preamble,
> +                      http->content_length);
>    

Should split this up into multiple writes, as asprintf is not portable.
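
Or, rather than multiple writes, a fixed buffer plus snprintf would avoid 
asprintf entirely. A rough, untested sketch reusing the existing EOL and 
write_all() bits:

    char hdr[256];
    int hdr_len;

    hdr_len = snprintf(hdr, sizeof(hdr),
                       "%s" EOL
                       "Content-Type: text/xml" EOL
                       "Content-Length: %d" EOL EOL,
                       preamble, http->content_length);
    if (hdr_len < 0 || hdr_len >= (int)sizeof(hdr)) {
        return -1;  /* formatting error, or header didn't fit */
    }
    if (write_all(fd, hdr, hdr_len) != hdr_len) {
        return -1;  /* short write */
    }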

> +    write_all(fd, hdr, hdr_len);
>    

Should check the return.

> +
> +    return 0;
> +}
> +
> +static int read_hdr(int fd, va_http *http)
> +{
> +    bool first = true;
> +    char line[4096];
> +    int ret;
> +
> +    http->preamble = NULL;
> +    http->content_length = -1;
> +
> +    do {
> +        ret = read_line(fd,&line, sizeof(line));
> +        if (ret<= 0) {
> +            LOG("error reading from connection");
> +            ret = -EPIPE;
> +            return ret;
> +        }
> +        TRACE("read line: %s", line);
> +        if (first) {
> +            http->preamble = line;
> +            first = false;
> +        }
> +        if (strncasecmp(line, "content-length: ", 16) == 0) {
> +            TRACE("hi");
>    

Cute, but should be more informative.

BTW, don't use strncasecmp.  Just check against Content-Length.

casecmp is not portable and, worse yet, interacts weirdly with the 
current locale, which is not what you want here.  HTTP requires 
Content-Length; it doesn't accept content-length.
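
i.e. just (sketch):

    if (strncmp(line, "Content-Length: ", 16) == 0) {
        http->content_length = atoi(&line[16]);
    }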

> +            http->content_length = atoi(&line[16]);
> +        }
> +    } while (strcmp(line, "\r\n")&&  strcmp(line, "\n"));
> +
> +    if (http->preamble == NULL) {
> +        LOG("header missing preamble");
> +        return -1;
> +    } else if (http->content_length == -1) {
> +        LOG("missing content length");
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +static int send_http(int fd, const va_http *http, bool request)
> +{
> +    int ret;
> +
> +    TRACE("called");
> +    ret = write_hdr(fd, http, request);
> +    if (ret != 0) {
> +        LOG("error sending header");
> +        return -1;
> +    }
> +
> +    TRACE("sending body");
> +    ret = write_all(fd, http->content, http->content_length);
> +    TRACE("done sending body");
> +    if (ret != http->content_length) {
> +        LOG("error sending content");
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +static int send_http_request(int fd, const va_http *http)
> +{
> +    return send_http(fd, http, true);
> +}
> +
> +static int send_http_response(int fd, const va_http *http)
> +{
> +    return send_http(fd, http, false);
> +}
> +
> +static int get_http(int fd, va_http *http)
> +{
> +    int ret;
> +
> +    ret = read_hdr(fd, http);
> +    if (ret != 0) {
> +        LOG("error reading header");
> +        return -1;
> +    }
> +
> +    http->content = qemu_malloc(http->content_length);
> +    ret = read_all(fd, http->content, http->content_length);
> +    if (ret != http->content_length) {
> +        LOG("error reading content");
> +        return -1;
> +    }
> +
> +    TRACE("http:\n%s", http->content);
> +
> +    return 0;
> +}
> +
> +/* send http-encoded xmlrpc response */
> +int va_send_rpc_response(int fd, const xmlrpc_mem_block *resp_xml)
> +{
> +    int ret;
> +    va_http http_resp;
> +
> +    http_resp.content = XMLRPC_MEMBLOCK_CONTENTS(char, resp_xml);
> +    http_resp.content_length = XMLRPC_MEMBLOCK_SIZE(char, resp_xml);
> +
> +    TRACE("sending rpc response");
> +    ret = send_http_response(fd,&http_resp);
> +    if (ret != 0) {
> +        LOG("failed to send rpc response");
> +        return -1;
> +    }
> +    TRACE("done sending rpc response");
> +
> +    return 0;
> +}
> +
> +/* read xmlrpc payload from http request */
> +int va_get_rpc_request(int fd, char **req_xml, int *req_len)
> +{
> +    int ret;
> +    va_http http_req;
> +
> +    TRACE("getting rpc request");
> +    ret = get_http(fd,&http_req);
> +    if (ret != 0) {
> +        LOG("failed to get RPC request");
> +        return -1;
> +    }
> +
> +    *req_xml = http_req.content;
> +    *req_len = http_req.content_length;
> +    TRACE("done getting rpc request");
> +
> +    return 0;
> +}
> +
> +/* send an RPC request and retrieve the response */
> +int va_transport_rpc_call(int fd, RPCRequest *rpc_data)
> +{
> +    int ret;
> +    struct va_http http_req;
> +    HttpReadState *read_state;
> +
> +    TRACE("called");
> +    http_req.content = XMLRPC_MEMBLOCK_CONTENTS(char, rpc_data->req_xml);
> +    http_req.content_length = XMLRPC_MEMBLOCK_SIZE(char, rpc_data->req_xml);
> +
> +    /* TODO: this should be done asynchronously */
> +    TRACE("sending rpc request");
> +    ret = send_http_request(fd,&http_req);
> +    if (ret != 0) {
> +        LOG("failed to send rpc request");
> +        return -1;
> +    }
> +    TRACE("done sending rpc request");
> +
> +    TRACE("setting up rpc response handler");
> +    read_state = qemu_mallocz(sizeof(HttpReadState));
> +    read_state->fd = fd;
> +    read_state->rpc_data = rpc_data;
> +    read_state->state = IN_RESP_HEADER;
> +    vp_set_fd_handler(fd, http_read_handler, NULL, read_state);
> +
> +    return 0;
> +}
> diff --git a/virtagent-common.h b/virtagent-common.h
> new file mode 100644
> index 0000000..0cedeb4
> --- /dev/null
> +++ b/virtagent-common.h
> @@ -0,0 +1,63 @@
> +/*
> + * virt-agent - host/guest RPC client functions
> + *
> + * Copyright IBM Corp. 2010
> + *
> + * Authors:
> + *  Adam Litke<aglitke@linux.vnet.ibm.com>
> + *  Michael Roth<mdroth@linux.vnet.ibm.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +#ifndef VIRTAGENT_COMMON_H
> +#define VIRTAGENT_COMMON_H
> +
> +#include<stdlib.h>
> +#include<stdio.h>
> +#include<xmlrpc-c/base.h>
> +#include<xmlrpc-c/client.h>
> +#include<xmlrpc-c/server.h>
> +#include "qemu-common.h"
> +#include "monitor.h"
> +#include "virtproxy.h"
> +
> +#define DEBUG_VA
> +
> +#ifdef DEBUG_VA
> +#define TRACE(msg, ...) do { \
> +    fprintf(stderr, "%s:%s():L%d: " msg "\n", \
> +            __FILE__, __FUNCTION__, __LINE__, ## __VA_ARGS__); \
> +} while(0)
> +#else
> +#define TRACE(msg, ...) \
> +    do { } while (0)
> +#endif
> +
> +#define LOG(msg, ...) do { \
> +    fprintf(stderr, "%s:%s(): " msg "\n", \
> +            __FILE__, __FUNCTION__, ## __VA_ARGS__); \
> +} while(0)
> +
> +#define TADDR "127.0.0.1:8080"
> +#define URL "http://localhost:8080/RPC2"
> +#define NAME "QEMU virt-agent host client"
> +#define VERSION "1.0"
> +#define EOL "\r\n"
> +
> +typedef void (RPCRequestCallback)(void *rpc_data);
> +typedef struct RPCRequest {
> +    RPCRequestCallback *cb;
> +    xmlrpc_mem_block *req_xml;
> +    char *resp_xml;
> +    int resp_xml_len;
> +    Monitor *mon;
> +    MonitorCompletion *mon_cb;
> +    void *mon_data;
> +} RPCRequest;
> +
> +int va_send_rpc_response(int fd, const xmlrpc_mem_block *resp_xml);
> +int va_get_rpc_request(int fd, char **req_xml, int *req_len);
> +int va_transport_rpc_call(int fd, RPCRequest *rpc_data);
> +#endif /* VIRTAGENT_COMMON_H */
>    

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC
  2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC Michael Roth
@ 2010-10-25 21:42   ` Anthony Liguori
  0 siblings, 0 replies; 24+ messages in thread
From: Anthony Liguori @ 2010-10-25 21:42 UTC (permalink / raw)
  To: Michael Roth; +Cc: aliguori, ryanh, agl, qemu-devel, abeekhof

On 10/22/2010 01:46 PM, Michael Roth wrote:
> Add RPC to view guest dmesg output.
>
> Signed-off-by: Michael Roth<mdroth@linux.vnet.ibm.com>
> ---
>   virtagent-daemon.c |   45 +++++++++++++++++++++++++++++++++++++++++++++
>   virtagent-daemon.h |    1 +
>   2 files changed, 46 insertions(+), 0 deletions(-)
>
> diff --git a/virtagent-daemon.c b/virtagent-daemon.c
> index fad0f64..6aaa987 100644
> --- a/virtagent-daemon.c
> +++ b/virtagent-daemon.c
> @@ -72,6 +72,48 @@ EXIT_CLOSE_BAD:
>       return result;
>   }
>
> +/* getdmesg(): return dmesg output
> + * rpc return values:
> + *   - dmesg output as a string
> + */
> +static xmlrpc_value *getdmesg(xmlrpc_env *env,
> +                              xmlrpc_value *param,
> +                              void *user_data)
> +{
> +    char *dmesg_buf = NULL, cmd[256];
> +    char c;
> +    int pos = 0;
> +    xmlrpc_value *result = NULL;
> +    FILE *pipe;
> +
> +    dmesg_buf = qemu_mallocz(VA_DMESG_LEN + 2048);
> +    sprintf(cmd, "dmesg -s %d", VA_DMESG_LEN);
> +
> +    //pipe = popen(cmd, "r");
> +    pipe = popen("dmesg -s 16000", "r");
> +    if (pipe == NULL) {
> +        LOG("popen failed: %s", strerror(errno));
> +        xmlrpc_faultf(env, "popen failed: %s", strerror(errno));
> +        goto EXIT_NOCLOSE;
> +    }
> +
> +    while ((c = fgetc(pipe)) != EOF&&  pos<  VA_DMESG_LEN) {
> +        dmesg_buf[pos] = c;
> +        pos++;
> +    }
>    

fgetc seems a bit odd here.  Why not fread?
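
i.e. roughly (untested sketch):

    size_t len;

    len = fread(dmesg_buf, 1, VA_DMESG_LEN, pipe);
    dmesg_buf[len] = '\0';  /* buffer is VA_DMESG_LEN + 2048, so this is safe */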

Regards,

Anthony Liguori

> +    dmesg_buf[pos++] = '\0';
> +    TRACE("dmesg:\n%s", dmesg_buf);
> +
> +    result = xmlrpc_build_value(env, "s", dmesg_buf);
> +    pclose(pipe);
> +EXIT_NOCLOSE:
> +    if (dmesg_buf) {
> +        qemu_free(dmesg_buf);
> +    }
> +
> +    return result;
> +}
> +
>   static int va_accept(int listen_fd) {
>       struct sockaddr_in saddr;
>       struct sockaddr *addr;
> @@ -101,6 +143,8 @@ typedef struct RPCFunction {
>   static RPCFunction guest_functions[] = {
>       { .func = getfile,
>         .func_name = "getfile" },
> +    { .func = getdmesg,
> +      .func_name = "getdmesg" },
>       { NULL, NULL }
>   };
>   static RPCFunction host_functions[] = {
> @@ -165,6 +209,7 @@ int va_server_loop(int listen_fd, bool is_host)
>           XMLRPC_MEMBLOCK_FREE(char, rpc_response);
>   out:
>           closesocket(fd);
> +        xmlrpc_env_clean(&env);
>       }
>
>       return 0;
> diff --git a/virtagent-daemon.h b/virtagent-daemon.h
> index bb197d0..adcbc9a 100644
> --- a/virtagent-daemon.h
> +++ b/virtagent-daemon.h
> @@ -16,5 +16,6 @@
>   #define HOST_AGENT_PATH "/tmp/virtagent-host.sock"
>   #define VA_GETFILE_MAX 1<<  30
>   #define VA_FILEBUF_LEN 16384
> +#define VA_DMESG_LEN 16384
>
>   int va_server_loop(int listen_fd, bool is_host);
>    

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs
  2010-10-25 21:39   ` Anthony Liguori
@ 2010-10-25 21:54     ` malc
  2010-10-25 22:02       ` Anthony Liguori
  0 siblings, 1 reply; 24+ messages in thread
From: malc @ 2010-10-25 21:54 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: abeekhof, ryanh, agl, Michael Roth, qemu-devel

On Mon, 25 Oct 2010, Anthony Liguori wrote:

> On 10/22/2010 01:45 PM, Michael Roth wrote:
> > Common code for sending/recieving RPCs via http over virtproxy channel.
> > Eventually these will all be switched to asynchronous handlers to avoid
> > deadlocks between qemu and the guest. For now we can usually get away with
> > just doing asynchronous reads for http/RPC responses if we don't send
> > large RPC requests to the guest (since these will likely get buffered
> > rather than block or cause spinning on writes).
[..snip..]

> 
> expressions need spaces.  IOW, i=0; i<len; ++i should be i = 0; i < len; i++.
> 
> Avoid C++ isms like ++i.

Why? 

[..snip..]

-- 
mailto:av1474@comtv.ru


* Re: [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs
  2010-10-25 21:54     ` malc
@ 2010-10-25 22:02       ` Anthony Liguori
  2010-10-25 22:32         ` malc
  0 siblings, 1 reply; 24+ messages in thread
From: Anthony Liguori @ 2010-10-25 22:02 UTC (permalink / raw)
  To: malc; +Cc: abeekhof, ryanh, agl, Michael Roth, qemu-devel

On 10/25/2010 04:54 PM, malc wrote:
> On Mon, 25 Oct 2010, Anthony Liguori wrote:
>
>    
>> On 10/22/2010 01:45 PM, Michael Roth wrote:
>>      
>>> Common code for sending/receiving RPCs via http over virtproxy channel.
>>> Eventually these will all be switched to asynchronous handlers to avoid
>>> deadlocks between qemu and the guest. For now we can usually get away with
>>> just doing asynchronous reads for http/RPC responses if we don't send
>>> large RPC requests to the guest (since these will likely get buffered
>>> rather than block or cause spinning on writes).
>>>        
> [..snip..]
>
>    
>> expressions need spaces.  IOW, i=0; i<len; ++i should be i = 0; i < len; i++.
>>
>> Avoid C++ isms like ++i.
>>      
> Why?
>    

Because it creates arbitrary inconsistency.

Regards,

Anthony Liguori

> [..snip..]
>
>    


* Re: [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs
  2010-10-25 22:02       ` Anthony Liguori
@ 2010-10-25 22:32         ` malc
  0 siblings, 0 replies; 24+ messages in thread
From: malc @ 2010-10-25 22:32 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: abeekhof, ryanh, agl, Michael Roth, qemu-devel

On Mon, 25 Oct 2010, Anthony Liguori wrote:

> On 10/25/2010 04:54 PM, malc wrote:
> > On Mon, 25 Oct 2010, Anthony Liguori wrote:
> > 
> >    
> > > On 10/22/2010 01:45 PM, Michael Roth wrote:
> > >      
> > > > Common code for sending/receiving RPCs via http over virtproxy channel.
> > > > Eventually these will all be switched to asynchronous handlers to avoid
> > > > deadlocks between qemu and the guest. For now we can usually get away
> > > > with
> > > > just doing asynchronous reads for http/RPC responses if we don't send
> > > > large RPC requests to the guest (since these will likely get buffered
> > > > rather than block or cause spinning on writes).
> > > >        
> > [..snip..]
> > 
> >    
> > > expressions need spaces.  IOW, i=0; i<len; ++i should be i = 0; i < len;
> > > i++.
> > > 
> > > Avoid C++ isms like ++i.
> > >      
> > Why?
> >    
> 
> Because it creates arbitrary inconsistency.
> 

That's rubbish.

-- 
mailto:av1474@comtv.ru


* [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-25 17:06   ` Michael Roth
@ 2010-10-26  7:14     ` Andrew Beekhof
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Beekhof @ 2010-10-26  7:14 UTC (permalink / raw)
  To: Michael Roth; +Cc: aliguori, Perry Myers, ryanh, agl, qemu-devel

On 10/25/2010 07:06 PM, Michael Roth wrote:
> On 10/25/2010 05:30 AM, Andrew Beekhof wrote:
>> On 10/22/2010 08:45 PM, Michael Roth wrote:
>>> This set of patches is meant to be applied on top of the Virtproxy v1
>>> patchset.
>>>
>>> OVERVIEW:
>>>
>>> There are a wide range of use cases motivating the need for a guest
>>> agent of some sort to extend the functionality/usability/control
>>> offered by QEMU.
>>> Some examples include graceful guest shutdown/reboot and notifications
>>> thereof, copy/paste syncing between host/guest, guest statistics
>>> gathering, file access, etc.
>>>
>>> Ideally these would all be served by a single, easily extensible
>>> agent that can be deployed in a wide range of guests.
>>> Virtagent is an XMLRPC server integrated into the Virtproxy guest
>>> daemon and aimed at providing this type of functionality.
>>>
>>> This code is very rough, and I'll try to document most of the
>>> bugs/shortcomings we're aware of in this version of the patchset.
>>> The main goal of this RFC is to get feedback on the types of core
>>> functionality we would need in an agent of this sort, as well as
>>> feedback on the general approach/architecture implemented here.
>>> Any feedback is greatly appreciated however.
>>>
>>> To start off this discussion, there have been some recent posts about
>>> how much an agent of this sort overlaps with the goals of the Matahari
>>> project (https://fedorahosted.org/matahari/).
>>> While both of these approaches are at least *feasible*, our use cases
>>> require the ability to deploy to guests which may not support
>>> virtio-serial, which currently rules Matahari out.
>>> This support could be added however: the virtproxy layer used by this
>>> agent actually lends itself to extending such support to other
>>> agents/services, or a more direct approach could be taken in adding
>>> support for isa-serial.
>>>
>>> The question that remains however is one of scope.
>>> This agent is intended purely as a means to extend qemu's abilities to
>>> perform hypervisor-specific work,
>>
>> "shutdown/reboot", "statistics", "file gathering"... none of those sound
>> very "hypervisor-specific" to me ;-)
>>
>
> QEMU-specific is probably a better word, in the sense that the API (and
> the code dependencies) terminates at the QEMU level.

The concepts, though, apply to all flavours of virtualization and even 
bare-metal machines.

Implementation details aside, they're the first functions any 
agent/collection of agents is going to implement - which is why I'm 
having a hard time with the *-specific labels.

> Though I think one
> can make the argument that shutdown/reboot are hypervisor-related,
> especially given that there are situations where we currently cannot
> induce a guest reboot from qemu. Also, file access is actually a horrible
> use case for RPC. It does, however, provide a pretty powerful primitive
> for exposing data/statistics made available in /proc or /sys, which do
> extend a hypervisor's ability to make better decisions about things like
> memory ballooning.
>
> But those are just some examples of the functionality that could be
> implemented by the agent, and the RPCs that make sense to implement are
> TBD based on input here and elsewhere.
>
>>> whereas Matahari aims to extend general system management capabilities
>>> to guests (please correct me if I'm oversimplifying).
>>
>> As I replied elsewhere, Matahari is both an architecture and a
>> collection of independent but commonly useful agents.
>>
>> So while there will be a bunch of other agents doing a bunch of things
>> you don't care about, you don't have to care that they exist either :-)
>>
>> A hypothetical QEMU agent would be an independent entity, with both the
>> daemon and source code completely isolated from any other agents.
>>
>> It doesn't even need to live in the Matahari project.
>>
>  >> Virtagent cannot meet Matahari's goals, whereas Matahari technically
>  >> can meet Virtagent's.
>  >> My contention however is that the qemu-specific scope/API and shared
>  >> code base with a more closely integrated agent will provide a more
>  >> expedient route to functional improvements to qemu,
>  >
>  > See above. Would leveraging the Matahari architecture but keeping the
>  > agent in the QEMU project address this concern?
>  >
>
> To some extent. But what is gained here, though, except the bus?

I guess I was thinking that if you have the bus, why not make use of the 
existing standardised agents and focus the QEMU one on the parts that 
weren't already provided.

> If we
> aren't using the functions provided by the common agents we still need
> to re-implement them,  and if we do use them then we are dependent on
> their implementations as well as their deployment overhead. And even if
> we do choose to ignore the existing agents, we inherit the guest and
> host-side dependencies of Matahari nonetheless. Maybe these dependencies
> are fairly minimal, but it makes it that much more difficult to support
> other/older guests.

Matahari is purely guest side; the host is currently a standard qpid 
server.  For usage on a single host it's possible we may drop that as a 
hard requirement and support dbus as the host-side aggregator.

On the guest side, each agent may well end up being its own subpackage.
So the dependency list would depend on what functionality the user 
wanted (plus qpid of course).

>
> Beyond deployment, there's also just the basic end-user experience we
> need to consider. In the context of general systems management it makes
> total sense that you'd install Matahari and then whatever agents you
> needed in your guests/hosts. In the context of qemu it doesn't. VMware
> has a vmware package that interacts with VMware; VirtualBox has a
> virtualbox utilities package that interacts with VirtualBox.

That's kinda the point: everyone is reinventing the wheel here.
And not just virt projects; off the top of my head there are a couple 
for clustering; one for SAP, Puppet, MRG; the FMCI team is working on a 
bunch...

We're working with all those groups (and others) to try and get some 
consolidation.  Mostly the reaction so far has been positive but if you 
want to do your own thing, I can live with that.

What would be nice though is if the guest-side functions that implement 
the RPC calls lived in a library.  That way, like we do with netcf and 
libvirt, we could write glue for plugging the QEMU-specific ones into 
Matahari.
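
As a very rough sketch of that idea (every name below is invented for
illustration; nothing here is taken from the posted patches), the
guest-side work could sit behind a plain C helper that virtagent's
XML-RPC handlers, or a hypothetical Matahari glue layer, would only
have to wrap for their respective transports. Reading a /proc file, as
discussed above for statistics gathering, is a convenient example:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical guest-side helper: read at most max_len bytes of 'path'
 * into a freshly allocated, NUL-terminated buffer.  Returns NULL on
 * error; the caller frees the result. */
static char *va_guest_read_file(const char *path, size_t max_len)
{
    FILE *fp = fopen(path, "r");
    char *buf;
    size_t n;

    if (!fp) {
        return NULL;
    }
    buf = calloc(1, max_len + 1);
    if (!buf) {
        fclose(fp);
        return NULL;
    }
    n = fread(buf, 1, max_len, fp);
    buf[n] = '\0';
    fclose(fp);
    return buf;
}

int main(void)
{
    /* e.g. expose memory statistics to the host for ballooning decisions */
    char *meminfo = va_guest_read_file("/proc/meminfo", 16384);

    if (meminfo) {
        fputs(meminfo, stdout);
        free(meminfo);
    }
    return 0;
}

A getfile-style handler would then just marshal the returned buffer into
its RPC response, while a bus-based agent could publish the same data;
neither side needs to know how the other transports it.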


* Re: [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent
  2010-10-25 21:28   ` Anthony Liguori
@ 2010-10-26  7:27     ` Andrew Beekhof
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Beekhof @ 2010-10-26  7:27 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Perry Myers, ryanh, agl, Michael Roth, qemu-devel

On 10/25/2010 11:28 PM, Anthony Liguori wrote:
> On 10/25/2010 05:30 AM, Andrew Beekhof wrote:
>> On 10/22/2010 08:45 PM, Michael Roth wrote:
>>> This set of patches is meant to be applied on top of the Virtproxy v1
>>> patchset.
>>>
>>> OVERVIEW:
>>>
>>> There are a wide range of use cases motivating the need for a guest
>>> agent of some sort to extend the functionality/usability/control
>>> offered by QEMU.
>>> Some examples include graceful guest shutdown/reboot and
>>> notifications thereof, copy/paste syncing between host/guest, guest
>>> statistics gathering, file access, etc.
>>>
>>> Ideally these would all be served by a single, easily extensible
>>> agent that can be deployed in a wide range of guests.
>>> Virtagent is an XMLRPC server integrated into the Virtproxy guest
>>> daemon and aimed at providing this type of functionality.
>>>
>>> This code is very rough, and I'll try to document most of the
>>> bugs/shortcomings we're aware of in this version of the patchset.
>>> The main goal of this RFC is to get feedback on the types of core
>>> functionality we would need in an agent of this sort, as well as
>>> feedback on the general approach/architecture implemented here.
>>> Any feedback is greatly appreciated however.
>>>
>>> To start off this discussion, there have been some recent posts about
>>> how much an agent of this sort overlaps with the goals of the
>>> Matahari project (https://fedorahosted.org/matahari/).
>>> While both of these approaches are at least *feasible*, our use cases
>>> require the ability to deploy to guests which may not support
>>> virtio-serial, which currently rules Matahari out.
>>> This support could be added however: the virtproxy layer used by this
>>> agent actually lends itself to extending such support to other
>>> agents/services, or a more direct approach could be taken in adding
>>> support for isa-serial.
>>>
>>> The question that remains however is one of scope.
>>> This agent is intended purely as a means to extend qemu's abilities
>>> to perform hypervisor-specific work,
>>
>> "shutdown/reboot", "statistics", "file gathering"... none of those
>> sound very "hypervisor-specific" to me ;-)
>
> A hypervisor-initiated shutdown is very different from a
> network-initiated shutdown.

Could you expand on that?

>
>>> whereas Matahari aims to extend general system management
>>> capabilities to guests (please correct me if I'm oversimplifying).
>>
>> As I replied elsewhere, Matahari is both an architecture and a
>> collection of independent but commonly useful agents.
>>
>> So while there will be a bunch of other agents doing a bunch of things
>> you don't care about, you don't have to care that they exist either :-)
>>
>> A hypothetical QEMU agent would be an independent entity, with both the
>> daemon and source code completely isolated from any other agents.
>>
>> It doesn't even need to live in the Matahari project.
>
> I've taken a deeper look at Matahari.
>
> The first thing I've noticed is that AMQP seems to be unfriendly to C.
> QPID and its friends are all implemented in C++, as is Matahari itself.
> The lack of a C client library is a deal breaker for QEMU because it
> makes it impossible to integrate into QEMU.

Well there are ways to link C programs against a C++ library.
But I agree that they're not ideal.
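
For what it's worth, the usual shape of such a shim is a C header with
extern "C" guards, backed by a small .cpp file that calls into the C++
client library. The interface below is purely illustrative (the names
and functions are invented; no such wrapper exists in these patches):

/* bus_shim.h - hypothetical C-callable wrapper around a C++ bus client */
#ifndef BUS_SHIM_H
#define BUS_SHIM_H

#ifdef __cplusplus
extern "C" {
#endif

typedef struct bus_conn bus_conn;   /* opaque handle, C++ object inside */

bus_conn *bus_shim_connect(const char *broker_url);
int bus_shim_send(bus_conn *c, const char *subject, const char *body);
void bus_shim_close(bus_conn *c);

#ifdef __cplusplus
}
#endif

#endif /* BUS_SHIM_H */

The .cpp side would implement those three functions by creating and
destroying the C++ client object behind the opaque pointer, so QEMU
itself would only ever see plain C symbols.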

>
> The second thing that I've observed is that AMQP is less oriented toward
> point-to-point communication than, say, XML-RPC but rather focuses on a
> pub/sub model. This is not a bad thing, but I wonder if there are any
> real cases where it makes sense as a guest agent.

Oh it absolutely makes sense once you have more than a single host.
And even more so if you're in an environment where you don't have access 
to the physical host(s) at all, such as in the cloud.

> It seems like a high
> complexity cost without a lot of return.
>
>>> Virtagent cannot meet Matahari's goals, whereas Matahari technically
>>> can meet Virtagent's.
>>> My contention however is that the qemu-specific scope/API and shared
>>> code base with a more closely integrated agent will provide a more
>>> expedient route to functional improvements to qemu,
>>
>> See above. Would leveraging the Matahari architecture but keeping the
>> agent in the QEMU project address this concern?
>
> Biggest is going to be the fact that it's not C-friendly.
>
>>> while still allowing for the additional functionality/management
>>> capabilities provided by something like Matahari.
>>>
>>> DESIGN:
>>>
>>> There are actually 2 RPC servers:
>>>
>>> 1) a server in the guest integrated into the Virtproxy guest daemon
>>> which handles RPC requests from QEMU
>>
>> Question: Is the scope here purely between a host and its guest? Or is
>> it intended that one could access the guest daemon from other
>> hosts/guests?
>
> Just host and guest is the intended scope.
>
> Regards,
>
> Anthony Liguori


end of thread, other threads:[~2010-10-26  7:27 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
2010-10-22 18:45 [Qemu-devel] [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Michael Roth
2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 01/10] virtagent: add common rpc transport defs Michael Roth
2010-10-25 21:39   ` Anthony Liguori
2010-10-25 21:54     ` malc
2010-10-25 22:02       ` Anthony Liguori
2010-10-25 22:32         ` malc
2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 02/10] virtagent: base definitions for host/guest RPC daemon Michael Roth
2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 03/10] virtagent: qemu-vp, integrate virtagent daemon Michael Roth
2010-10-22 18:45 ` [Qemu-devel] [RFC][PATCH 04/10] virtagent: base RPC client definitions Michael Roth
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 05/10] virtagent: add getfile RPC Michael Roth
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 06/10] virtagent: add agent_viewfile command Michael Roth
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 07/10] virtagent: add getdmesg RPC Michael Roth
2010-10-25 21:42   ` Anthony Liguori
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 08/10] virtagent: add agent_viewdmesg command Michael Roth
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 09/10] virtagent: Makefile/configure changes to build virtagent bits Michael Roth
2010-10-22 18:46 ` [Qemu-devel] [RFC][PATCH 10/10] virtproxy: add compat defs for linking against vl.c Michael Roth
2010-10-23 11:31 ` [Qemu-devel] Re: [RFC][PATCH 00/10] virtagent: host/guest RPC communication agent Andrew Beekhof
2010-10-23 14:41   ` Michael Roth
2010-10-25  9:46     ` Andrew Beekhof
2010-10-25 10:30 ` Andrew Beekhof
2010-10-25 17:06   ` Michael Roth
2010-10-26  7:14     ` Andrew Beekhof
2010-10-25 21:28   ` Anthony Liguori
2010-10-26  7:27     ` Andrew Beekhof
