* [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v5)
@ 2016-09-04 14:38 Namhyung Kim
  2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
                   ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-09-04 14:38 UTC (permalink / raw)
  To: virtio-dev
  Cc: LKML, qemu-devel, kvm, virtualization, Paolo Bonzini,
	Radim Krčmář,
	Michael S. Tsirkin, Anthony Liguori, Anton Vorontsov,
	Colin Cross, Kees Cook, Tony Luck, Steven Rostedt, Ingo Molnar,
	Minchan Kim, Will Deacon

Hello,

This is another iteration of the virtio-pstore work.  I've addressed
comments from Michael S. Tsirkin on the kernel code.

 * changes in v5)
  - convert __virtioXX to __leXX  (Michael)

 * changes in v4)
  - use qio_channel_file_new_path()  (Daniel)
  - rename to delete_old_pstore_file  (Daniel)
  - convert G_REMOVE_SOURCE to FALSE  (Daniel)

 * changes in v3)
  - use QIOChannel API  (Stefan, Daniel)
  - add bounds check for malicious guests  (Daniel)
  - drop support for PSTORE_TYPE_CONSOLE for now
  - update license to allow GPL v2 or later  (Michael)
  - limit number of pstore files on qemu

 * changes in v2)
  - update VIRTIO_ID_PSTORE to 22  (Cornelia, Stefan)
  - make buffer size configurable  (Cornelia)
  - support PSTORE_TYPE_CONSOLE  (Kees)
  - use separate virtqueues for read and write
  - support concurrent async write
  - manage pstore (file) id in device side
  - fix various mistakes in qemu device  (Stefan)

This work started from the fact that dumping the ftrace buffer at
kernel oops/panic takes too much time.  Although there are ways to
reduce the size of the original data, sometimes I want to keep as
much information as possible.  kexec/kdump could solve this problem,
but it consumes a portion of guest memory, so I'd like to avoid it.
The qemu + crash tool can dump and analyze the whole guest memory,
including the ftrace buffer, without wasting guest memory, but it
adds one more layer and, as an out-of-tree tool, has limitations
such as not being in sync with kernel changes.

So I think it'd be great to use the pstore interface to dump guest
kernel data on the host.  One can read the data directly on the host,
or on the guest (at the next boot) using the pstore filesystem as
usual.  While this patchset only implements dumping of the kernel log
buffer, it can be extended to cover the ftrace buffer and probably
more.
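
On the guest side, the saved records show up through the pstore
filesystem at the next boot.  A minimal check could look like the
following; note that the dmesg-virtio-N names are an assumption here,
based on the usual "<type>-<backend>-<id>" naming of pstore files:

  (guest, after reboot)
  # mount -t pstore pstore /sys/fs/pstore
  # ls /sys/fs/pstore
  dmesg-virtio-1  dmesg-virtio-2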

Patch 0001 implements the virtio pstore driver.  It has two
virtqueues for (sync) read and (async) write, a pstore buffer, and
I/O request and response structures.  The virtio_pstore_req struct
carries information about the current pstore operation, and the
result is written to the virtio_pstore_res struct.  For read
operations it also uses the virtio_pstore_fileinfo struct.
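
For reference, each synchronous command is queued as a request buffer
paired with a response buffer on the read virtqueue, and the driver
then waits for the host's ack.  Stripped of error handling, the flow
is roughly as follows (condensed from virt_pstore_open() in patch
0001):

  struct scatterlist sgo[1], sgi[1];
  struct scatterlist *sgs[2] = { sgo, sgi };

  req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_OPEN);

  sg_init_one(sgo, req, sizeof(*req));   /* guest -> host: request */
  sg_init_one(sgi, res, sizeof(*res));   /* host -> guest: response */
  virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
  virtqueue_kick(vps->vq[0]);

  wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
  return le32_to_cpu(res->ret);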

Patches 0002 and 0003 implement a virtio-pstore legacy PCI device in
qemu-kvm and kvmtool respectively.  I referenced the virtio-balloon
and virtio-rng implementations, and I don't know whether kvmtool
supports the modern virtio 1.0+ spec.  Other transports might be
supported later.

For example, using virtio-pstore on qemu looks like the following:

  $ qemu-system-x86_64 -enable-kvm -device virtio-pstore,directory=xxx

When the guest kernel panics, the log messages are saved under the
xxx directory.
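
For testing, a crash can be triggered from inside the guest with
sysrq, as in the patch 2/3 examples below:

  (guest) # echo c > /proc/sysrq-trigger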

  $ ls xxx
  dmesg-1.enc.z  dmesg-2.enc.z

As you can see, the pstore subsystem compresses the log data using
zlib (lzo and lz4 are also supported now).  The data can be extracted
with the following command:

  $ cat xxx/dmesg-1.enc.z | \
  > python -c 'import sys, zlib; print(zlib.decompress(sys.stdin.read()))'
  Oops#1 Part1
  <5>[    0.000000] Linux version 4.6.0kvm+ (namhyung@danjae) (gcc version 5.3.0 (GCC) ) #145 SMP Mon Jul 18 10:22:45 KST 2016
  <6>[    0.000000] Command line: root=/dev/vda console=ttyS0
  <6>[    0.000000] x86/fpu: Legacy x87 FPU detected.
  <6>[    0.000000] x86/fpu: Using 'eager' FPU context switches.
  <6>[    0.000000] e820: BIOS-provided physical RAM map:
  <6>[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
  <6>[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
  <6>[    0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
  <6>[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x0000000007fddfff] usable
  <6>[    0.000000] BIOS-e820: [mem 0x0000000007fde000-0x0000000007ffffff] reserved
  <6>[    0.000000] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
  <6>[    0.000000] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
  <6>[    0.000000] NX (Execute Disable) protection: active
  <6>[    0.000000] SMBIOS 2.8 present.
  <7>[    0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.3-0-ge2fc41e-prebuilt.qemu-project.org 04/01/2014
  ...
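
Note that the one-liner above assumes Python 2; with Python 3, stdin
has to be read as bytes, for example:

  $ python3 -c 'import sys, zlib; sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))' < xxx/dmesg-1.enc.z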


Namhyung Kim (3):
  virtio: Basic implementation of virtio pstore driver
  qemu: Implement virtio-pstore device
  kvmtool: Implement virtio-pstore device


 drivers/virtio/Kconfig             |  10 +
 drivers/virtio/Makefile            |   1 +
 drivers/virtio/virtio_pstore.c     | 417 +++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild          |   1 +
 include/uapi/linux/virtio_ids.h    |   1 +
 include/uapi/linux/virtio_pstore.h |  74 +++++++
 6 files changed, 504 insertions(+)
 create mode 100644 drivers/virtio/virtio_pstore.c
 create mode 100644 include/uapi/linux/virtio_pstore.h


Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kvm@vger.kernel.org
Cc: qemu-devel@nongnu.org
Cc: virtualization@lists.linux-foundation.org
Cc: virtio-dev@lists.oasis-open.org

Thanks,
Namhyung


-- 
2.9.3


* [PATCH 1/3] virtio: Basic implementation of virtio pstore driver
  2016-09-04 14:38 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v5) Namhyung Kim
@ 2016-09-04 14:38 ` Namhyung Kim
  2016-09-08 20:49   ` Kees Cook
  2016-09-22 11:57   ` Stefan Hajnoczi
  2016-09-04 14:38 ` [PATCH 2/3] qemu: Implement virtio-pstore device Namhyung Kim
  2016-09-04 14:39 ` [PATCH 3/3] kvmtool: " Namhyung Kim
  2 siblings, 2 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-09-04 14:38 UTC (permalink / raw)
  To: virtio-dev
  Cc: LKML, qemu-devel, kvm, virtualization, Paolo Bonzini,
	Radim Krčmář,
	Michael S. Tsirkin, Anthony Liguori, Anton Vorontsov,
	Colin Cross, Kees Cook, Tony Luck, Steven Rostedt, Ingo Molnar,
	Minchan Kim, Will Deacon

The virtio pstore driver provides an interface to the pstore subsystem
so that the guest kernel's log/dump messages can be saved on the host
machine.  Users can access the log files directly on the host, or on
the guest at the next boot using the pstore filesystem.  It currently
deals with the kernel log (printk) buffer only, but it can be extended
to carry other information (like ftrace dumps) later.

It supports a legacy PCI device using a 16K buffer by default, and the
size is configurable.  It uses two virtqueues - one for (sync) read and
another for (async) write.  Since it cannot wait for writes to finish,
it supports up to 128 concurrent I/O requests.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: virtio-dev@lists.oasis-open.org
Cc: kvm@vger.kernel.org
Cc: qemu-devel@nongnu.org
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 drivers/virtio/Kconfig             |  10 +
 drivers/virtio/Makefile            |   1 +
 drivers/virtio/virtio_pstore.c     | 417 +++++++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild          |   1 +
 include/uapi/linux/virtio_ids.h    |   1 +
 include/uapi/linux/virtio_pstore.h |  74 +++++++
 6 files changed, 504 insertions(+)
 create mode 100644 drivers/virtio/virtio_pstore.c
 create mode 100644 include/uapi/linux/virtio_pstore.h

diff --git a/drivers/virtio/Kconfig b/drivers/virtio/Kconfig
index 77590320d44c..8f0e6c796c12 100644
--- a/drivers/virtio/Kconfig
+++ b/drivers/virtio/Kconfig
@@ -58,6 +58,16 @@ config VIRTIO_INPUT
 
 	 If unsure, say M.
 
+config VIRTIO_PSTORE
+	tristate "Virtio pstore driver"
+	depends on VIRTIO
+	depends on PSTORE
+	---help---
+	 This driver supports virtio pstore devices to save/restore
+	 panic and oops messages on the host.
+
+	 If unsure, say M.
+
  config VIRTIO_MMIO
 	tristate "Platform bus driver for memory mapped virtio devices"
 	depends on HAS_IOMEM && HAS_DMA
diff --git a/drivers/virtio/Makefile b/drivers/virtio/Makefile
index 41e30e3dc842..bee68cb26d48 100644
--- a/drivers/virtio/Makefile
+++ b/drivers/virtio/Makefile
@@ -5,3 +5,4 @@ virtio_pci-y := virtio_pci_modern.o virtio_pci_common.o
 virtio_pci-$(CONFIG_VIRTIO_PCI_LEGACY) += virtio_pci_legacy.o
 obj-$(CONFIG_VIRTIO_BALLOON) += virtio_balloon.o
 obj-$(CONFIG_VIRTIO_INPUT) += virtio_input.o
+obj-$(CONFIG_VIRTIO_PSTORE) += virtio_pstore.o
diff --git a/drivers/virtio/virtio_pstore.c b/drivers/virtio/virtio_pstore.c
new file mode 100644
index 000000000000..c8fd8e39d1b8
--- /dev/null
+++ b/drivers/virtio/virtio_pstore.c
@@ -0,0 +1,417 @@
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pstore.h>
+#include <linux/virtio.h>
+#include <linux/virtio_config.h>
+#include <uapi/linux/virtio_ids.h>
+#include <uapi/linux/virtio_pstore.h>
+
+#define VIRT_PSTORE_ORDER    2
+#define VIRT_PSTORE_BUFSIZE  (4096 << VIRT_PSTORE_ORDER)
+#define VIRT_PSTORE_NR_REQ   128
+
+struct virtio_pstore {
+	struct virtio_device	*vdev;
+	struct virtqueue	*vq[2];
+	struct pstore_info	 pstore;
+	struct virtio_pstore_req req[VIRT_PSTORE_NR_REQ];
+	struct virtio_pstore_res res[VIRT_PSTORE_NR_REQ];
+	unsigned int		 req_id;
+
+	/* Waiting for host to ack */
+	wait_queue_head_t	acked;
+	int			failed;
+};
+
+#define TYPE_TABLE_ENTRY(_entry)				\
+	{ PSTORE_TYPE_##_entry, VIRTIO_PSTORE_TYPE_##_entry }
+
+struct type_table {
+	int pstore;
+	u16 virtio;
+} type_table[] = {
+	TYPE_TABLE_ENTRY(DMESG),
+};
+
+#undef TYPE_TABLE_ENTRY
+
+
+static __le16 to_virtio_type(enum pstore_type_id type)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(type_table); i++) {
+		if (type == type_table[i].pstore)
+			return cpu_to_le16(type_table[i].virtio);
+	}
+
+	return cpu_to_le16(VIRTIO_PSTORE_TYPE_UNKNOWN);
+}
+
+static enum pstore_type_id from_virtio_type(__le16 type)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(type_table); i++) {
+		if (le16_to_cpu(type) == type_table[i].virtio)
+			return type_table[i].pstore;
+	}
+
+	return PSTORE_TYPE_UNKNOWN;
+}
+
+static void virtpstore_ack(struct virtqueue *vq)
+{
+	struct virtio_pstore *vps = vq->vdev->priv;
+
+	wake_up(&vps->acked);
+}
+
+static void virtpstore_check(struct virtqueue *vq)
+{
+	struct virtio_pstore *vps = vq->vdev->priv;
+	struct virtio_pstore_res *res;
+	unsigned int len;
+
+	res = virtqueue_get_buf(vq, &len);
+	if (res == NULL)
+		return;
+
+	if (le32_to_cpu(res->ret) < 0)
+		vps->failed = 1;
+}
+
+static void virt_pstore_get_reqs(struct virtio_pstore *vps,
+				 struct virtio_pstore_req **preq,
+				 struct virtio_pstore_res **pres)
+{
+	unsigned int idx = vps->req_id++ % VIRT_PSTORE_NR_REQ;
+
+	*preq = &vps->req[idx];
+	*pres = &vps->res[idx];
+
+	memset(*preq, 0, sizeof(**preq));
+	memset(*pres, 0, sizeof(**pres));
+}
+
+static int virt_pstore_open(struct pstore_info *psi)
+{
+	struct virtio_pstore *vps = psi->data;
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct scatterlist sgo[1], sgi[1];
+	struct scatterlist *sgs[2] = { sgo, sgi };
+	unsigned int len;
+
+	virt_pstore_get_reqs(vps, &req, &res);
+
+	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_OPEN);
+
+	sg_init_one(sgo, req, sizeof(*req));
+	sg_init_one(sgi, res, sizeof(*res));
+	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
+	virtqueue_kick(vps->vq[0]);
+
+	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
+	return le32_to_cpu(res->ret);
+}
+
+static int virt_pstore_close(struct pstore_info *psi)
+{
+	struct virtio_pstore *vps = psi->data;
+	struct virtio_pstore_req *req = &vps->req[vps->req_id];
+	struct virtio_pstore_res *res = &vps->res[vps->req_id];
+	struct scatterlist sgo[1], sgi[1];
+	struct scatterlist *sgs[2] = { sgo, sgi };
+	unsigned int len;
+
+	virt_pstore_get_reqs(vps, &req, &res);
+
+	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_CLOSE);
+
+	sg_init_one(sgo, req, sizeof(*req));
+	sg_init_one(sgi, res, sizeof(*res));
+	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
+	virtqueue_kick(vps->vq[0]);
+
+	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
+	return le32_to_cpu(res->ret);
+}
+
+static ssize_t virt_pstore_read(u64 *id, enum pstore_type_id *type,
+				int *count, struct timespec *time,
+				char **buf, bool *compressed,
+				ssize_t *ecc_notice_size,
+				struct pstore_info *psi)
+{
+	struct virtio_pstore *vps = psi->data;
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct virtio_pstore_fileinfo info;
+	struct scatterlist sgo[1], sgi[3];
+	struct scatterlist *sgs[2] = { sgo, sgi };
+	unsigned int len;
+	unsigned int flags;
+	int ret;
+	void *bf;
+
+	virt_pstore_get_reqs(vps, &req, &res);
+
+	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_READ);
+
+	sg_init_one(sgo, req, sizeof(*req));
+	sg_init_table(sgi, 3);
+	sg_set_buf(&sgi[0], res, sizeof(*res));
+	sg_set_buf(&sgi[1], &info, sizeof(info));
+	sg_set_buf(&sgi[2], psi->buf, psi->bufsize);
+	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
+	virtqueue_kick(vps->vq[0]);
+
+	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
+	if (len < sizeof(*res) + sizeof(info))
+		return -1;
+
+	ret = le32_to_cpu(res->ret);
+	if (ret < 0)
+		return ret;
+
+	len = le32_to_cpu(info.len);
+
+	bf = kmalloc(len, GFP_KERNEL);
+	if (bf == NULL)
+		return -ENOMEM;
+
+	*id    = le64_to_cpu(info.id);
+	*type  = from_virtio_type(info.type);
+	*count = le32_to_cpu(info.count);
+
+	flags = le32_to_cpu(info.flags);
+	*compressed = flags & VIRTIO_PSTORE_FL_COMPRESSED;
+
+	time->tv_sec  = le64_to_cpu(info.time_sec);
+	time->tv_nsec = le32_to_cpu(info.time_nsec);
+
+	memcpy(bf, psi->buf, len);
+	*buf = bf;
+
+	return len;
+}
+
+static int notrace virt_pstore_write(enum pstore_type_id type,
+				     enum kmsg_dump_reason reason,
+				     u64 *id, unsigned int part, int count,
+				     bool compressed, size_t size,
+				     struct pstore_info *psi)
+{
+	struct virtio_pstore *vps = psi->data;
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct scatterlist sgo[2], sgi[1];
+	struct scatterlist *sgs[2] = { sgo, sgi };
+	unsigned int flags = compressed ? VIRTIO_PSTORE_FL_COMPRESSED : 0;
+
+	if (vps->failed)
+		return -1;
+
+	*id = vps->req_id;
+	virt_pstore_get_reqs(vps, &req, &res);
+
+	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_WRITE);
+	req->type  = to_virtio_type(type);
+	req->flags = cpu_to_le32(flags);
+
+	sg_init_table(sgo, 2);
+	sg_set_buf(&sgo[0], req, sizeof(*req));
+	sg_set_buf(&sgo[1], psi->buf, size);
+	sg_init_one(sgi, res, sizeof(*res));
+	virtqueue_add_sgs(vps->vq[1], sgs, 1, 1, vps, GFP_ATOMIC);
+	virtqueue_kick(vps->vq[1]);
+
+	return 0;
+}
+
+static int virt_pstore_erase(enum pstore_type_id type, u64 id, int count,
+			     struct timespec time, struct pstore_info *psi)
+{
+	struct virtio_pstore *vps = psi->data;
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct scatterlist sgo[1], sgi[1];
+	struct scatterlist *sgs[2] = { sgo, sgi };
+	unsigned int len;
+
+	virt_pstore_get_reqs(vps, &req, &res);
+
+	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_ERASE);
+	req->type  = to_virtio_type(type);
+	req->id	   = cpu_to_le64(id);
+	req->count = cpu_to_le32(count);
+
+	sg_init_one(sgo, req, sizeof(*req));
+	sg_init_one(sgi, res, sizeof(*res));
+	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
+	virtqueue_kick(vps->vq[0]);
+
+	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
+	return le32_to_cpu(res->ret);
+}
+
+static int virt_pstore_init(struct virtio_pstore *vps)
+{
+	struct pstore_info *psinfo = &vps->pstore;
+	int err;
+
+	if (!psinfo->bufsize)
+		psinfo->bufsize = VIRT_PSTORE_BUFSIZE;
+
+	psinfo->buf = alloc_pages_exact(psinfo->bufsize, GFP_KERNEL);
+	if (!psinfo->buf) {
+		pr_err("cannot allocate pstore buffer\n");
+		return -ENOMEM;
+	}
+
+	psinfo->owner = THIS_MODULE;
+	psinfo->name  = "virtio";
+	psinfo->open  = virt_pstore_open;
+	psinfo->close = virt_pstore_close;
+	psinfo->read  = virt_pstore_read;
+	psinfo->erase = virt_pstore_erase;
+	psinfo->write = virt_pstore_write;
+	psinfo->flags = PSTORE_FLAGS_FRAGILE;
+
+	psinfo->data  = vps;
+	spin_lock_init(&psinfo->buf_lock);
+
+	err = pstore_register(psinfo);
+	if (err)
+		kfree(psinfo->buf);
+
+	return err;
+}
+
+static int virt_pstore_exit(struct virtio_pstore *vps)
+{
+	struct pstore_info *psinfo = &vps->pstore;
+
+	pstore_unregister(psinfo);
+
+	free_pages_exact(psinfo->buf, psinfo->bufsize);
+	psinfo->buf = NULL;
+	psinfo->bufsize = 0;
+
+	return 0;
+}
+
+static int virtpstore_init_vqs(struct virtio_pstore *vps)
+{
+	vq_callback_t *callbacks[] = { virtpstore_ack, virtpstore_check };
+	const char *names[] = { "pstore_read", "pstore_write" };
+
+	return vps->vdev->config->find_vqs(vps->vdev, 2, vps->vq,
+					   callbacks, names);
+}
+
+static void virtpstore_init_config(struct virtio_pstore *vps)
+{
+	u32 bufsize;
+
+	virtio_cread(vps->vdev, struct virtio_pstore_config, bufsize, &bufsize);
+
+	vps->pstore.bufsize = PAGE_ALIGN(bufsize);
+}
+
+static void virtpstore_confirm_config(struct virtio_pstore *vps)
+{
+	u32 bufsize = vps->pstore.bufsize;
+
+	virtio_cwrite(vps->vdev, struct virtio_pstore_config, bufsize,
+		     &bufsize);
+}
+
+static int virtpstore_probe(struct virtio_device *vdev)
+{
+	struct virtio_pstore *vps;
+	int err;
+
+	if (!vdev->config->get) {
+		dev_err(&vdev->dev, "driver init: config access disabled\n");
+		return -EINVAL;
+	}
+
+	vdev->priv = vps = kzalloc(sizeof(*vps), GFP_KERNEL);
+	if (!vps) {
+		err = -ENOMEM;
+		goto out;
+	}
+	vps->vdev = vdev;
+
+	err = virtpstore_init_vqs(vps);
+	if (err < 0)
+		goto out_free;
+
+	virtpstore_init_config(vps);
+
+	err = virt_pstore_init(vps);
+	if (err)
+		goto out_del_vq;
+
+	virtpstore_confirm_config(vps);
+
+	init_waitqueue_head(&vps->acked);
+
+	virtio_device_ready(vdev);
+
+	dev_info(&vdev->dev, "driver init: ok (bufsize = %luK, flags = %x)\n",
+		 vps->pstore.bufsize >> 10, vps->pstore.flags);
+
+	return 0;
+
+out_del_vq:
+	vdev->config->del_vqs(vdev);
+out_free:
+	kfree(vps);
+out:
+	dev_err(&vdev->dev, "driver init: failed with %d\n", err);
+	return err;
+}
+
+static void virtpstore_remove(struct virtio_device *vdev)
+{
+	struct virtio_pstore *vps = vdev->priv;
+
+	virt_pstore_exit(vps);
+
+	/* Now we reset the device so we can clean up the queues. */
+	vdev->config->reset(vdev);
+
+	vdev->config->del_vqs(vdev);
+
+	kfree(vps);
+}
+
+static unsigned int features[] = {
+};
+
+static struct virtio_device_id id_table[] = {
+	{ VIRTIO_ID_PSTORE, VIRTIO_DEV_ANY_ID },
+	{ 0 },
+};
+
+static struct virtio_driver virtio_pstore_driver = {
+	.driver.name         = KBUILD_MODNAME,
+	.driver.owner        = THIS_MODULE,
+	.feature_table       = features,
+	.feature_table_size  = ARRAY_SIZE(features),
+	.id_table            = id_table,
+	.probe               = virtpstore_probe,
+	.remove              = virtpstore_remove,
+};
+
+module_virtio_driver(virtio_pstore_driver);
+MODULE_DEVICE_TABLE(virtio, id_table);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Namhyung Kim <namhyung@kernel.org>");
+MODULE_DESCRIPTION("Virtio pstore driver");
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 6d4e92ccdc91..9bbb1554d8b2 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -449,6 +449,7 @@ header-y += virtio_ids.h
 header-y += virtio_input.h
 header-y += virtio_net.h
 header-y += virtio_pci.h
+header-y += virtio_pstore.h
 header-y += virtio_ring.h
 header-y += virtio_rng.h
 header-y += virtio_scsi.h
diff --git a/include/uapi/linux/virtio_ids.h b/include/uapi/linux/virtio_ids.h
index 77925f587b15..c72a9ab588c0 100644
--- a/include/uapi/linux/virtio_ids.h
+++ b/include/uapi/linux/virtio_ids.h
@@ -41,5 +41,6 @@
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_GPU          16 /* virtio GPU */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       22 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/uapi/linux/virtio_pstore.h b/include/uapi/linux/virtio_pstore.h
new file mode 100644
index 000000000000..57c35327f53b
--- /dev/null
+++ b/include/uapi/linux/virtio_pstore.h
@@ -0,0 +1,74 @@
+#ifndef _LINUX_VIRTIO_PSTORE_H
+#define _LINUX_VIRTIO_PSTORE_H
+/* This header is BSD licensed so anyone can use the definitions to implement
+ * compatible drivers/servers.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE. */
+#include <linux/types.h>
+#include <linux/virtio_types.h>
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
+struct virtio_pstore_req {
+	__le16		cmd;
+	__le16		type;
+	__le32		flags;
+	__le64		id;
+	__le32		count;
+	__le32		reserved;
+};
+
+struct virtio_pstore_res {
+	__le16		cmd;
+	__le16		type;
+	__le32		ret;
+};
+
+struct virtio_pstore_fileinfo {
+	__le64		id;
+	__le32		count;
+	__le16		type;
+	__le16		unused;
+	__le32		flags;
+	__le32		len;
+	__le64		time_sec;
+	__le32		time_nsec;
+	__le32		reserved;
+};
+
+struct virtio_pstore_config {
+	__le32		bufsize;
+};
+
+#endif /* _LINUX_VIRTIO_PSTORE_H */
-- 
2.9.3


* [PATCH 2/3] qemu: Implement virtio-pstore device
  2016-09-04 14:38 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v5) Namhyung Kim
  2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
@ 2016-09-04 14:38 ` Namhyung Kim
  2016-09-22 12:23   ` Stefan Hajnoczi
  2016-09-04 14:39 ` [PATCH 3/3] kvmtool: " Namhyung Kim
  2 siblings, 1 reply; 12+ messages in thread
From: Namhyung Kim @ 2016-09-04 14:38 UTC (permalink / raw)
  To: virtio-dev
  Cc: LKML, qemu-devel, kvm, virtualization, Namhyung Kim,
	Paolo Bonzini, Radim Krčmář,
	Michael S. Tsirkin, Anthony Liguori, Anton Vorontsov,
	Colin Cross, Kees Cook, Tony Luck, Steven Rostedt, Ingo Molnar,
	Minchan Kim, Daniel P . Berrange

From: Namhyung Kim <namhyung@gmail.com>

Add a virtio pstore device to allow kernel log files to be saved on
the host.  It saves the log files in the directory given by the
pstore device option.

  $ qemu-system-x86_64 -device virtio-pstore,directory=dir-xx ...

  (guest) # echo c > /proc/sysrq-trigger

  $ ls dir-xx
  dmesg-1.enc.z  dmesg-2.enc.z

The log files are usually compressed using zlib.  Users can read the
log messages directly on the host or on the guest (using the pstore
filesystem).

The 'directory' property is required for the virtio-pstore device to
work.  This patch also adds a 'bufsize' property to set the size of
the pstore buffer.
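
For example, a bigger buffer can be requested on the command line.
The value below is only illustrative; sizes smaller than 4096 are
rejected by pstore_set_bufsize() in this patch:

  $ qemu-system-x86_64 -device virtio-pstore,directory=dir-xx,bufsize=65536 ...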

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
 hw/virtio/Makefile.objs                        |   2 +-
 hw/virtio/virtio-pci.c                         |  52 ++
 hw/virtio/virtio-pci.h                         |  14 +
 hw/virtio/virtio-pstore.c                      | 689 +++++++++++++++++++++++++
 include/hw/pci/pci.h                           |   1 +
 include/hw/virtio/virtio-pstore.h              |  36 ++
 include/standard-headers/linux/virtio_ids.h    |   1 +
 include/standard-headers/linux/virtio_pstore.h |  76 +++
 qdev-monitor.c                                 |   1 +
 9 files changed, 871 insertions(+), 1 deletion(-)
 create mode 100644 hw/virtio/virtio-pstore.c
 create mode 100644 include/hw/virtio/virtio-pstore.h
 create mode 100644 include/standard-headers/linux/virtio_pstore.h

diff --git a/hw/virtio/Makefile.objs b/hw/virtio/Makefile.objs
index 3e2b175..aae7082 100644
--- a/hw/virtio/Makefile.objs
+++ b/hw/virtio/Makefile.objs
@@ -4,4 +4,4 @@ common-obj-y += virtio-bus.o
 common-obj-y += virtio-mmio.o
 
 obj-y += virtio.o virtio-balloon.o 
-obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o
+obj-$(CONFIG_LINUX) += vhost.o vhost-backend.o vhost-user.o virtio-pstore.o
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 755f921..c184823 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -2416,6 +2416,57 @@ static const TypeInfo virtio_host_pci_info = {
 };
 #endif
 
+/* virtio-pstore-pci */
+
+static void virtio_pstore_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
+{
+    VirtIOPstorePCI *vps = VIRTIO_PSTORE_PCI(vpci_dev);
+    DeviceState *vdev = DEVICE(&vps->vdev);
+    Error *err = NULL;
+
+    qdev_set_parent_bus(vdev, BUS(&vpci_dev->bus));
+    object_property_set_bool(OBJECT(vdev), true, "realized", &err);
+    if (err) {
+        error_propagate(errp, err);
+        return;
+    }
+}
+
+static void virtio_pstore_pci_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioPCIClass *k = VIRTIO_PCI_CLASS(klass);
+    PCIDeviceClass *pcidev_k = PCI_DEVICE_CLASS(klass);
+
+    k->realize = virtio_pstore_pci_realize;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+
+    pcidev_k->vendor_id = PCI_VENDOR_ID_REDHAT_QUMRANET;
+    pcidev_k->device_id = PCI_DEVICE_ID_VIRTIO_PSTORE;
+    pcidev_k->revision = VIRTIO_PCI_ABI_VERSION;
+    pcidev_k->class_id = PCI_CLASS_OTHERS;
+}
+
+static void virtio_pstore_pci_instance_init(Object *obj)
+{
+    VirtIOPstorePCI *dev = VIRTIO_PSTORE_PCI(obj);
+
+    virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+                                TYPE_VIRTIO_PSTORE);
+    object_property_add_alias(obj, "directory", OBJECT(&dev->vdev),
+                              "directory", &error_abort);
+    object_property_add_alias(obj, "bufsize", OBJECT(&dev->vdev),
+                              "bufsize", &error_abort);
+}
+
+static const TypeInfo virtio_pstore_pci_info = {
+    .name          = TYPE_VIRTIO_PSTORE_PCI,
+    .parent        = TYPE_VIRTIO_PCI,
+    .instance_size = sizeof(VirtIOPstorePCI),
+    .instance_init = virtio_pstore_pci_instance_init,
+    .class_init    = virtio_pstore_pci_class_init,
+};
+
 /* virtio-pci-bus */
 
 static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
@@ -2485,6 +2536,7 @@ static void virtio_pci_register_types(void)
 #ifdef CONFIG_VHOST_SCSI
     type_register_static(&vhost_scsi_pci_info);
 #endif
+    type_register_static(&virtio_pstore_pci_info);
 }
 
 type_init(virtio_pci_register_types)
diff --git a/hw/virtio/virtio-pci.h b/hw/virtio/virtio-pci.h
index 25fbf8a..354b2b7 100644
--- a/hw/virtio/virtio-pci.h
+++ b/hw/virtio/virtio-pci.h
@@ -31,6 +31,7 @@
 #ifdef CONFIG_VHOST_SCSI
 #include "hw/virtio/vhost-scsi.h"
 #endif
+#include "hw/virtio/virtio-pstore.h"
 
 typedef struct VirtIOPCIProxy VirtIOPCIProxy;
 typedef struct VirtIOBlkPCI VirtIOBlkPCI;
@@ -44,6 +45,7 @@ typedef struct VirtIOInputPCI VirtIOInputPCI;
 typedef struct VirtIOInputHIDPCI VirtIOInputHIDPCI;
 typedef struct VirtIOInputHostPCI VirtIOInputHostPCI;
 typedef struct VirtIOGPUPCI VirtIOGPUPCI;
+typedef struct VirtIOPstorePCI VirtIOPstorePCI;
 
 /* virtio-pci-bus */
 
@@ -324,6 +326,18 @@ struct VirtIOGPUPCI {
     VirtIOGPU vdev;
 };
 
+/*
+ * virtio-pstore-pci: This extends VirtioPCIProxy.
+ */
+#define TYPE_VIRTIO_PSTORE_PCI "virtio-pstore-pci"
+#define VIRTIO_PSTORE_PCI(obj) \
+        OBJECT_CHECK(VirtIOPstorePCI, (obj), TYPE_VIRTIO_PSTORE_PCI)
+
+struct VirtIOPstorePCI {
+    VirtIOPCIProxy parent_obj;
+    VirtIOPstore vdev;
+};
+
 /* Virtio ABI version, if we increment this, we break the guest driver. */
 #define VIRTIO_PCI_ABI_VERSION          0
 
diff --git a/hw/virtio/virtio-pstore.c b/hw/virtio/virtio-pstore.c
new file mode 100644
index 0000000..08d1063
--- /dev/null
+++ b/hw/virtio/virtio-pstore.c
@@ -0,0 +1,689 @@
+/*
+ * Virtio Pstore Device
+ *
+ * Copyright (C) 2016  LG Electronics
+ *
+ * Authors:
+ *  Namhyung Kim  <namhyung@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include <stdio.h>
+
+#include "qemu/osdep.h"
+#include "qemu/iov.h"
+#include "qemu-common.h"
+#include "qemu/cutils.h"
+#include "qemu/error-report.h"
+#include "sysemu/kvm.h"
+#include "qapi/visitor.h"
+#include "qapi-event.h"
+#include "io/channel-file.h"
+#include "trace.h"
+
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-access.h"
+#include "hw/virtio/virtio-pstore.h"
+
+#define PSTORE_DEFAULT_BUFSIZE   (16 * 1024)
+#define PSTORE_DEFAULT_FILE_MAX  5
+
+/* the index should match to the type value */
+static const char *virtio_pstore_file_prefix[] = {
+    "unknown-",		/* VIRTIO_PSTORE_TYPE_UNKNOWN */
+    "dmesg-",		/* VIRTIO_PSTORE_TYPE_DMESG */
+};
+
+static char *virtio_pstore_to_filename(VirtIOPstore *s,
+                                       struct virtio_pstore_req *req)
+{
+    const char *basename;
+    unsigned long long id;
+    unsigned int type = le16_to_cpu(req->type);
+    unsigned int flags = le32_to_cpu(req->flags);
+
+    if (type < ARRAY_SIZE(virtio_pstore_file_prefix)) {
+        basename = virtio_pstore_file_prefix[type];
+    } else {
+        basename = "unknown-";
+    }
+
+    id = s->id++;
+    return g_strdup_printf("%s/%s%llu%s", s->directory, basename, id,
+                            flags & VIRTIO_PSTORE_FL_COMPRESSED ? ".enc.z" : "");
+}
+
+static char *virtio_pstore_from_filename(VirtIOPstore *s, char *name,
+                                         struct virtio_pstore_fileinfo *info)
+{
+    char *filename;
+    unsigned int idx;
+
+    filename = g_strdup_printf("%s/%s", s->directory, name);
+    if (filename == NULL)
+        return NULL;
+
+    for (idx = 0; idx < ARRAY_SIZE(virtio_pstore_file_prefix); idx++) {
+        if (g_str_has_prefix(name, virtio_pstore_file_prefix[idx])) {
+            info->type = idx;
+            name += strlen(virtio_pstore_file_prefix[idx]);
+            break;
+        }
+    }
+
+    if (idx == ARRAY_SIZE(virtio_pstore_file_prefix)) {
+        g_free(filename);
+        return NULL;
+    }
+
+    qemu_strtoull(name, NULL, 0, &info->id);
+
+    info->flags = 0;
+    if (g_str_has_suffix(name, ".enc.z")) {
+        info->flags |= VIRTIO_PSTORE_FL_COMPRESSED;
+    }
+
+    return filename;
+}
+
+static int prefix_idx;
+static int prefix_count;
+static int prefix_len;
+
+static int filter_pstore(const struct dirent *de)
+{
+    int i;
+
+    for (i = 0; i < prefix_count; i++) {
+        const char *prefix = virtio_pstore_file_prefix[prefix_idx + i];
+
+        if (g_str_has_prefix(de->d_name, prefix)) {
+            return 1;
+        }
+    }
+    return 0;
+}
+
+static int sort_pstore(const struct dirent **a, const struct dirent **b)
+{
+    uint64_t id_a, id_b;
+
+    qemu_strtoull((*a)->d_name + prefix_len, NULL, 0, &id_a);
+    qemu_strtoull((*b)->d_name + prefix_len, NULL, 0, &id_b);
+
+    return id_a - id_b;
+}
+
+static int delete_old_pstore_file(VirtIOPstore *s, unsigned short type)
+{
+    int ret = 0;
+    int i, num;
+    char *filename;
+    struct dirent **files;
+
+    if (type >= ARRAY_SIZE(virtio_pstore_file_prefix)) {
+        type = VIRTIO_PSTORE_TYPE_UNKNOWN;
+    }
+
+    prefix_idx = type;
+    prefix_len = strlen(virtio_pstore_file_prefix[type]);
+    prefix_count = 1;  /* only scan current type */
+
+    /* delete the oldest file in the same type */
+    num = scandir(s->directory, &files, filter_pstore, sort_pstore);
+    if (num < 0)
+        return num;
+    if (num < (int)s->file_max)
+        goto out;
+
+    filename = g_strdup_printf("%s/%s", s->directory, files[0]->d_name);
+    if (filename == NULL) {
+        ret = -1;
+        goto out;
+    }
+
+    ret = unlink(filename);
+
+out:
+    for (i = 0; i < num; i++) {
+        g_free(files[i]);
+    }
+    g_free(files);
+
+    return ret;
+}
+
+static ssize_t virtio_pstore_do_open(VirtIOPstore *s)
+{
+    /* scan all pstore files */
+    prefix_idx = 0;
+    prefix_count = ARRAY_SIZE(virtio_pstore_file_prefix);
+
+    s->file_idx = 0;
+    s->num_file = scandir(s->directory, &s->files, filter_pstore, alphasort);
+
+    return s->num_file >= 0 ? 0 : -1;
+}
+
+static ssize_t virtio_pstore_do_close(VirtIOPstore *s)
+{
+    int i;
+
+    for (i = 0; i < s->num_file; i++) {
+        g_free(s->files[i]);
+    }
+    g_free(s->files);
+    s->files = NULL;
+
+    s->num_file = 0;
+    return 0;
+}
+
+static ssize_t virtio_pstore_do_erase(VirtIOPstore *s,
+                                      struct virtio_pstore_req *req)
+{
+    char *filename;
+    int ret;
+
+    filename = virtio_pstore_to_filename(s, req);
+    if (filename == NULL)
+        return -1;
+
+    ret = unlink(filename);
+
+    g_free(filename);
+    return ret;
+}
+
+struct pstore_read_arg {
+    VirtIOPstore *vps;
+    VirtQueueElement *elem;
+    struct virtio_pstore_fileinfo info;
+    QIOChannel *ioc;
+};
+
+static gboolean pstore_async_read_fn(QIOChannel *ioc, GIOCondition condition,
+                                     gpointer data)
+{
+    struct pstore_read_arg *rarg = data;
+    struct virtio_pstore_fileinfo *info = &rarg->info;
+    VirtIOPstore *vps = rarg->vps;
+    VirtQueueElement *elem = rarg->elem;
+    struct virtio_pstore_res res;
+    size_t offset = sizeof(res) + sizeof(*info);
+    struct iovec *sg = elem->in_sg;
+    unsigned int sg_num = elem->in_num;
+    Error *err = NULL;
+    ssize_t len;
+    int ret;
+
+    /* skip res and fileinfo */
+    iov_discard_front(&sg, &sg_num, sizeof(res) + sizeof(*info));
+
+    len = qio_channel_readv(rarg->ioc, sg, sg_num, &err);
+    if (len < 0) {
+        if (errno == EAGAIN) {
+            len = 0;
+        }
+        ret = -1;
+    } else {
+        info->len = cpu_to_le32(len);
+        ret = 0;
+    }
+
+    res.cmd  = cpu_to_le16(VIRTIO_PSTORE_CMD_READ);
+    res.type = cpu_to_le16(VIRTIO_PSTORE_TYPE_UNKNOWN);
+    res.ret  = cpu_to_le32(ret);
+
+    /* now copy res and fileinfo */
+    iov_from_buf(elem->in_sg, elem->in_num, 0, &res, sizeof(res));
+    iov_from_buf(elem->in_sg, elem->in_num, sizeof(res), info, sizeof(*info));
+
+    len += offset;
+    virtqueue_push(vps->rvq, elem, len);
+    virtio_notify(VIRTIO_DEVICE(vps), vps->rvq);
+
+    return FALSE;
+}
+
+static void free_rarg_fn(gpointer data)
+{
+    struct pstore_read_arg *rarg = data;
+
+    qio_channel_close(rarg->ioc, NULL);
+
+    g_free(rarg->elem);
+    g_free(rarg);
+}
+
+static ssize_t virtio_pstore_do_read(VirtIOPstore *s, VirtQueueElement *elem)
+{
+    char *filename = NULL;
+    int idx;
+    struct stat stbuf;
+    struct pstore_read_arg *rarg = NULL;
+    QIOChannelFile *iocf;
+    Error *err = NULL;
+    int ret = -1;
+
+    if (s->file_idx >= s->num_file) {
+        return 0;
+    }
+
+    rarg = g_malloc(sizeof(*rarg));
+    if (rarg == NULL) {
+        return -1;
+    }
+
+    idx = s->file_idx++;
+    filename = virtio_pstore_from_filename(s, s->files[idx]->d_name,
+                                           &rarg->info);
+    if (filename == NULL) {
+        goto out;
+    }
+
+    iocf = qio_channel_file_new_path(filename, O_RDONLY, 0644, &err);
+    if (err) {
+        error_reportf_err(err, "cannot create io channel: ");
+        goto out;
+    }
+
+    if (fstat(iocf->fd, &stbuf) < 0) {
+        goto out;
+    }
+
+    rarg->vps            = s;
+    rarg->elem           = elem;
+    rarg->ioc            = QIO_CHANNEL(iocf);
+    rarg->info.id        = cpu_to_le64(rarg->info.id);
+    rarg->info.type      = cpu_to_le16(rarg->info.type);
+    rarg->info.flags     = cpu_to_le32(rarg->info.flags);
+    rarg->info.time_sec  = cpu_to_le64(stbuf.st_ctim.tv_sec);
+    rarg->info.time_nsec = cpu_to_le32(stbuf.st_ctim.tv_nsec);
+
+    qio_channel_set_blocking(rarg->ioc, false, &err);
+    qio_channel_add_watch(rarg->ioc, G_IO_IN, pstore_async_read_fn, rarg,
+                          free_rarg_fn);
+    g_free(filename);
+    return 1;
+
+out:
+    g_free(filename);
+    g_free(rarg);
+
+    return ret;
+}
+
+struct pstore_write_arg {
+    VirtIOPstore *vps;
+    VirtQueueElement *elem;
+    struct virtio_pstore_req *req;
+    QIOChannel *ioc;
+};
+
+static gboolean pstore_async_write_fn(QIOChannel *ioc, GIOCondition condition,
+                                      gpointer data)
+{
+    struct pstore_write_arg *warg = data;
+    VirtIOPstore *vps = warg->vps;
+    VirtQueueElement *elem = warg->elem;
+    struct iovec *sg = elem->out_sg;
+    unsigned int sg_num = elem->out_num;
+    struct virtio_pstore_res res;
+    Error *err = NULL;
+    ssize_t len;
+    int ret;
+
+    /* we already consumed the req */
+    iov_discard_front(&sg, &sg_num, sizeof(*warg->req));
+
+    len = qio_channel_writev(warg->ioc, sg, sg_num, &err);
+    if (len < 0) {
+        ret = -1;
+    } else {
+        ret = 0;
+    }
+
+    res.cmd  = cpu_to_le16(VIRTIO_PSTORE_CMD_WRITE);
+    res.type = warg->req->type;
+    res.ret  = cpu_to_le32(ret);
+
+    /* tell the result to guest */
+    iov_from_buf(elem->in_sg, elem->in_num, 0, &res, sizeof(res));
+
+    virtqueue_push(vps->wvq, elem, sizeof(res));
+    virtio_notify(VIRTIO_DEVICE(vps), vps->wvq);
+
+    return FALSE;
+}
+
+static void free_warg_fn(gpointer data)
+{
+    struct pstore_write_arg *warg = data;
+
+    qio_channel_close(warg->ioc, NULL);
+
+    g_free(warg->elem);
+    g_free(warg);
+}
+
+static ssize_t virtio_pstore_do_write(VirtIOPstore *s, VirtQueueElement *elem,
+                                      struct virtio_pstore_req *req)
+{
+    unsigned short type = le16_to_cpu(req->type);
+    char *filename = NULL;
+    int flags = O_WRONLY | O_CREAT | O_TRUNC;
+    struct pstore_write_arg *warg = NULL;
+    QIOChannelFile *iocf;
+    Error *err = NULL;
+    int ret = -1;
+
+    /* do not keep same type of files more than 'file-max' */
+    delete_old_pstore_file(s, type);
+
+    filename = virtio_pstore_to_filename(s, req);
+    if (filename == NULL) {
+        return -1;
+    }
+
+    iocf = qio_channel_file_new_path(filename, flags, 0644, &err);
+    if (err) {
+        error_reportf_err(err, "cannot create io channel: ");
+        goto out;
+    }
+
+    warg = g_malloc(sizeof(*warg));
+    if (warg == NULL) {
+        goto out;
+    }
+
+    warg->vps  = s;
+    warg->elem = elem;
+    warg->req  = req;
+    warg->ioc  = QIO_CHANNEL(iocf);
+
+    qio_channel_set_blocking(warg->ioc, false, &err);
+    qio_channel_add_watch(warg->ioc, G_IO_OUT, pstore_async_write_fn, warg,
+                          free_warg_fn);
+    g_free(filename);
+    return 1;
+
+out:
+    g_free(filename);
+    g_free(warg);
+    return ret;
+}
+
+static void virtio_pstore_handle_io(VirtIODevice *vdev, VirtQueue *vq)
+{
+    VirtIOPstore *s = VIRTIO_PSTORE(vdev);
+    VirtQueueElement *elem;
+    struct virtio_pstore_req req;
+    struct virtio_pstore_res res;
+    ssize_t len = 0;
+    int ret;
+
+    for (;;) {
+        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
+        if (!elem) {
+            return;
+        }
+
+        if (elem->out_num < 1 || elem->in_num < 1) {
+            error_report("request or response buffer is missing");
+            exit(1);
+        }
+
+        if (elem->out_num > 2 || elem->in_num > 3) {
+            error_report("invalid number of input/output buffer");
+            exit(1);
+        }
+
+        len = iov_to_buf(elem->out_sg, elem->out_num, 0, &req, sizeof(req));
+        if (len != (ssize_t)sizeof(req)) {
+            error_report("invalid request size: %ld", (long)len);
+            exit(1);
+        }
+        res.cmd  = req.cmd;
+        res.type = req.type;
+
+        switch (le16_to_cpu(req.cmd)) {
+        case VIRTIO_PSTORE_CMD_OPEN:
+            ret = virtio_pstore_do_open(s);
+            break;
+        case VIRTIO_PSTORE_CMD_CLOSE:
+            ret = virtio_pstore_do_close(s);
+            break;
+        case VIRTIO_PSTORE_CMD_ERASE:
+            ret = virtio_pstore_do_erase(s, &req);
+            break;
+        case VIRTIO_PSTORE_CMD_READ:
+            ret = virtio_pstore_do_read(s, elem);
+            if (ret == 1) {
+                /* async channel io */
+                continue;
+            }
+            break;
+        case VIRTIO_PSTORE_CMD_WRITE:
+            ret = virtio_pstore_do_write(s, elem, &req);
+            if (ret == 1) {
+                /* async channel io */
+                continue;
+            }
+            break;
+        default:
+            ret = -1;
+            break;
+        }
+
+        res.ret = ret;
+
+        iov_from_buf(elem->in_sg, elem->in_num, 0, &res, sizeof(res));
+        virtqueue_push(vq, elem, sizeof(res) + len);
+
+        virtio_notify(vdev, vq);
+        g_free(elem);
+
+        if (ret < 0) {
+            return;
+        }
+    }
+}
+
+static void virtio_pstore_device_realize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+    VirtIOPstore *s = VIRTIO_PSTORE(dev);
+
+    virtio_init(vdev, "virtio-pstore", VIRTIO_ID_PSTORE,
+                sizeof(struct virtio_pstore_config));
+
+    s->id = 1;
+
+    if (!s->bufsize)
+        s->bufsize = PSTORE_DEFAULT_BUFSIZE;
+    if (!s->file_max)
+        s->file_max = PSTORE_DEFAULT_FILE_MAX;
+
+    s->rvq = virtio_add_queue(vdev, 128, virtio_pstore_handle_io);
+    s->wvq = virtio_add_queue(vdev, 128, virtio_pstore_handle_io);
+}
+
+static void virtio_pstore_device_unrealize(DeviceState *dev, Error **errp)
+{
+    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
+
+    virtio_cleanup(vdev);
+}
+
+static void virtio_pstore_get_config(VirtIODevice *vdev, uint8_t *config_data)
+{
+    VirtIOPstore *dev = VIRTIO_PSTORE(vdev);
+    struct virtio_pstore_config config;
+
+    config.bufsize = cpu_to_le32(dev->bufsize);
+
+    memcpy(config_data, &config, sizeof(struct virtio_pstore_config));
+}
+
+static void virtio_pstore_set_config(VirtIODevice *vdev,
+                                     const uint8_t *config_data)
+{
+    VirtIOPstore *dev = VIRTIO_PSTORE(vdev);
+    struct virtio_pstore_config config;
+
+    memcpy(&config, config_data, sizeof(struct virtio_pstore_config));
+
+    dev->bufsize = le32_to_cpu(config.bufsize);
+}
+
+static uint64_t get_features(VirtIODevice *vdev, uint64_t f, Error **errp)
+{
+    return f;
+}
+
+static void pstore_get_directory(Object *obj, Visitor *v,
+                                 const char *name, void *opaque,
+                                 Error **errp)
+{
+    VirtIOPstore *s = opaque;
+
+    visit_type_str(v, name, &s->directory, errp);
+}
+
+static void pstore_set_directory(Object *obj, Visitor *v,
+                                 const char *name, void *opaque,
+                                 Error **errp)
+{
+    VirtIOPstore *s = opaque;
+    Error *local_err = NULL;
+    char *value;
+
+    visit_type_str(v, name, &value, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    g_free(s->directory);
+    s->directory = value;
+}
+
+static void pstore_release_directory(Object *obj, const char *name,
+                                     void *opaque)
+{
+    VirtIOPstore *s = opaque;
+
+    g_free(s->directory);
+    s->directory = NULL;
+}
+
+static void pstore_get_bufsize(Object *obj, Visitor *v,
+                               const char *name, void *opaque,
+                               Error **errp)
+{
+    VirtIOPstore *s = opaque;
+    uint64_t value = s->bufsize;
+
+    visit_type_size(v, name, &value, errp);
+}
+
+static void pstore_set_bufsize(Object *obj, Visitor *v,
+                               const char *name, void *opaque,
+                               Error **errp)
+{
+    VirtIOPstore *s = opaque;
+    Error *error = NULL;
+    uint64_t value;
+
+    visit_type_size(v, name, &value, &error);
+    if (error) {
+        error_propagate(errp, error);
+        return;
+    }
+
+    if (value < 4096) {
+        error_setg(&error, "Warning: too small buffer size: %"PRIu64, value);
+        error_propagate(errp, error);
+        return;
+    }
+
+    s->bufsize = value;
+}
+
+static void pstore_get_file_max(Object *obj, Visitor *v,
+                                const char *name, void *opaque,
+                                Error **errp)
+{
+    VirtIOPstore *s = opaque;
+    int64_t value = s->file_max;
+
+    visit_type_int(v, name, &value, errp);
+}
+
+static void pstore_set_file_max(Object *obj, Visitor *v,
+                                const char *name, void *opaque,
+                                Error **errp)
+{
+    VirtIOPstore *s = opaque;
+    Error *error = NULL;
+    int64_t value;
+
+    visit_type_int(v, name, &value, &error);
+    if (error) {
+        error_propagate(errp, error);
+        return;
+    }
+
+    s->file_max = value;
+}
+
+static Property virtio_pstore_properties[] = {
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_pstore_instance_init(Object *obj)
+{
+    VirtIOPstore *s = VIRTIO_PSTORE(obj);
+
+    object_property_add(obj, "directory", "str",
+                        pstore_get_directory, pstore_set_directory,
+                        pstore_release_directory, s, NULL);
+    object_property_add(obj, "bufsize", "size",
+                        pstore_get_bufsize, pstore_set_bufsize, NULL, s, NULL);
+    object_property_add(obj, "file-max", "int",
+                        pstore_get_file_max, pstore_set_file_max, NULL, s, NULL);
+}
+
+static void virtio_pstore_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+
+    dc->props = virtio_pstore_properties;
+    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
+    vdc->realize = virtio_pstore_device_realize;
+    vdc->unrealize = virtio_pstore_device_unrealize;
+    vdc->get_config = virtio_pstore_get_config;
+    vdc->set_config = virtio_pstore_set_config;
+    vdc->get_features = get_features;
+}
+
+static const TypeInfo virtio_pstore_info = {
+    .name = TYPE_VIRTIO_PSTORE,
+    .parent = TYPE_VIRTIO_DEVICE,
+    .instance_size = sizeof(VirtIOPstore),
+    .instance_init = virtio_pstore_instance_init,
+    .class_init = virtio_pstore_class_init,
+};
+
+static void virtio_register_types(void)
+{
+    type_register_static(&virtio_pstore_info);
+}
+
+type_init(virtio_register_types)
diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index 929ec2f..b31774a 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -79,6 +79,7 @@
 #define PCI_DEVICE_ID_VIRTIO_SCSI        0x1004
 #define PCI_DEVICE_ID_VIRTIO_RNG         0x1005
 #define PCI_DEVICE_ID_VIRTIO_9P          0x1009
+#define PCI_DEVICE_ID_VIRTIO_PSTORE      0x100a
 
 #define PCI_VENDOR_ID_REDHAT             0x1b36
 #define PCI_DEVICE_ID_REDHAT_BRIDGE      0x0001
diff --git a/include/hw/virtio/virtio-pstore.h b/include/hw/virtio/virtio-pstore.h
new file mode 100644
index 0000000..85b1828
--- /dev/null
+++ b/include/hw/virtio/virtio-pstore.h
@@ -0,0 +1,36 @@
+/*
+ * Virtio Pstore Support
+ *
+ * Authors:
+ *  Namhyung Kim      <namhyung@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.  See
+ * the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef _QEMU_VIRTIO_PSTORE_H
+#define _QEMU_VIRTIO_PSTORE_H
+
+#include "standard-headers/linux/virtio_pstore.h"
+#include "hw/virtio/virtio.h"
+#include "hw/pci/pci.h"
+
+#define TYPE_VIRTIO_PSTORE "virtio-pstore-device"
+#define VIRTIO_PSTORE(obj) \
+        OBJECT_CHECK(VirtIOPstore, (obj), TYPE_VIRTIO_PSTORE)
+
+typedef struct VirtIOPstore {
+    VirtIODevice    parent_obj;
+    VirtQueue      *rvq;
+    VirtQueue      *wvq;
+    char           *directory;
+    int             file_idx;
+    int             num_file;
+    struct dirent **files;
+    uint64_t        id;
+    uint64_t        bufsize;
+    uint64_t        file_max;
+} VirtIOPstore;
+
+#endif
diff --git a/include/standard-headers/linux/virtio_ids.h b/include/standard-headers/linux/virtio_ids.h
index 77925f5..c72a9ab 100644
--- a/include/standard-headers/linux/virtio_ids.h
+++ b/include/standard-headers/linux/virtio_ids.h
@@ -41,5 +41,6 @@
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_GPU          16 /* virtio GPU */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       22 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/include/standard-headers/linux/virtio_pstore.h b/include/standard-headers/linux/virtio_pstore.h
new file mode 100644
index 0000000..cbbe668
--- /dev/null
+++ b/include/standard-headers/linux/virtio_pstore.h
@@ -0,0 +1,76 @@
+#ifndef _LINUX_VIRTIO_PSTORE_H
+#define _LINUX_VIRTIO_PSTORE_H
+/* This header is BSD licensed so anyone can use the definitions to implement
+ * compatible drivers/servers.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of IBM nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL IBM OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE. */
+#include "standard-headers/linux/types.h"
+#include "standard-headers/linux/virtio_types.h"
+#include "standard-headers/linux/virtio_ids.h"
+#include "standard-headers/linux/virtio_config.h"
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
+struct virtio_pstore_req {
+    uint16_t cmd;
+    uint16_t type;
+    uint32_t flags;
+    uint64_t id;
+    uint32_t count;
+    uint32_t reserved;
+};
+
+struct virtio_pstore_res {
+    uint16_t cmd;
+    uint16_t type;
+    uint32_t ret;
+};
+
+struct virtio_pstore_fileinfo {
+    uint64_t id;
+    uint32_t count;
+    uint16_t type;
+    uint16_t unused;
+    uint32_t flags;
+    uint32_t len;
+    uint64_t time_sec;
+    uint32_t time_nsec;
+    uint32_t reserved;
+};
+
+struct virtio_pstore_config {
+    uint32_t bufsize;
+};
+
+#endif /* _LINUX_VIRTIO_PSTORE_H */
diff --git a/qdev-monitor.c b/qdev-monitor.c
index e19617f..e1df5a9 100644
--- a/qdev-monitor.c
+++ b/qdev-monitor.c
@@ -73,6 +73,7 @@ static const QDevAlias qdev_alias_table[] = {
     { "virtio-serial-pci", "virtio-serial", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
     { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_S390X },
     { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-pstore-pci", "virtio-pstore" },
     { }
 };
 
-- 
2.9.3


* [PATCH 3/3] kvmtool: Implement virtio-pstore device
  2016-09-04 14:38 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v5) Namhyung Kim
  2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
  2016-09-04 14:38 ` [PATCH 2/3] qemu: Implement virtio-pstore device Namhyung Kim
@ 2016-09-04 14:39 ` Namhyung Kim
  2 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-09-04 14:39 UTC (permalink / raw)
  To: virtio-dev
  Cc: LKML, qemu-devel, kvm, virtualization, Namhyung Kim, Will Deacon

From: Namhyung Kim <namhyung@gmail.com>

Add virtio pstore device to allow kernel log messages saved on the
host.  With this patch, it will save the log files under directory given
by --pstore option.

  $ lkvm run --pstore=dir-xx

  (guest) # echo c > /proc/sysrq-trigger

  $ ls dir-xx
  dmesg-1.enc.z  dmesg-2.enc.z

The log files are usually compressed using zlib.  User can easily see
the messages on the host or on the guest (using pstore filesystem).

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
 Makefile                     |   1 +
 builtin-run.c                |   2 +
 include/kvm/kvm-config.h     |   1 +
 include/kvm/virtio-pci-dev.h |   2 +
 include/kvm/virtio-pstore.h  |  53 +++++
 include/linux/virtio_ids.h   |   1 +
 virtio/pstore.c              | 447 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 507 insertions(+)
 create mode 100644 include/kvm/virtio-pstore.h
 create mode 100644 virtio/pstore.c

diff --git a/Makefile b/Makefile
index 1f0196f..d7462b9 100644
--- a/Makefile
+++ b/Makefile
@@ -67,6 +67,7 @@ OBJS	+= virtio/net.o
 OBJS	+= virtio/rng.o
 OBJS    += virtio/balloon.o
 OBJS	+= virtio/pci.o
+OBJS	+= virtio/pstore.o
 OBJS	+= disk/blk.o
 OBJS	+= disk/qcow.o
 OBJS	+= disk/raw.o
diff --git a/builtin-run.c b/builtin-run.c
index 72b878d..08c12dd 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -128,6 +128,8 @@ void kvm_run_set_wrapper_sandbox(void)
 			" rootfs"),					\
 	OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path",	\
 			"Hugetlbfs path"),				\
+	OPT_STRING('\0', "pstore", &(cfg)->pstore_path, "path",		\
+			"pstore data path"),				\
 									\
 	OPT_GROUP("Kernel options:"),					\
 	OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel",	\
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..42b7651 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -45,6 +45,7 @@ struct kvm_config {
 	const char *hugetlbfs_path;
 	const char *custom_rootfs_name;
 	const char *real_cmdline;
+	const char *pstore_path;
 	struct virtio_net_params *net_params;
 	bool single_step;
 	bool vnc;
diff --git a/include/kvm/virtio-pci-dev.h b/include/kvm/virtio-pci-dev.h
index 48ae018..4339d94 100644
--- a/include/kvm/virtio-pci-dev.h
+++ b/include/kvm/virtio-pci-dev.h
@@ -15,6 +15,7 @@
 #define PCI_DEVICE_ID_VIRTIO_BLN		0x1005
 #define PCI_DEVICE_ID_VIRTIO_SCSI		0x1008
 #define PCI_DEVICE_ID_VIRTIO_9P			0x1009
+#define PCI_DEVICE_ID_VIRTIO_PSTORE		0x100a
 #define PCI_DEVICE_ID_VESA			0x2000
 #define PCI_DEVICE_ID_PCI_SHMEM			0x0001
 
@@ -34,5 +35,6 @@
 #define PCI_CLASS_RNG				0xff0000
 #define PCI_CLASS_BLN				0xff0000
 #define PCI_CLASS_9P				0xff0000
+#define PCI_CLASS_PSTORE			0xff0000
 
 #endif /* VIRTIO_PCI_DEV_H_ */
diff --git a/include/kvm/virtio-pstore.h b/include/kvm/virtio-pstore.h
new file mode 100644
index 0000000..9f52ffd
--- /dev/null
+++ b/include/kvm/virtio-pstore.h
@@ -0,0 +1,53 @@
+#ifndef KVM__PSTORE_VIRTIO_H
+#define KVM__PSTORE_VIRTIO_H
+
+#include <kvm/virtio.h>
+#include <sys/types.h>
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
+struct virtio_pstore_req {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		flags;
+	__virtio64		id;
+	__virtio32		count;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_res {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		ret;
+};
+
+struct virtio_pstore_fileinfo {
+	__virtio64		id;
+	__virtio32		count;
+	__virtio16		type;
+	__virtio16		unused;
+	__virtio32		flags;
+	__virtio32		len;
+	__virtio64		time_sec;
+	__virtio32		time_nsec;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_config {
+	__virtio32		bufsize;
+};
+
+int virtio_pstore__init(struct kvm *kvm);
+int virtio_pstore__exit(struct kvm *kvm);
+
+#endif /* KVM__PSTORE_VIRTIO_H */
diff --git a/include/linux/virtio_ids.h b/include/linux/virtio_ids.h
index 5f60aa4..40eabf7 100644
--- a/include/linux/virtio_ids.h
+++ b/include/linux/virtio_ids.h
@@ -40,5 +40,6 @@
 #define VIRTIO_ID_RPROC_SERIAL 11 /* virtio remoteproc serial link */
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       22 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/virtio/pstore.c b/virtio/pstore.c
new file mode 100644
index 0000000..fb9806f
--- /dev/null
+++ b/virtio/pstore.c
@@ -0,0 +1,447 @@
+#include "kvm/virtio-pstore.h"
+
+#include "kvm/virtio-pci-dev.h"
+
+#include "kvm/virtio.h"
+#include "kvm/util.h"
+#include "kvm/kvm.h"
+#include "kvm/threadpool.h"
+#include "kvm/guest_compat.h"
+
+#include <linux/virtio_ring.h>
+
+#include <linux/list.h>
+#include <fcntl.h>
+#include <dirent.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+#include <linux/kernel.h>
+#include <sys/eventfd.h>
+
+#define NUM_VIRT_QUEUES			2
+#define VIRTIO_PSTORE_QUEUE_SIZE	128
+
+struct io_thread_arg {
+	struct kvm		*kvm;
+	struct pstore_dev	*pdev;
+};
+
+struct pstore_dev {
+	struct list_head	list;
+	struct virtio_device	vdev;
+	pthread_t		io_thread;
+	int			io_efd;
+	int			done;
+
+	struct virtio_pstore_config *config;
+
+	int			fd;
+	DIR			*dir;
+	u64			id;
+
+	/* virtio queue */
+	struct virt_queue	vqs[NUM_VIRT_QUEUES];
+};
+
+static LIST_HEAD(pdevs);
+static int compat_id = -1;
+
+static u8 *get_config(struct kvm *kvm, void *dev)
+{
+	struct pstore_dev *pdev = dev;
+
+	return (u8*)pdev->config;
+}
+
+static u32 get_host_features(struct kvm *kvm, void *dev)
+{
+	/* Unused */
+	return 0;
+}
+
+static void set_guest_features(struct kvm *kvm, void *dev, u32 features)
+{
+	/* Unused */
+}
+
+static void virtio_pstore_to_filename(struct kvm *kvm, struct pstore_dev *pdev,
+				      char *buf, size_t sz,
+				      struct virtio_pstore_req *req)
+{
+	const char *basename;
+	unsigned long long id = 0;
+	unsigned int flags = virtio_host_to_guest_u64(pdev->vqs, req->flags);
+
+	switch (req->type) {
+	case VIRTIO_PSTORE_TYPE_DMESG:
+		basename = "dmesg";
+		id = pdev->id++;
+		break;
+	default:
+		basename = "unknown";
+		break;
+	}
+
+	snprintf(buf, sz, "%s/%s-%llu%s", kvm->cfg.pstore_path, basename, id,
+		 flags & VIRTIO_PSTORE_FL_COMPRESSED ? ".enc.z" : "");
+}
+
+static void virtio_pstore_from_filename(struct kvm *kvm, char *name,
+					char *buf, size_t sz,
+					struct virtio_pstore_fileinfo *info)
+{
+	size_t len = strlen(name);
+
+	snprintf(buf, sz, "%s/%s", kvm->cfg.pstore_path, name);
+
+	info->flags = 0;
+	if (len > 6 && !strncmp(name + len - 6, ".enc.z", 6))
+		info->flags |= VIRTIO_PSTORE_FL_COMPRESSED;
+
+	if (!strncmp(name, "dmesg-", 6)) {
+		info->type = VIRTIO_PSTORE_TYPE_DMESG;
+		name += strlen("dmesg-");
+	} else if (!strncmp(name, "unknown-", 8)) {
+		info->type = VIRTIO_PSTORE_TYPE_UNKNOWN;
+		name += strlen("unknown-");
+	}
+
+	info->id = strtoul(name, NULL, 0);
+}
+
+static int virtio_pstore_do_open(struct kvm *kvm, struct pstore_dev *pdev,
+				 struct virtio_pstore_req *req,
+				 struct iovec *iov)
+{
+	pdev->dir = opendir(kvm->cfg.pstore_path);
+	if (pdev->dir == NULL)
+		return -errno;
+
+	return 0;
+}
+
+static int virtio_pstore_do_close(struct kvm *kvm, struct pstore_dev *pdev,
+				  struct virtio_pstore_req *req,
+				  struct iovec *iov)
+{
+	if (pdev->dir == NULL)
+		return -1;
+
+	closedir(pdev->dir);
+	pdev->dir = NULL;
+
+	return 0;
+}
+
+static ssize_t virtio_pstore_do_read(struct kvm *kvm, struct pstore_dev *pdev,
+				     struct virtio_pstore_req *req,
+				     struct iovec *iov,
+				     struct virtio_pstore_fileinfo *info)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+	struct stat stbuf;
+	struct dirent *dent;
+
+	if (pdev->dir == NULL)
+		return 0;
+
+	dent = readdir(pdev->dir);
+	while (dent) {
+		if (dent->d_name[0] != '.')
+			break;
+		dent = readdir(pdev->dir);
+	}
+
+	if (dent == NULL)
+		return 0;
+
+	virtio_pstore_from_filename(kvm, dent->d_name, path, sizeof(path), info);
+	fp = fopen(path, "r");
+	if (fp == NULL)
+		return -1;
+
+	if (fstat(fileno(fp), &stbuf) < 0)
+		return -1;
+
+	len = fread(iov[3].iov_base, 1, iov[3].iov_len, fp);
+	if (len < 0 && errno == EAGAIN) {
+		len = 0;
+		goto out;
+	}
+
+	info->id     = virtio_host_to_guest_u64(pdev->vqs, info->id);
+	info->type   = virtio_host_to_guest_u64(pdev->vqs, info->type);
+	info->flags  = virtio_host_to_guest_u32(pdev->vqs, info->flags);
+	info->len    = virtio_host_to_guest_u32(pdev->vqs, len);
+
+	info->time_sec  = virtio_host_to_guest_u64(pdev->vqs, stbuf.st_ctim.tv_sec);
+	info->time_nsec = virtio_host_to_guest_u32(pdev->vqs, stbuf.st_ctim.tv_nsec);
+
+	len += sizeof(*info);
+
+out:
+	fclose(fp);
+	return len;
+}
+
+static ssize_t virtio_pstore_do_write(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	fp = fopen(path, "a");
+	if (fp == NULL)
+		return -1;
+
+	len = fwrite(iov[1].iov_base, 1, iov[1].iov_len, fp);
+	if (len < 0 && errno == EAGAIN)
+		len = 0;
+
+	fclose(fp);
+	return 0;
+}
+
+static ssize_t virtio_pstore_do_erase(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	return unlink(path);
+}
+
+static bool virtio_pstore_do_io_request(struct kvm *kvm, struct pstore_dev *pdev,
+					struct virt_queue *vq)
+{
+	struct iovec iov[VIRTIO_PSTORE_QUEUE_SIZE];
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct virtio_pstore_fileinfo *info;
+	ssize_t len = 0;
+	u16 out, in, head;
+	int ret = 0;
+
+	head = virt_queue__get_iov(vq, iov, &out, &in, kvm);
+
+	if (iov[0].iov_len != sizeof(*req) || iov[out].iov_len != sizeof(*res)) {
+		return false;
+	}
+
+	req = iov[0].iov_base;
+	res = iov[out].iov_base;
+
+	switch (virtio_guest_to_host_u16(vq, req->cmd)) {
+	case VIRTIO_PSTORE_CMD_OPEN:
+		ret = virtio_pstore_do_open(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_READ:
+		info = iov[out + 1].iov_base;
+		ret = virtio_pstore_do_read(kvm, pdev, req, iov, info);
+		if (ret > 0) {
+			len = ret;
+			ret = 0;
+		}
+		break;
+	case VIRTIO_PSTORE_CMD_WRITE:
+		ret = virtio_pstore_do_write(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_CLOSE:
+		ret = virtio_pstore_do_close(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_ERASE:
+		ret = virtio_pstore_do_erase(kvm, pdev, req, iov);
+		break;
+	default:
+		return false;
+	}
+
+	res->cmd  = req->cmd;
+	res->type = req->type;
+	res->ret  = virtio_host_to_guest_u32(vq, ret);
+
+	virt_queue__set_used_elem(vq, head, sizeof(*res) + len);
+
+	return ret == 0;
+}
+
+static void virtio_pstore_do_io(struct kvm *kvm, struct pstore_dev *pdev,
+				struct virt_queue *vq)
+{
+	bool done = false;
+
+	while (virt_queue__available(vq)) {
+		virtio_pstore_do_io_request(kvm, pdev, vq);
+		done = true;
+	}
+
+	if (done)
+		pdev->vdev.ops->signal_vq(kvm, &pdev->vdev, vq - pdev->vqs);
+}
+
+static void *virtio_pstore_io_thread(void *arg)
+{
+	struct io_thread_arg *io_arg = arg;
+	struct pstore_dev *pdev = io_arg->pdev;
+	struct kvm *kvm = io_arg->kvm;
+	u64 data;
+	int r;
+
+	kvm__set_thread_name("virtio-pstore-io");
+
+	while (!pdev->done) {
+		r = read(pdev->io_efd, &data, sizeof(u64));
+		if (r < 0)
+			continue;
+
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[0]);
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[1]);
+	}
+	free(io_arg);
+
+	pthread_exit(NULL);
+	return NULL;
+}
+
+static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align,
+		   u32 pfn)
+{
+	struct pstore_dev *pdev = dev;
+	struct virt_queue *queue;
+	void *p;
+
+	compat__remove_message(compat_id);
+
+	queue		= &pdev->vqs[vq];
+	queue->pfn	= pfn;
+	p		= virtio_get_vq(kvm, queue->pfn, page_size);
+
+	vring_init(&queue->vring, VIRTIO_PSTORE_QUEUE_SIZE, p, align);
+
+	return 0;
+}
+
+static int notify_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+	u64 data = 1;
+	int r;
+
+	r = write(pdev->io_efd, &data, sizeof(data));
+	if (r < 0)
+		return r;
+
+	return 0;
+}
+
+static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+
+	return pdev->vqs[vq].pfn;
+}
+
+static int get_size_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	return VIRTIO_PSTORE_QUEUE_SIZE;
+}
+
+static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size)
+{
+	/* FIXME: dynamic */
+	return size;
+}
+
+static struct virtio_ops pstore_dev_virtio_ops = {
+	.get_config		= get_config,
+	.get_host_features	= get_host_features,
+	.set_guest_features	= set_guest_features,
+	.init_vq		= init_vq,
+	.notify_vq		= notify_vq,
+	.get_pfn_vq		= get_pfn_vq,
+	.get_size_vq		= get_size_vq,
+	.set_size_vq		= set_size_vq,
+};
+
+int virtio_pstore__init(struct kvm *kvm)
+{
+	struct pstore_dev *pdev;
+	struct io_thread_arg *io_arg = NULL;
+	int r;
+
+	if (!kvm->cfg.pstore_path)
+		return 0;
+
+	pdev = calloc(1, sizeof(*pdev));
+	if (pdev == NULL)
+		return -ENOMEM;
+
+	pdev->config = calloc(1, sizeof(*pdev->config));
+	if (pdev->config == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->id = 1;
+
+	io_arg = malloc(sizeof(*io_arg));
+	if (io_arg == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->io_efd = eventfd(0, 0);
+
+	*io_arg = (struct io_thread_arg) {
+		.pdev   = pdev,
+		.kvm    = kvm,
+	};
+	r = pthread_create(&pdev->io_thread, NULL,
+			   virtio_pstore_io_thread, io_arg);
+	if (r < 0)
+		goto cleanup;
+
+	r = virtio_init(kvm, pdev, &pdev->vdev, &pstore_dev_virtio_ops,
+			VIRTIO_DEFAULT_TRANS(kvm), PCI_DEVICE_ID_VIRTIO_PSTORE,
+			VIRTIO_ID_PSTORE, PCI_CLASS_PSTORE);
+	if (r < 0)
+		goto cleanup;
+
+	list_add_tail(&pdev->list, &pdevs);
+
+	if (compat_id == -1)
+		compat_id = virtio_compat_add_message("virtio-pstore", "CONFIG_VIRTIO_PSTORE");
+	return 0;
+
+cleanup:
+	free(io_arg);
+	free(pdev->config);
+	free(pdev);
+
+	return r;
+}
+virtio_dev_init(virtio_pstore__init);
+
+int virtio_pstore__exit(struct kvm *kvm)
+{
+	struct pstore_dev *pdev, *tmp;
+
+	list_for_each_entry_safe(pdev, tmp, &pdevs, list) {
+		list_del(&pdev->list);
+		close(pdev->io_efd);
+		pdev->vdev.ops->exit(kvm, &pdev->vdev);
+		free(pdev);
+	}
+
+	return 0;
+}
+virtio_dev_exit(virtio_pstore__exit);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] virtio: Basic implementation of virtio pstore driver
  2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
@ 2016-09-08 20:49   ` Kees Cook
  2016-09-22 11:57   ` Stefan Hajnoczi
  1 sibling, 0 replies; 12+ messages in thread
From: Kees Cook @ 2016-09-08 20:49 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: virtio-dev, LKML, qemu-devel, KVM, virtualization, Paolo Bonzini,
	Radim Krčmář,
	Michael S. Tsirkin, Anthony Liguori, Anton Vorontsov,
	Colin Cross, Tony Luck, Steven Rostedt, Ingo Molnar, Minchan Kim,
	Will Deacon

On Sun, Sep 4, 2016 at 7:38 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> The virtio pstore driver provides interface to the pstore subsystem so
> that the guest kernel's log/dump message can be saved on the host
> machine.  Users can access the log file directly on the host, or on the
> guest at the next boot using pstore filesystem.  It currently deals with
> kernel log (printk) buffer only, but we can extend it to have other
> information (like ftrace dump) later.
>
> It supports legacy PCI device using a 16K buffer by default and it's
> configurable.  It uses two virtqueues - one for (sync) read and another
> for (async) write.  Since it cannot wait for write finished, it supports
> up to 128 concurrent IO.
>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Radim Krčmář <rkrcmar@redhat.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Anthony Liguori <aliguori@amazon.com>
> Cc: Anton Vorontsov <anton@enomsg.org>
> Cc: Colin Cross <ccross@android.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Tony Luck <tony.luck@intel.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: virtio-dev@lists.oasis-open.org
> Cc: kvm@vger.kernel.org
> Cc: qemu-devel@nongnu.org
> Cc: virtualization@lists.linux-foundation.org
> Signed-off-by: Namhyung Kim <namhyung@kernel.org>

While I can't speak to the virtio parts, the interface into pstore
looks fine to me. :)

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

-- 
Kees Cook
Nexus Security

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] virtio: Basic implementation of virtio pstore driver
  2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
  2016-09-08 20:49   ` Kees Cook
@ 2016-09-22 11:57   ` Stefan Hajnoczi
  2016-09-23  5:48     ` Namhyung Kim
  1 sibling, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2016-09-22 11:57 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: virtio-dev, Anton Vorontsov, Kees Cook, kvm,
	Radim Krčmář,
	qemu-devel, Michael S. Tsirkin, Will Deacon, LKML,
	Steven Rostedt, virtualization, Minchan Kim, Tony Luck,
	Anthony Liguori, Colin Cross, Paolo Bonzini, Ingo Molnar

On Sun, Sep 04, 2016 at 11:38:58PM +0900, Namhyung Kim wrote:
> The virtio pstore driver provides interface to the pstore subsystem so
> that the guest kernel's log/dump message can be saved on the host
> machine.  Users can access the log file directly on the host, or on the
> guest at the next boot using pstore filesystem.  It currently deals with
> kernel log (printk) buffer only, but we can extend it to have other
> information (like ftrace dump) later.
> 
> It supports legacy PCI device using a 16K buffer by default and it's
> configurable.  It uses two virtqueues - one for (sync) read and another
> for (async) write.  Since it cannot wait for write finished, it supports
> up to 128 concurrent IO.

Please document the locks that this code relies on.  It is generally not
safe to call virtqueue_*() from multiple threads.  I also wonder about
locking for virtio_pstore->req_id and other fields.  Are locks missing
or is something in pstore ensuring safety?
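
For instance, a rough sketch (assuming a new per-device "vq_lock"
spinlock were added to struct virtio_pstore) could serialize each
submission like this:

    unsigned long irq_flags;

    /* cannot sleep under the spinlock, hence GFP_ATOMIC */
    spin_lock_irqsave(&vps->vq_lock, irq_flags);
    virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_ATOMIC);
    virtqueue_kick(vps->vq[0]);
    spin_unlock_irqrestore(&vps->vq_lock, irq_flags);

with the wait_event() for the response done outside the lock.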

> +static int virt_pstore_open(struct pstore_info *psi)
> +{
> +	struct virtio_pstore *vps = psi->data;
> +	struct virtio_pstore_req *req;
> +	struct virtio_pstore_res *res;
> +	struct scatterlist sgo[1], sgi[1];
> +	struct scatterlist *sgs[2] = { sgo, sgi };
> +	unsigned int len;
> +
> +	virt_pstore_get_reqs(vps, &req, &res);
> +
> +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_OPEN);
> +
> +	sg_init_one(sgo, req, sizeof(*req));
> +	sg_init_one(sgi, res, sizeof(*res));
> +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> +	virtqueue_kick(vps->vq[0]);
> +
> +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> +	return le32_to_cpu(res->ret);

This assumes the device puts compatible Linux errno values in res->ret.
The function doesn't need to return -errno if I'm reading fs/pstore/
code correctly.  You could return -1 on error to avoid making this
assumption.  The same applies to other res->ret usage below.
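
Something like the following (untested) would avoid that assumption:

    return le32_to_cpu(res->ret) ? -1 : 0;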

> +}
> +
> +static int virt_pstore_close(struct pstore_info *psi)
> +{
> +	struct virtio_pstore *vps = psi->data;
> +	struct virtio_pstore_req *req = &vps->req[vps->req_id];
> +	struct virtio_pstore_res *res = &vps->res[vps->req_id];

Assigning &vps->req[vps->req_id]/&vps->res[vps->req_id] is unnecessary,
virt_pstore_get_reqs() handles that below.

> +	struct scatterlist sgo[1], sgi[1];
> +	struct scatterlist *sgs[2] = { sgo, sgi };
> +	unsigned int len;
> +
> +	virt_pstore_get_reqs(vps, &req, &res);
> +
> +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_CLOSE);
> +
> +	sg_init_one(sgo, req, sizeof(*req));
> +	sg_init_one(sgi, res, sizeof(*res));
> +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> +	virtqueue_kick(vps->vq[0]);
> +
> +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> +	return le32_to_cpu(res->ret);
> +}
> +
> +static ssize_t virt_pstore_read(u64 *id, enum pstore_type_id *type,
> +				int *count, struct timespec *time,
> +				char **buf, bool *compressed,
> +				ssize_t *ecc_notice_size,
> +				struct pstore_info *psi)
> +{
> +	struct virtio_pstore *vps = psi->data;
> +	struct virtio_pstore_req *req;
> +	struct virtio_pstore_res *res;
> +	struct virtio_pstore_fileinfo info;
> +	struct scatterlist sgo[1], sgi[3];
> +	struct scatterlist *sgs[2] = { sgo, sgi };
> +	unsigned int len;
> +	unsigned int flags;
> +	int ret;
> +	void *bf;
> +
> +	virt_pstore_get_reqs(vps, &req, &res);
> +
> +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_READ);
> +
> +	sg_init_one(sgo, req, sizeof(*req));
> +	sg_init_table(sgi, 3);
> +	sg_set_buf(&sgi[0], res, sizeof(*res));
> +	sg_set_buf(&sgi[1], &info, sizeof(info));
> +	sg_set_buf(&sgi[2], psi->buf, psi->bufsize);
> +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> +	virtqueue_kick(vps->vq[0]);
> +
> +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> +	if (len < sizeof(*res) + sizeof(info))
> +		return -1;
> +
> +	ret = le32_to_cpu(res->ret);
> +	if (ret < 0)
> +		return ret;
> +
> +	len = le32_to_cpu(info.len);

We trust the device but a length check would be nice here to be clear
that the memcpy() below is always safe:

if (len > psi->bufsize)
    return -1;

> +
> +	bf = kmalloc(len, GFP_KERNEL);
> +	if (bf == NULL)
> +		return -ENOMEM;

There's no point in returning -ENOMEM if we return -1 and res->ret
above.  The caller has no way of knowing the true meaning of the return
value.  I would return -1 to be clear that there are no error constants.

> +
> +	*id    = le64_to_cpu(info.id);
> +	*type  = from_virtio_type(info.type);
> +	*count = le32_to_cpu(info.count);
> +
> +	flags = le32_to_cpu(info.flags);
> +	*compressed = flags & VIRTIO_PSTORE_FL_COMPRESSED;
> +
> +	time->tv_sec  = le64_to_cpu(info.time_sec);
> +	time->tv_nsec = le32_to_cpu(info.time_nsec);
> +
> +	memcpy(bf, psi->buf, len);
> +	*buf = bf;
> +
> +	return len;
> +}
> +
> +static int notrace virt_pstore_write(enum pstore_type_id type,
> +				     enum kmsg_dump_reason reason,
> +				     u64 *id, unsigned int part, int count,
> +				     bool compressed, size_t size,
> +				     struct pstore_info *psi)
> +{
> +	struct virtio_pstore *vps = psi->data;
> +	struct virtio_pstore_req *req;
> +	struct virtio_pstore_res *res;
> +	struct scatterlist sgo[2], sgi[1];
> +	struct scatterlist *sgs[2] = { sgo, sgi };
> +	unsigned int flags = compressed ? VIRTIO_PSTORE_FL_COMPRESSED : 0;
> +
> +	if (vps->failed)
> +		return -1;
> +
> +	*id = vps->req_id;
> +	virt_pstore_get_reqs(vps, &req, &res);

This function does not wait for a response from the device so you cannot
call virt_pstore_get_reqs() without risk of reusing an in-flight buffer.

> +
> +	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_WRITE);
> +	req->type  = to_virtio_type(type);
> +	req->flags = cpu_to_le32(flags);
> +
> +	sg_init_table(sgo, 2);
> +	sg_set_buf(&sgo[0], req, sizeof(*req));
> +	sg_set_buf(&sgo[1], psi->buf, size);
> +	sg_init_one(sgi, res, sizeof(*res));
> +	virtqueue_add_sgs(vps->vq[1], sgs, 1, 1, vps, GFP_ATOMIC);

If we don't wait for request completion then virtqueue_add_sgs() could
fail here if the virtqueue is already full.
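
At a minimum the return value could be checked, roughly:

    int err;

    err = virtqueue_add_sgs(vps->vq[1], sgs, 1, 1, vps, GFP_ATOMIC);
    if (err)        /* e.g. -ENOSPC when the ring is full */
            return -1;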

> +	virtqueue_kick(vps->vq[1]);
> +
> +	return 0;
> +}
> +
> +static int virt_pstore_erase(enum pstore_type_id type, u64 id, int count,
> +			     struct timespec time, struct pstore_info *psi)
> +{
> +	struct virtio_pstore *vps = psi->data;
> +	struct virtio_pstore_req *req;
> +	struct virtio_pstore_res *res;
> +	struct scatterlist sgo[1], sgi[1];
> +	struct scatterlist *sgs[2] = { sgo, sgi };
> +	unsigned int len;
> +
> +	virt_pstore_get_reqs(vps, &req, &res);
> +
> +	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_ERASE);
> +	req->type  = to_virtio_type(type);
> +	req->id	   = cpu_to_le64(id);
> +	req->count = cpu_to_le32(count);
> +
> +	sg_init_one(sgo, req, sizeof(*req));
> +	sg_init_one(sgi, res, sizeof(*res));
> +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> +	virtqueue_kick(vps->vq[0]);

Erase commands are sent on the "read" virtqueue, not the "write"
virtqueue?

> +
> +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> +	return le32_to_cpu(res->ret);
> +}
> +
> +static int virt_pstore_init(struct virtio_pstore *vps)
> +{
> +	struct pstore_info *psinfo = &vps->pstore;
> +	int err;
> +
> +	if (!psinfo->bufsize)
> +		psinfo->bufsize = VIRT_PSTORE_BUFSIZE;
> +
> +	psinfo->buf = alloc_pages_exact(psinfo->bufsize, GFP_KERNEL);
> +	if (!psinfo->buf) {
> +		pr_err("cannot allocate pstore buffer\n");
> +		return -ENOMEM;
> +	}
> +
> +	psinfo->owner = THIS_MODULE;
> +	psinfo->name  = "virtio";
> +	psinfo->open  = virt_pstore_open;
> +	psinfo->close = virt_pstore_close;
> +	psinfo->read  = virt_pstore_read;
> +	psinfo->erase = virt_pstore_erase;
> +	psinfo->write = virt_pstore_write;
> +	psinfo->flags = PSTORE_FLAGS_FRAGILE;
> +
> +	psinfo->data  = vps;
> +	spin_lock_init(&psinfo->buf_lock);
> +
> +	err = pstore_register(psinfo);
> +	if (err)
> +		kfree(psinfo->buf);

Should this be free_pages_exact() instead of kfree()?

Is it safe to call pstore_register() before the remaining initialization
steps?

My understanding is that if pstore is already mounted then requests will
immediately be sent.  In order to support this we'd need to initialize
everything else first before calling pstore_register().

> +
> +	return err;
> +}
> +
> +static int virt_pstore_exit(struct virtio_pstore *vps)
> +{
> +	struct pstore_info *psinfo = &vps->pstore;
> +
> +	pstore_unregister(psinfo);
> +
> +	free_pages_exact(psinfo->buf, psinfo->bufsize);
> +	psinfo->buf = NULL;
> +	psinfo->bufsize = 0;
> +
> +	return 0;
> +}
> +
> +static int virtpstore_init_vqs(struct virtio_pstore *vps)
> +{
> +	vq_callback_t *callbacks[] = { virtpstore_ack, virtpstore_check };
> +	const char *names[] = { "pstore_read", "pstore_write" };
> +
> +	return vps->vdev->config->find_vqs(vps->vdev, 2, vps->vq,
> +					   callbacks, names);
> +}
> +
> +static void virtpstore_init_config(struct virtio_pstore *vps)
> +{
> +	u32 bufsize;
> +
> +	virtio_cread(vps->vdev, struct virtio_pstore_config, bufsize, &bufsize);
> +
> +	vps->pstore.bufsize = PAGE_ALIGN(bufsize);
> +}
> +
> +static void virtpstore_confirm_config(struct virtio_pstore *vps)
> +{
> +	u32 bufsize = vps->pstore.bufsize;
> +
> +	virtio_cwrite(vps->vdev, struct virtio_pstore_config, bufsize,
> +		     &bufsize);
> +}
> +
> +static int virtpstore_probe(struct virtio_device *vdev)
> +{
> +	struct virtio_pstore *vps;
> +	int err;
> +
> +	if (!vdev->config->get) {
> +		dev_err(&vdev->dev, "driver init: config access disabled\n");
> +		return -EINVAL;
> +	}
> +
> +	vdev->priv = vps = kzalloc(sizeof(*vps), GFP_KERNEL);
> +	if (!vps) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +	vps->vdev = vdev;
> +
> +	err = virtpstore_init_vqs(vps);
> +	if (err < 0)
> +		goto out_free;
> +
> +	virtpstore_init_config(vps);
> +
> +	err = virt_pstore_init(vps);
> +	if (err)
> +		goto out_del_vq;
> +
> +	virtpstore_confirm_config(vps);
> +
> +	init_waitqueue_head(&vps->acked);
> +
> +	virtio_device_ready(vdev);

This call is needed if the .probe() function uses virtqueues before
returning.  Right now it either doesn't use the virtqueues or it has
already used them in virt_pstore_init().  Please move this before
pstore_register().

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] qemu: Implement virtio-pstore device
  2016-09-04 14:38 ` [PATCH 2/3] qemu: Implement virtio-pstore device Namhyung Kim
@ 2016-09-22 12:23   ` Stefan Hajnoczi
  2016-09-23  5:52     ` Namhyung Kim
  0 siblings, 1 reply; 12+ messages in thread
From: Stefan Hajnoczi @ 2016-09-22 12:23 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: virtio-dev, Anton Vorontsov, Daniel P . Berrange, Kees Cook, kvm,
	Radim Krčmář,
	qemu-devel, Namhyung Kim, Michael S. Tsirkin, LKML,
	Steven Rostedt, virtualization, Minchan Kim, Tony Luck,
	Anthony Liguori, Colin Cross, Paolo Bonzini, Ingo Molnar

On Sun, Sep 04, 2016 at 11:38:59PM +0900, Namhyung Kim wrote:
> +static void virtio_pstore_handle_io(VirtIODevice *vdev, VirtQueue *vq)
> +{
> +    VirtIOPstore *s = VIRTIO_PSTORE(vdev);
> +    VirtQueueElement *elem;
> +    struct virtio_pstore_req req;
> +    struct virtio_pstore_res res;
> +    ssize_t len = 0;
> +    int ret;
> +
> +    for (;;) {
> +        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
> +        if (!elem) {
> +            return;
> +        }
> +
> +        if (elem->out_num < 1 || elem->in_num < 1) {
> +            error_report("request or response buffer is missing");
> +            exit(1);

The new virtio_error() function might be available, depending on when
this patch series is merged.  virtio_error() should be used instead of
exit(1).  See "[PATCH v5 0/9] virtio: avoid exit() when device enters
invalid states" on qemu-devel.

> +        }
> +
> +        if (elem->out_num > 2 || elem->in_num > 3) {
> +            error_report("invalid number of input/output buffer");
> +            exit(1);
> +        }

The VIRTIO specification requires that flexible framing is supported.
The device cannot make assumptions about the scatter-gather list.  It
must support any layout (e.g. even multiple 1-byte iovecs making up the
buffer).
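
One way to cope with arbitrary layouts is to copy the fixed-size
structures through iov_to_buf()/iov_from_buf() at known offsets instead
of indexing the scatter-gather list directly, e.g. (sketch):

    struct virtio_pstore_fileinfo info;  /* local for the reply metadata */

    /* copy out at fixed offsets, whatever the in_sg split is */
    iov_from_buf(elem->in_sg, elem->in_num, 0, &res, sizeof(res));
    iov_from_buf(elem->in_sg, elem->in_num, sizeof(res), &info, sizeof(info));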

> +
> +        len = iov_to_buf(elem->out_sg, elem->out_num, 0, &req, sizeof(req));
> +        if (len != (ssize_t)sizeof(req)) {
> +            error_report("invalid request size: %ld", (long)len);
> +            exit(1);
> +        }
> +        res.cmd  = req.cmd;
> +        res.type = req.type;
> +
> +        switch (le16_to_cpu(req.cmd)) {
> +        case VIRTIO_PSTORE_CMD_OPEN:
> +            ret = virtio_pstore_do_open(s);
> +            break;
> +        case VIRTIO_PSTORE_CMD_CLOSE:
> +            ret = virtio_pstore_do_close(s);
> +            break;
> +        case VIRTIO_PSTORE_CMD_ERASE:
> +            ret = virtio_pstore_do_erase(s, &req);
> +            break;
> +        case VIRTIO_PSTORE_CMD_READ:
> +            ret = virtio_pstore_do_read(s, elem);
> +            if (ret == 1) {
> +                /* async channel io */
> +                continue;
> +            }
> +            break;
> +        case VIRTIO_PSTORE_CMD_WRITE:
> +            ret = virtio_pstore_do_write(s, elem, &req);
> +            if (ret == 1) {
> +                /* async channel io */
> +                continue;
> +            }
> +            break;
> +        default:
> +            ret = -1;
> +            break;
> +        }
> +
> +        res.ret = ret;

Missing cpu_to_le()?
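
i.e. presumably:

    res.ret = cpu_to_le32(ret);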

> +static void pstore_set_bufsize(Object *obj, Visitor *v,
> +                               const char *name, void *opaque,
> +                               Error **errp)
> +{
> +    VirtIOPstore *s = opaque;
> +    Error *error = NULL;
> +    uint64_t value;
> +
> +    visit_type_size(v, name, &value, &error);
> +    if (error) {
> +        error_propagate(errp, error);
> +        return;
> +    }
> +
> +    if (value < 4096) {
> +        error_setg(&error, "Warning: too small buffer size: %"PRIu64, value);

This is an error, not a warning.  Please remove "Warning:" so it's clear
to the user that this message caused QEMU to fail.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/3] virtio: Basic implementation of virtio pstore driver
  2016-09-22 11:57   ` Stefan Hajnoczi
@ 2016-09-23  5:48     ` Namhyung Kim
  0 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-09-23  5:48 UTC (permalink / raw)
  To: Stefan Hajnoczi, Kees Cook
  Cc: virtio-dev, Anton Vorontsov, kvm, Radim Krčmář,
	qemu-devel, Michael S. Tsirkin, Will Deacon, LKML,
	Steven Rostedt, virtualization, Minchan Kim, Tony Luck,
	Anthony Liguori, Colin Cross, Paolo Bonzini, Ingo Molnar

Hi Stefan,

On Thu, Sep 22, 2016 at 12:57:44PM +0100, Stefan Hajnoczi wrote:
> On Sun, Sep 04, 2016 at 11:38:58PM +0900, Namhyung Kim wrote:
> > The virtio pstore driver provides interface to the pstore subsystem so
> > that the guest kernel's log/dump message can be saved on the host
> > machine.  Users can access the log file directly on the host, or on the
> > guest at the next boot using pstore filesystem.  It currently deals with
> > kernel log (printk) buffer only, but we can extend it to have other
> > information (like ftrace dump) later.
> > 
> > It supports legacy PCI device using a 16K buffer by default and it's
> > configurable.  It uses two virtqueues - one for (sync) read and another
> > for (async) write.  Since it cannot wait for write finished, it supports
> > up to 128 concurrent IO.
> 
> Please document the locks that this code relies on.  It is generally not
> safe to call virtqueue_*() from multiple threads.  I also wonder about
> locking for virtio_pstore->req_id and other fields.  Are locks missing
> or is something in pstore ensuring safety?

Ok, I should use an atomic inc for pstore->req_id.  The open-read-close
sequence is serialized by the read_mutex of pstore_info.  Write can
happen anytime, so I gave it a dedicated queue.
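
Something along these lines, assuming req_id becomes an atomic_t (the
array-size constant below is made up for the sketch):

    /* VIRT_PSTORE_NR_REQS is a placeholder for the request array size */
    unsigned int idx = atomic_inc_return(&vps->req_id) % VIRT_PSTORE_NR_REQS;

    *req = &vps->req[idx];
    *res = &vps->res[idx];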

Erase is a problem though: normally it's only doable after the mount
operation is done, so there is no contention with the open-read-close.
But if pstore_update_ms is set, the timer routine can schedule a work
item to run the open-read-close loop, which might contend with erase.

I'm not sure how useful pstore_update_ms is, and its description says:

  "milliseconds before pstore updates its content "
  "(default is -1, which means runtime updates are disabled; "
  "enabling this option is not safe, it may lead to further "
  "corruption on Oopses)")


> 
> > +static int virt_pstore_open(struct pstore_info *psi)
> > +{
> > +	struct virtio_pstore *vps = psi->data;
> > +	struct virtio_pstore_req *req;
> > +	struct virtio_pstore_res *res;
> > +	struct scatterlist sgo[1], sgi[1];
> > +	struct scatterlist *sgs[2] = { sgo, sgi };
> > +	unsigned int len;
> > +
> > +	virt_pstore_get_reqs(vps, &req, &res);
> > +
> > +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_OPEN);
> > +
> > +	sg_init_one(sgo, req, sizeof(*req));
> > +	sg_init_one(sgi, res, sizeof(*res));
> > +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> > +	virtqueue_kick(vps->vq[0]);
> > +
> > +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> > +	return le32_to_cpu(res->ret);
> 
> This assumes the device puts compatible Linux errno values in res->ret.
> The function doesn't need to return -errno if I'm reading fs/pstore/
> code correctly.  You could return -1 on error to avoid making this
> assumption.  The same applies to other res->ret usage below.

Ok.

> 
> > +}
> > +
> > +static int virt_pstore_close(struct pstore_info *psi)
> > +{
> > +	struct virtio_pstore *vps = psi->data;
> > +	struct virtio_pstore_req *req = &vps->req[vps->req_id];
> > +	struct virtio_pstore_res *res = &vps->res[vps->req_id];
> 
> Assigning &vps->req[vps->req_id]/&vps->res[vps->req_id] is unnecessary,
> virt_pstore_get_reqs() handles that below.

Ah, right.

> 
> > +	struct scatterlist sgo[1], sgi[1];
> > +	struct scatterlist *sgs[2] = { sgo, sgi };
> > +	unsigned int len;
> > +
> > +	virt_pstore_get_reqs(vps, &req, &res);
> > +
> > +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_CLOSE);
> > +
> > +	sg_init_one(sgo, req, sizeof(*req));
> > +	sg_init_one(sgi, res, sizeof(*res));
> > +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> > +	virtqueue_kick(vps->vq[0]);
> > +
> > +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> > +	return le32_to_cpu(res->ret);
> > +}
> > +
> > +static ssize_t virt_pstore_read(u64 *id, enum pstore_type_id *type,
> > +				int *count, struct timespec *time,
> > +				char **buf, bool *compressed,
> > +				ssize_t *ecc_notice_size,
> > +				struct pstore_info *psi)
> > +{
> > +	struct virtio_pstore *vps = psi->data;
> > +	struct virtio_pstore_req *req;
> > +	struct virtio_pstore_res *res;
> > +	struct virtio_pstore_fileinfo info;
> > +	struct scatterlist sgo[1], sgi[3];
> > +	struct scatterlist *sgs[2] = { sgo, sgi };
> > +	unsigned int len;
> > +	unsigned int flags;
> > +	int ret;
> > +	void *bf;
> > +
> > +	virt_pstore_get_reqs(vps, &req, &res);
> > +
> > +	req->cmd = cpu_to_le16(VIRTIO_PSTORE_CMD_READ);
> > +
> > +	sg_init_one(sgo, req, sizeof(*req));
> > +	sg_init_table(sgi, 3);
> > +	sg_set_buf(&sgi[0], res, sizeof(*res));
> > +	sg_set_buf(&sgi[1], &info, sizeof(info));
> > +	sg_set_buf(&sgi[2], psi->buf, psi->bufsize);
> > +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> > +	virtqueue_kick(vps->vq[0]);
> > +
> > +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> > +	if (len < sizeof(*res) + sizeof(info))
> > +		return -1;
> > +
> > +	ret = le32_to_cpu(res->ret);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	len = le32_to_cpu(info.len);
> 
> We trust the device but a length check would be nice here to be clear
> that the memcpy() below is always safe:
> 
> if (len > psi->bufsize)
>     return -1;

Good point.

> 
> > +
> > +	bf = kmalloc(len, GFP_KERNEL);
> > +	if (bf == NULL)
> > +		return -ENOMEM;
> 
> There's no point in returning -ENOMEM if we return -1 and res->ret
> above.  The caller has no way of knowing the true meaning of the return
> value.  I would return -1 to be clear that there are no error constants.

Ok.

> 
> > +
> > +	*id    = le64_to_cpu(info.id);
> > +	*type  = from_virtio_type(info.type);
> > +	*count = le32_to_cpu(info.count);
> > +
> > +	flags = le32_to_cpu(info.flags);
> > +	*compressed = flags & VIRTIO_PSTORE_FL_COMPRESSED;
> > +
> > +	time->tv_sec  = le64_to_cpu(info.time_sec);
> > +	time->tv_nsec = le32_to_cpu(info.time_nsec);
> > +
> > +	memcpy(bf, psi->buf, len);
> > +	*buf = bf;
> > +
> > +	return len;
> > +}
> > +
> > +static int notrace virt_pstore_write(enum pstore_type_id type,
> > +				     enum kmsg_dump_reason reason,
> > +				     u64 *id, unsigned int part, int count,
> > +				     bool compressed, size_t size,
> > +				     struct pstore_info *psi)
> > +{
> > +	struct virtio_pstore *vps = psi->data;
> > +	struct virtio_pstore_req *req;
> > +	struct virtio_pstore_res *res;
> > +	struct scatterlist sgo[2], sgi[1];
> > +	struct scatterlist *sgs[2] = { sgo, sgi };
> > +	unsigned int flags = compressed ? VIRTIO_PSTORE_FL_COMPRESSED : 0;
> > +
> > +	if (vps->failed)
> > +		return -1;
> > +
> > +	*id = vps->req_id;
> > +	virt_pstore_get_reqs(vps, &req, &res);
> 
> This function does not wait for a response from the device so you cannot
> call virt_pstore_get_reqs() without risk of reusing an in-flight buffer.

Right.  This part is debatable IMHO.

Generally pstore is used to dump the oops buffer when the kernel is
dying.  This is an occasional event, so I think the current code is ok.

The problematic case is when other writers (console, pmsg or ftrace)
are enabled.  They might contend with the oops, but pstore serializes
them with a spinlock.  Still, as you said, there is a risk of reusing an
in-flight buffer.  I think it's okay for an oops to overwrite the
others, but the reverse is not good.  I tried to mitigate the problem by
managing the buffer position to spread out the writes, but it's not a
complete solution.

I thought about simply stopping the guest when writing an oops, or we
might somehow create a way to tell pstore to stop the other writers
(until the oops write is finished).

Kees, what do you think?

> 
> > +
> > +	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_WRITE);
> > +	req->type  = to_virtio_type(type);
> > +	req->flags = cpu_to_le32(flags);
> > +
> > +	sg_init_table(sgo, 2);
> > +	sg_set_buf(&sgo[0], req, sizeof(*req));
> > +	sg_set_buf(&sgo[1], psi->buf, size);
> > +	sg_init_one(sgi, res, sizeof(*res));
> > +	virtqueue_add_sgs(vps->vq[1], sgs, 1, 1, vps, GFP_ATOMIC);
> 
> If we don't wait for request completion then virtqueue_add_sgs() could
> fail here if the virtqueue is already full.

Ah, I didn't know that.  So we need to wait for completion somehow.
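
Maybe it's enough to reclaim completed buffers from the write queue and
check the return value before adding, something like (rough idea, "len"
is just a local here):

    unsigned int len;

    /* drain finished writes so their req/res slots can be reused */
    while (virtqueue_get_buf(vps->vq[1], &len))
            ;

    if (virtqueue_add_sgs(vps->vq[1], sgs, 1, 1, vps, GFP_ATOMIC) < 0)
            return -1;      /* ring still full, drop this record */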


> 
> > +	virtqueue_kick(vps->vq[1]);
> > +
> > +	return 0;
> > +}
> > +
> > +static int virt_pstore_erase(enum pstore_type_id type, u64 id, int count,
> > +			     struct timespec time, struct pstore_info *psi)
> > +{
> > +	struct virtio_pstore *vps = psi->data;
> > +	struct virtio_pstore_req *req;
> > +	struct virtio_pstore_res *res;
> > +	struct scatterlist sgo[1], sgi[1];
> > +	struct scatterlist *sgs[2] = { sgo, sgi };
> > +	unsigned int len;
> > +
> > +	virt_pstore_get_reqs(vps, &req, &res);
> > +
> > +	req->cmd   = cpu_to_le16(VIRTIO_PSTORE_CMD_ERASE);
> > +	req->type  = to_virtio_type(type);
> > +	req->id	   = cpu_to_le64(id);
> > +	req->count = cpu_to_le32(count);
> > +
> > +	sg_init_one(sgo, req, sizeof(*req));
> > +	sg_init_one(sgi, res, sizeof(*res));
> > +	virtqueue_add_sgs(vps->vq[0], sgs, 1, 1, vps, GFP_KERNEL);
> > +	virtqueue_kick(vps->vq[0]);
> 
> Erase commands are sent on the "read" virtqueue, not the "write"
> virtqueue?

Yes, you can think of them as a sync queue and an async queue.  Write
is the only async operation.


> 
> > +
> > +	wait_event(vps->acked, virtqueue_get_buf(vps->vq[0], &len));
> > +	return le32_to_cpu(res->ret);
> > +}
> > +
> > +static int virt_pstore_init(struct virtio_pstore *vps)
> > +{
> > +	struct pstore_info *psinfo = &vps->pstore;
> > +	int err;
> > +
> > +	if (!psinfo->bufsize)
> > +		psinfo->bufsize = VIRT_PSTORE_BUFSIZE;
> > +
> > +	psinfo->buf = alloc_pages_exact(psinfo->bufsize, GFP_KERNEL);
> > +	if (!psinfo->buf) {
> > +		pr_err("cannot allocate pstore buffer\n");
> > +		return -ENOMEM;
> > +	}
> > +
> > +	psinfo->owner = THIS_MODULE;
> > +	psinfo->name  = "virtio";
> > +	psinfo->open  = virt_pstore_open;
> > +	psinfo->close = virt_pstore_close;
> > +	psinfo->read  = virt_pstore_read;
> > +	psinfo->erase = virt_pstore_erase;
> > +	psinfo->write = virt_pstore_write;
> > +	psinfo->flags = PSTORE_FLAGS_FRAGILE;
> > +
> > +	psinfo->data  = vps;
> > +	spin_lock_init(&psinfo->buf_lock);
> > +
> > +	err = pstore_register(psinfo);
> > +	if (err)
> > +		kfree(psinfo->buf);
> 
> Should this be free_pages_exact() instead of kfree()?

Oops, right.

> 
> Is it safe to call pstore_register() before the remaining initialization
> steps?
> 
> My understanding is that if pstore is already mounted then requests will
> immediately be sent.  In order to support this we'd need to initialize
> everything else first before calling pstore_register().

Fair enough.  Will change.

> 
> > +
> > +	return err;
> > +}
> > +
> > +static int virt_pstore_exit(struct virtio_pstore *vps)
> > +{
> > +	struct pstore_info *psinfo = &vps->pstore;
> > +
> > +	pstore_unregister(psinfo);
> > +
> > +	free_pages_exact(psinfo->buf, psinfo->bufsize);
> > +	psinfo->buf = NULL;
> > +	psinfo->bufsize = 0;
> > +
> > +	return 0;
> > +}
> > +
> > +static int virtpstore_init_vqs(struct virtio_pstore *vps)
> > +{
> > +	vq_callback_t *callbacks[] = { virtpstore_ack, virtpstore_check };
> > +	const char *names[] = { "pstore_read", "pstore_write" };
> > +
> > +	return vps->vdev->config->find_vqs(vps->vdev, 2, vps->vq,
> > +					   callbacks, names);
> > +}
> > +
> > +static void virtpstore_init_config(struct virtio_pstore *vps)
> > +{
> > +	u32 bufsize;
> > +
> > +	virtio_cread(vps->vdev, struct virtio_pstore_config, bufsize, &bufsize);
> > +
> > +	vps->pstore.bufsize = PAGE_ALIGN(bufsize);
> > +}
> > +
> > +static void virtpstore_confirm_config(struct virtio_pstore *vps)
> > +{
> > +	u32 bufsize = vps->pstore.bufsize;
> > +
> > +	virtio_cwrite(vps->vdev, struct virtio_pstore_config, bufsize,
> > +		     &bufsize);
> > +}
> > +
> > +static int virtpstore_probe(struct virtio_device *vdev)
> > +{
> > +	struct virtio_pstore *vps;
> > +	int err;
> > +
> > +	if (!vdev->config->get) {
> > +		dev_err(&vdev->dev, "driver init: config access disabled\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	vdev->priv = vps = kzalloc(sizeof(*vps), GFP_KERNEL);
> > +	if (!vps) {
> > +		err = -ENOMEM;
> > +		goto out;
> > +	}
> > +	vps->vdev = vdev;
> > +
> > +	err = virtpstore_init_vqs(vps);
> > +	if (err < 0)
> > +		goto out_free;
> > +
> > +	virtpstore_init_config(vps);
> > +
> > +	err = virt_pstore_init(vps);
> > +	if (err)
> > +		goto out_del_vq;
> > +
> > +	virtpstore_confirm_config(vps);
> > +
> > +	init_waitqueue_head(&vps->acked);
> > +
> > +	virtio_device_ready(vdev);
> 
> This call is needed if the .probe() function uses virtqueues before
> returning.  Right now it either doesn't use the virtqueues or it has
> already used them in virt_pstore_init().  Please move this before
> pstore_register().

Will do.

Thank you so much for your detailed review!
Namhyung

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/3] qemu: Implement virtio-pstore device
  2016-09-22 12:23   ` Stefan Hajnoczi
@ 2016-09-23  5:52     ` Namhyung Kim
  0 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-09-23  5:52 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: virtio-dev, Anton Vorontsov, Daniel P . Berrange, Kees Cook, kvm,
	Radim Krčmář,
	qemu-devel, Michael S. Tsirkin, LKML, Steven Rostedt,
	virtualization, Minchan Kim, Tony Luck, Anthony Liguori,
	Colin Cross, Paolo Bonzini, Ingo Molnar

On Thu, Sep 22, 2016 at 01:23:16PM +0100, Stefan Hajnoczi wrote:
> On Sun, Sep 04, 2016 at 11:38:59PM +0900, Namhyung Kim wrote:
> > +static void virtio_pstore_handle_io(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > +    VirtIOPstore *s = VIRTIO_PSTORE(vdev);
> > +    VirtQueueElement *elem;
> > +    struct virtio_pstore_req req;
> > +    struct virtio_pstore_res res;
> > +    ssize_t len = 0;
> > +    int ret;
> > +
> > +    for (;;) {
> > +        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
> > +        if (!elem) {
> > +            return;
> > +        }
> > +
> > +        if (elem->out_num < 1 || elem->in_num < 1) {
> > +            error_report("request or response buffer is missing");
> > +            exit(1);
> 
> The new virtio_error() function might be available, depending on when
> this patch series is merged.  virtio_error() should be used instead of
> exit(1).  See "[PATCH v5 0/9] virtio: avoid exit() when device enters
> invalid states" on qemu-devel.

Thanks for the info, will take a look.

> 
> > +        }
> > +
> > +        if (elem->out_num > 2 || elem->in_num > 3) {
> > +            error_report("invalid number of input/output buffer");
> > +            exit(1);
> > +        }
> 
> The VIRTIO specification requires that flexible framing is supported.
> The device cannot make assumptions about the scatter-gather list.  It
> must support any layout (e.g. even multiple 1-byte iovecs making up the
> buffer).

Ok.

> 
> > +
> > +        len = iov_to_buf(elem->out_sg, elem->out_num, 0, &req, sizeof(req));
> > +        if (len != (ssize_t)sizeof(req)) {
> > +            error_report("invalid request size: %ld", (long)len);
> > +            exit(1);
> > +        }
> > +        res.cmd  = req.cmd;
> > +        res.type = req.type;
> > +
> > +        switch (le16_to_cpu(req.cmd)) {
> > +        case VIRTIO_PSTORE_CMD_OPEN:
> > +            ret = virtio_pstore_do_open(s);
> > +            break;
> > +        case VIRTIO_PSTORE_CMD_CLOSE:
> > +            ret = virtio_pstore_do_close(s);
> > +            break;
> > +        case VIRTIO_PSTORE_CMD_ERASE:
> > +            ret = virtio_pstore_do_erase(s, &req);
> > +            break;
> > +        case VIRTIO_PSTORE_CMD_READ:
> > +            ret = virtio_pstore_do_read(s, elem);
> > +            if (ret == 1) {
> > +                /* async channel io */
> > +                continue;
> > +            }
> > +            break;
> > +        case VIRTIO_PSTORE_CMD_WRITE:
> > +            ret = virtio_pstore_do_write(s, elem, &req);
> > +            if (ret == 1) {
> > +                /* async channel io */
> > +                continue;
> > +            }
> > +            break;
> > +        default:
> > +            ret = -1;
> > +            break;
> > +        }
> > +
> > +        res.ret = ret;
> 
> Missing cpu_to_le()?

Right!

> 
> > +static void pstore_set_bufsize(Object *obj, Visitor *v,
> > +                               const char *name, void *opaque,
> > +                               Error **errp)
> > +{
> > +    VirtIOPstore *s = opaque;
> > +    Error *error = NULL;
> > +    uint64_t value;
> > +
> > +    visit_type_size(v, name, &value, &error);
> > +    if (error) {
> > +        error_propagate(errp, error);
> > +        return;
> > +    }
> > +
> > +    if (value < 4096) {
> > +        error_setg(&error, "Warning: too small buffer size: %"PRIu64, value);
> 
> This is an error, not a warning.  Please remove "Warning:" so it's clear
> to the user that this message caused QEMU to fail.

Will do.

Thanks,
Namhyung

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 3/3] kvmtool: Implement virtio-pstore device
  2016-08-31  8:07 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v4) Namhyung Kim
@ 2016-08-31  8:08 ` Namhyung Kim
  0 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-08-31  8:08 UTC (permalink / raw)
  To: virtio-dev, virtualization, kvm, qemu-devel
  Cc: LKML, Namhyung Kim, Will Deacon

From: Namhyung Kim <namhyung@gmail.com>

Add a virtio pstore device to allow kernel log messages to be saved on
the host.  With this patch, the log files are saved under the directory
given by the --pstore option.

  $ lkvm run --pstore=dir-xx

  (guest) # echo c > /proc/sysrq-trigger

  $ ls dir-xx
  dmesg-1.enc.z  dmesg-2.enc.z

The log files are usually compressed using zlib.  Users can easily read
the messages on the host or on the guest (using the pstore filesystem).

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
 Makefile                     |   1 +
 builtin-run.c                |   2 +
 include/kvm/kvm-config.h     |   1 +
 include/kvm/virtio-pci-dev.h |   2 +
 include/kvm/virtio-pstore.h  |  53 +++++
 include/linux/virtio_ids.h   |   1 +
 virtio/pstore.c              | 447 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 507 insertions(+)
 create mode 100644 include/kvm/virtio-pstore.h
 create mode 100644 virtio/pstore.c

diff --git a/Makefile b/Makefile
index 1f0196f..d7462b9 100644
--- a/Makefile
+++ b/Makefile
@@ -67,6 +67,7 @@ OBJS	+= virtio/net.o
 OBJS	+= virtio/rng.o
 OBJS    += virtio/balloon.o
 OBJS	+= virtio/pci.o
+OBJS	+= virtio/pstore.o
 OBJS	+= disk/blk.o
 OBJS	+= disk/qcow.o
 OBJS	+= disk/raw.o
diff --git a/builtin-run.c b/builtin-run.c
index 72b878d..08c12dd 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -128,6 +128,8 @@ void kvm_run_set_wrapper_sandbox(void)
 			" rootfs"),					\
 	OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path",	\
 			"Hugetlbfs path"),				\
+	OPT_STRING('\0', "pstore", &(cfg)->pstore_path, "path",		\
+			"pstore data path"),				\
 									\
 	OPT_GROUP("Kernel options:"),					\
 	OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel",	\
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..42b7651 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -45,6 +45,7 @@ struct kvm_config {
 	const char *hugetlbfs_path;
 	const char *custom_rootfs_name;
 	const char *real_cmdline;
+	const char *pstore_path;
 	struct virtio_net_params *net_params;
 	bool single_step;
 	bool vnc;
diff --git a/include/kvm/virtio-pci-dev.h b/include/kvm/virtio-pci-dev.h
index 48ae018..4339d94 100644
--- a/include/kvm/virtio-pci-dev.h
+++ b/include/kvm/virtio-pci-dev.h
@@ -15,6 +15,7 @@
 #define PCI_DEVICE_ID_VIRTIO_BLN		0x1005
 #define PCI_DEVICE_ID_VIRTIO_SCSI		0x1008
 #define PCI_DEVICE_ID_VIRTIO_9P			0x1009
+#define PCI_DEVICE_ID_VIRTIO_PSTORE		0x100a
 #define PCI_DEVICE_ID_VESA			0x2000
 #define PCI_DEVICE_ID_PCI_SHMEM			0x0001
 
@@ -34,5 +35,6 @@
 #define PCI_CLASS_RNG				0xff0000
 #define PCI_CLASS_BLN				0xff0000
 #define PCI_CLASS_9P				0xff0000
+#define PCI_CLASS_PSTORE			0xff0000
 
 #endif /* VIRTIO_PCI_DEV_H_ */
diff --git a/include/kvm/virtio-pstore.h b/include/kvm/virtio-pstore.h
new file mode 100644
index 0000000..9f52ffd
--- /dev/null
+++ b/include/kvm/virtio-pstore.h
@@ -0,0 +1,53 @@
+#ifndef KVM__PSTORE_VIRTIO_H
+#define KVM__PSTORE_VIRTIO_H
+
+#include <kvm/virtio.h>
+#include <sys/types.h>
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
+struct virtio_pstore_req {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		flags;
+	__virtio64		id;
+	__virtio32		count;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_res {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		ret;
+};
+
+struct virtio_pstore_fileinfo {
+	__virtio64		id;
+	__virtio32		count;
+	__virtio16		type;
+	__virtio16		unused;
+	__virtio32		flags;
+	__virtio32		len;
+	__virtio64		time_sec;
+	__virtio32		time_nsec;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_config {
+	__virtio32		bufsize;
+};
+
+int virtio_pstore__init(struct kvm *kvm);
+int virtio_pstore__exit(struct kvm *kvm);
+
+#endif /* KVM__PSTORE_VIRTIO_H */
diff --git a/include/linux/virtio_ids.h b/include/linux/virtio_ids.h
index 5f60aa4..40eabf7 100644
--- a/include/linux/virtio_ids.h
+++ b/include/linux/virtio_ids.h
@@ -40,5 +40,6 @@
 #define VIRTIO_ID_RPROC_SERIAL 11 /* virtio remoteproc serial link */
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       22 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/virtio/pstore.c b/virtio/pstore.c
new file mode 100644
index 0000000..fb9806f
--- /dev/null
+++ b/virtio/pstore.c
@@ -0,0 +1,447 @@
+#include "kvm/virtio-pstore.h"
+
+#include "kvm/virtio-pci-dev.h"
+
+#include "kvm/virtio.h"
+#include "kvm/util.h"
+#include "kvm/kvm.h"
+#include "kvm/threadpool.h"
+#include "kvm/guest_compat.h"
+
+#include <linux/virtio_ring.h>
+
+#include <linux/list.h>
+#include <fcntl.h>
+#include <dirent.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+#include <linux/kernel.h>
+#include <sys/eventfd.h>
+
+#define NUM_VIRT_QUEUES			2
+#define VIRTIO_PSTORE_QUEUE_SIZE	128
+
+struct io_thread_arg {
+	struct kvm		*kvm;
+	struct pstore_dev	*pdev;
+};
+
+struct pstore_dev {
+	struct list_head	list;
+	struct virtio_device	vdev;
+	pthread_t		io_thread;
+	int			io_efd;
+	int			done;
+
+	struct virtio_pstore_config *config;
+
+	int			fd;
+	DIR			*dir;
+	u64			id;
+
+	/* virtio queue */
+	struct virt_queue	vqs[NUM_VIRT_QUEUES];
+};
+
+static LIST_HEAD(pdevs);
+static int compat_id = -1;
+
+static u8 *get_config(struct kvm *kvm, void *dev)
+{
+	struct pstore_dev *pdev = dev;
+
+	return (u8*)pdev->config;
+}
+
+static u32 get_host_features(struct kvm *kvm, void *dev)
+{
+	/* Unused */
+	return 0;
+}
+
+static void set_guest_features(struct kvm *kvm, void *dev, u32 features)
+{
+	/* Unused */
+}
+
+static void virtio_pstore_to_filename(struct kvm *kvm, struct pstore_dev *pdev,
+				      char *buf, size_t sz,
+				      struct virtio_pstore_req *req)
+{
+	const char *basename;
+	unsigned long long id = 0;
+	unsigned int flags = virtio_host_to_guest_u64(pdev->vqs, req->flags);
+
+	switch (req->type) {
+	case VIRTIO_PSTORE_TYPE_DMESG:
+		basename = "dmesg";
+		id = pdev->id++;
+		break;
+	default:
+		basename = "unknown";
+		break;
+	}
+
+	snprintf(buf, sz, "%s/%s-%llu%s", kvm->cfg.pstore_path, basename, id,
+		 flags & VIRTIO_PSTORE_FL_COMPRESSED ? ".enc.z" : "");
+}
+
+static void virtio_pstore_from_filename(struct kvm *kvm, char *name,
+					char *buf, size_t sz,
+					struct virtio_pstore_fileinfo *info)
+{
+	size_t len = strlen(name);
+
+	snprintf(buf, sz, "%s/%s", kvm->cfg.pstore_path, name);
+
+	info->flags = 0;
+	if (len > 6 && !strncmp(name + len - 6, ".enc.z", 6))
+		info->flags |= VIRTIO_PSTORE_FL_COMPRESSED;
+
+	if (!strncmp(name, "dmesg-", 6)) {
+		info->type = VIRTIO_PSTORE_TYPE_DMESG;
+		name += strlen("dmesg-");
+	} else if (!strncmp(name, "unknown-", 8)) {
+		info->type = VIRTIO_PSTORE_TYPE_UNKNOWN;
+		name += strlen("unknown-");
+	}
+
+	info->id = strtoul(name, NULL, 0);
+}
+
+static int virtio_pstore_do_open(struct kvm *kvm, struct pstore_dev *pdev,
+				 struct virtio_pstore_req *req,
+				 struct iovec *iov)
+{
+	pdev->dir = opendir(kvm->cfg.pstore_path);
+	if (pdev->dir == NULL)
+		return -errno;
+
+	return 0;
+}
+
+static int virtio_pstore_do_close(struct kvm *kvm, struct pstore_dev *pdev,
+				  struct virtio_pstore_req *req,
+				  struct iovec *iov)
+{
+	if (pdev->dir == NULL)
+		return -1;
+
+	closedir(pdev->dir);
+	pdev->dir = NULL;
+
+	return 0;
+}
+
+static ssize_t virtio_pstore_do_read(struct kvm *kvm, struct pstore_dev *pdev,
+				     struct virtio_pstore_req *req,
+				     struct iovec *iov,
+				     struct virtio_pstore_fileinfo *info)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+	struct stat stbuf;
+	struct dirent *dent;
+
+	if (pdev->dir == NULL)
+		return 0;
+
+	dent = readdir(pdev->dir);
+	while (dent) {
+		if (dent->d_name[0] != '.')
+			break;
+		dent = readdir(pdev->dir);
+	}
+
+	if (dent == NULL)
+		return 0;
+
+	virtio_pstore_from_filename(kvm, dent->d_name, path, sizeof(path), info);
+	fp = fopen(path, "r");
+	if (fp == NULL)
+		return -1;
+
+	if (fstat(fileno(fp), &stbuf) < 0)
+		return -1;
+
+	len = fread(iov[3].iov_base, 1, iov[3].iov_len, fp);
+	if (len < 0 && errno == EAGAIN) {
+		len = 0;
+		goto out;
+	}
+
+	info->id     = virtio_host_to_guest_u64(pdev->vqs, info->id);
+	info->type   = virtio_host_to_guest_u64(pdev->vqs, info->type);
+	info->flags  = virtio_host_to_guest_u32(pdev->vqs, info->flags);
+	info->len    = virtio_host_to_guest_u32(pdev->vqs, len);
+
+	info->time_sec  = virtio_host_to_guest_u64(pdev->vqs, stbuf.st_ctim.tv_sec);
+	info->time_nsec = virtio_host_to_guest_u32(pdev->vqs, stbuf.st_ctim.tv_nsec);
+
+	len += sizeof(*info);
+
+out:
+	fclose(fp);
+	return len;
+}
+
+static ssize_t virtio_pstore_do_write(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	fp = fopen(path, "a");
+	if (fp == NULL)
+		return -1;
+
+	len = fwrite(iov[1].iov_base, 1, iov[1].iov_len, fp);
+
+	fclose(fp);
+
+	/* report a short or failed write instead of silently succeeding */
+	return (size_t)len == iov[1].iov_len ? 0 : -1;
+}
+
+static ssize_t virtio_pstore_do_erase(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	return unlink(path);
+}
+
+static bool virtio_pstore_do_io_request(struct kvm *kvm, struct pstore_dev *pdev,
+					struct virt_queue *vq)
+{
+	struct iovec iov[VIRTIO_PSTORE_QUEUE_SIZE];
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct virtio_pstore_fileinfo *info;
+	ssize_t len = 0;
+	u16 out, in, head;
+	int ret = 0;
+
+	head = virt_queue__get_iov(vq, iov, &out, &in, kvm);
+
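+	/*
+	 * Descriptor layout (as used below): iov[0] holds the request and
+	 * iov[out] the response.  For WRITE the record data is in iov[1];
+	 * for READ the fileinfo goes to iov[out + 1] and the data to iov[3].
+	 */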
+	if (iov[0].iov_len != sizeof(*req) || iov[out].iov_len != sizeof(*res))
+		return false;
+
+	req = iov[0].iov_base;
+	res = iov[out].iov_base;
+
+	switch (virtio_guest_to_host_u16(vq, req->cmd)) {
+	case VIRTIO_PSTORE_CMD_OPEN:
+		ret = virtio_pstore_do_open(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_READ:
+		info = iov[out + 1].iov_base;
+		ret = virtio_pstore_do_read(kvm, pdev, req, iov, info);
+		if (ret > 0) {
+			len = ret;
+			ret = 0;
+		}
+		break;
+	case VIRTIO_PSTORE_CMD_WRITE:
+		ret = virtio_pstore_do_write(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_CLOSE:
+		ret = virtio_pstore_do_close(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_ERASE:
+		ret = virtio_pstore_do_erase(kvm, pdev, req, iov);
+		break;
+	default:
+		return false;
+	}
+
+	res->cmd  = req->cmd;
+	res->type = req->type;
+	res->ret  = virtio_host_to_guest_u32(vq, ret);
+
+	virt_queue__set_used_elem(vq, head, sizeof(*res) + len);
+
+	return ret == 0;
+}
+
+static void virtio_pstore_do_io(struct kvm *kvm, struct pstore_dev *pdev,
+				struct virt_queue *vq)
+{
+	bool done = false;
+
+	while (virt_queue__available(vq)) {
+		virtio_pstore_do_io_request(kvm, pdev, vq);
+		done = true;
+	}
+
+	if (done)
+		pdev->vdev.ops->signal_vq(kvm, &pdev->vdev, vq - pdev->vqs);
+}
+
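+/*
+ * The I/O thread sleeps on the eventfd written by notify_vq() and, once
+ * kicked, drains both virtqueues before going back to sleep.
+ */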
+static void *virtio_pstore_io_thread(void *arg)
+{
+	struct io_thread_arg *io_arg = arg;
+	struct pstore_dev *pdev = io_arg->pdev;
+	struct kvm *kvm = io_arg->kvm;
+	u64 data;
+	int r;
+
+	kvm__set_thread_name("virtio-pstore-io");
+
+	while (!pdev->done) {
+		r = read(pdev->io_efd, &data, sizeof(u64));
+		if (r < 0)
+			continue;
+
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[0]);
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[1]);
+	}
+	free(io_arg);
+
+	pthread_exit(NULL);
+	return NULL;
+}
+
+static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align,
+		   u32 pfn)
+{
+	struct pstore_dev *pdev = dev;
+	struct virt_queue *queue;
+	void *p;
+
+	compat__remove_message(compat_id);
+
+	queue		= &pdev->vqs[vq];
+	queue->pfn	= pfn;
+	p		= virtio_get_vq(kvm, queue->pfn, page_size);
+
+	vring_init(&queue->vring, VIRTIO_PSTORE_QUEUE_SIZE, p, align);
+
+	return 0;
+}
+
+static int notify_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+	u64 data = 1;
+	int r;
+
+	r = write(pdev->io_efd, &data, sizeof(data));
+	if (r < 0)
+		return r;
+
+	return 0;
+}
+
+static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+
+	return pdev->vqs[vq].pfn;
+}
+
+static int get_size_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	return VIRTIO_PSTORE_QUEUE_SIZE;
+}
+
+static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size)
+{
+	/* FIXME: dynamic */
+	return size;
+}
+
+static struct virtio_ops pstore_dev_virtio_ops = {
+	.get_config		= get_config,
+	.get_host_features	= get_host_features,
+	.set_guest_features	= set_guest_features,
+	.init_vq		= init_vq,
+	.notify_vq		= notify_vq,
+	.get_pfn_vq		= get_pfn_vq,
+	.get_size_vq		= get_size_vq,
+	.set_size_vq		= set_size_vq,
+};
+
+int virtio_pstore__init(struct kvm *kvm)
+{
+	struct pstore_dev *pdev;
+	struct io_thread_arg *io_arg = NULL;
+	int r;
+
+	if (!kvm->cfg.pstore_path)
+		return 0;
+
+	pdev = calloc(1, sizeof(*pdev));
+	if (pdev == NULL)
+		return -ENOMEM;
+
+	pdev->config = calloc(1, sizeof(*pdev->config));
+	if (pdev->config == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->id = 1;
+
+	io_arg = malloc(sizeof(*io_arg));
+	if (io_arg == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->io_efd = eventfd(0, 0);
+
+	*io_arg = (struct io_thread_arg) {
+		.pdev   = pdev,
+		.kvm    = kvm,
+	};
+	r = pthread_create(&pdev->io_thread, NULL,
+			   virtio_pstore_io_thread, io_arg);
+	if (r != 0) {
+		/* pthread_create() returns a positive errno value on failure */
+		r = -r;
+		goto cleanup;
+	}
+
+	r = virtio_init(kvm, pdev, &pdev->vdev, &pstore_dev_virtio_ops,
+			VIRTIO_DEFAULT_TRANS(kvm), PCI_DEVICE_ID_VIRTIO_PSTORE,
+			VIRTIO_ID_PSTORE, PCI_CLASS_PSTORE);
+	if (r < 0)
+		goto cleanup;
+
+	list_add_tail(&pdev->list, &pdevs);
+
+	if (compat_id == -1)
+		compat_id = virtio_compat_add_message("virtio-pstore", "CONFIG_VIRTIO_PSTORE");
+	return 0;
+
+cleanup:
+	free(io_arg);
+	free(pdev->config);
+	free(pdev);
+
+	return r;
+}
+virtio_dev_init(virtio_pstore__init);
+
+int virtio_pstore__exit(struct kvm *kvm)
+{
+	struct pstore_dev *pdev, *tmp;
+
+	list_for_each_entry_safe(pdev, tmp, &pdevs, list) {
+		list_del(&pdev->list);
+		close(pdev->io_efd);
+		pdev->vdev.ops->exit(kvm, &pdev->vdev);
+		free(pdev);
+	}
+
+	return 0;
+}
+virtio_dev_exit(virtio_pstore__exit);
-- 
2.9.3

* [PATCH 3/3] kvmtool: Implement virtio-pstore device
  2016-08-20  8:07 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v3) Namhyung Kim
@ 2016-08-20  8:07 ` Namhyung Kim
  0 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-08-20  8:07 UTC (permalink / raw)
  To: virtio-dev, kvm, qemu-devel, virtualization
  Cc: LKML, Namhyung Kim, Will Deacon

From: Namhyung Kim <namhyung@gmail.com>

Add a virtio pstore device to allow kernel log messages to be saved on
the host.  With this patch, the log files are saved under the directory
given by the --pstore option.

  $ lkvm run --pstore=dir-xx

  (guest) # echo c > /proc/sysrq-trigger

  $ ls dir-xx
  dmesg-1.enc.z  dmesg-2.enc.z

The log files are usually compressed using zlib.  Users can easily read
the messages on the host, or on the guest through the pstore filesystem,
as shown below.
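
For reference, a minimal sketch of inspecting the records from inside the
guest at the next boot (assuming the pstore filesystem is available; the
exact record names depend on the backend name registered by the driver):

  (guest) # mount -t pstore pstore /sys/fs/pstore
  (guest) # ls /sys/fs/pstore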

Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
 Makefile                     |   1 +
 builtin-run.c                |   2 +
 include/kvm/kvm-config.h     |   1 +
 include/kvm/virtio-pci-dev.h |   2 +
 include/kvm/virtio-pstore.h  |  53 +++++
 include/linux/virtio_ids.h   |   1 +
 virtio/pstore.c              | 447 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 507 insertions(+)
 create mode 100644 include/kvm/virtio-pstore.h
 create mode 100644 virtio/pstore.c

diff --git a/Makefile b/Makefile
index 1f0196f..d7462b9 100644
--- a/Makefile
+++ b/Makefile
@@ -67,6 +67,7 @@ OBJS	+= virtio/net.o
 OBJS	+= virtio/rng.o
 OBJS    += virtio/balloon.o
 OBJS	+= virtio/pci.o
+OBJS	+= virtio/pstore.o
 OBJS	+= disk/blk.o
 OBJS	+= disk/qcow.o
 OBJS	+= disk/raw.o
diff --git a/builtin-run.c b/builtin-run.c
index 72b878d..08c12dd 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -128,6 +128,8 @@ void kvm_run_set_wrapper_sandbox(void)
 			" rootfs"),					\
 	OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path",	\
 			"Hugetlbfs path"),				\
+	OPT_STRING('\0', "pstore", &(cfg)->pstore_path, "path",		\
+			"pstore data path"),				\
 									\
 	OPT_GROUP("Kernel options:"),					\
 	OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel",	\
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..42b7651 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -45,6 +45,7 @@ struct kvm_config {
 	const char *hugetlbfs_path;
 	const char *custom_rootfs_name;
 	const char *real_cmdline;
+	const char *pstore_path;
 	struct virtio_net_params *net_params;
 	bool single_step;
 	bool vnc;
diff --git a/include/kvm/virtio-pci-dev.h b/include/kvm/virtio-pci-dev.h
index 48ae018..4339d94 100644
--- a/include/kvm/virtio-pci-dev.h
+++ b/include/kvm/virtio-pci-dev.h
@@ -15,6 +15,7 @@
 #define PCI_DEVICE_ID_VIRTIO_BLN		0x1005
 #define PCI_DEVICE_ID_VIRTIO_SCSI		0x1008
 #define PCI_DEVICE_ID_VIRTIO_9P			0x1009
+#define PCI_DEVICE_ID_VIRTIO_PSTORE		0x100a
 #define PCI_DEVICE_ID_VESA			0x2000
 #define PCI_DEVICE_ID_PCI_SHMEM			0x0001
 
@@ -34,5 +35,6 @@
 #define PCI_CLASS_RNG				0xff0000
 #define PCI_CLASS_BLN				0xff0000
 #define PCI_CLASS_9P				0xff0000
+#define PCI_CLASS_PSTORE			0xff0000
 
 #endif /* VIRTIO_PCI_DEV_H_ */
diff --git a/include/kvm/virtio-pstore.h b/include/kvm/virtio-pstore.h
new file mode 100644
index 0000000..9f52ffd
--- /dev/null
+++ b/include/kvm/virtio-pstore.h
@@ -0,0 +1,53 @@
+#ifndef KVM__PSTORE_VIRTIO_H
+#define KVM__PSTORE_VIRTIO_H
+
+#include <kvm/virtio.h>
+#include <sys/types.h>
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
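+/*
+ * Request/response layout: every command starts with a virtio_pstore_req
+ * and is answered with a virtio_pstore_res; READ replies additionally
+ * carry a virtio_pstore_fileinfo describing the returned record.
+ */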
+struct virtio_pstore_req {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		flags;
+	__virtio64		id;
+	__virtio32		count;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_res {
+	__virtio16		cmd;
+	__virtio16		type;
+	__virtio32		ret;
+};
+
+struct virtio_pstore_fileinfo {
+	__virtio64		id;
+	__virtio32		count;
+	__virtio16		type;
+	__virtio16		unused;
+	__virtio32		flags;
+	__virtio32		len;
+	__virtio64		time_sec;
+	__virtio32		time_nsec;
+	__virtio32		reserved;
+};
+
+struct virtio_pstore_config {
+	__virtio32		bufsize;
+};
+
+int virtio_pstore__init(struct kvm *kvm);
+int virtio_pstore__exit(struct kvm *kvm);
+
+#endif /* KVM__PSTORE_VIRTIO_H */
diff --git a/include/linux/virtio_ids.h b/include/linux/virtio_ids.h
index 5f60aa4..40eabf7 100644
--- a/include/linux/virtio_ids.h
+++ b/include/linux/virtio_ids.h
@@ -40,5 +40,6 @@
 #define VIRTIO_ID_RPROC_SERIAL 11 /* virtio remoteproc serial link */
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       22 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/virtio/pstore.c b/virtio/pstore.c
new file mode 100644
index 0000000..fb9806f
--- /dev/null
+++ b/virtio/pstore.c
@@ -0,0 +1,447 @@
+#include "kvm/virtio-pstore.h"
+
+#include "kvm/virtio-pci-dev.h"
+
+#include "kvm/virtio.h"
+#include "kvm/util.h"
+#include "kvm/kvm.h"
+#include "kvm/threadpool.h"
+#include "kvm/guest_compat.h"
+
+#include <linux/virtio_ring.h>
+
+#include <linux/list.h>
+#include <fcntl.h>
+#include <dirent.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+#include <linux/kernel.h>
+#include <sys/eventfd.h>
+
+#define NUM_VIRT_QUEUES			2
+#define VIRTIO_PSTORE_QUEUE_SIZE	128
+
+struct io_thread_arg {
+	struct kvm		*kvm;
+	struct pstore_dev	*pdev;
+};
+
+struct pstore_dev {
+	struct list_head	list;
+	struct virtio_device	vdev;
+	pthread_t		io_thread;
+	int			io_efd;
+	int			done;
+
+	struct virtio_pstore_config *config;
+
+	int			fd;
+	DIR			*dir;
+	u64			id;
+
+	/* virtio queue */
+	struct virt_queue	vqs[NUM_VIRT_QUEUES];
+};
+
+static LIST_HEAD(pdevs);
+static int compat_id = -1;
+
+static u8 *get_config(struct kvm *kvm, void *dev)
+{
+	struct pstore_dev *pdev = dev;
+
+	return (u8*)pdev->config;
+}
+
+static u32 get_host_features(struct kvm *kvm, void *dev)
+{
+	/* Unused */
+	return 0;
+}
+
+static void set_guest_features(struct kvm *kvm, void *dev, u32 features)
+{
+	/* Unused */
+}
+
+static void virtio_pstore_to_filename(struct kvm *kvm, struct pstore_dev *pdev,
+				      char *buf, size_t sz,
+				      struct virtio_pstore_req *req)
+{
+	const char *basename;
+	unsigned long long id = 0;
+	unsigned int flags = virtio_guest_to_host_u32(pdev->vqs, req->flags);
+
+	switch (virtio_guest_to_host_u16(pdev->vqs, req->type)) {
+	case VIRTIO_PSTORE_TYPE_DMESG:
+		basename = "dmesg";
+		id = pdev->id++;
+		break;
+	default:
+		basename = "unknown";
+		break;
+	}
+
+	snprintf(buf, sz, "%s/%s-%llu%s", kvm->cfg.pstore_path, basename, id,
+		 flags & VIRTIO_PSTORE_FL_COMPRESSED ? ".enc.z" : "");
+}
+
+static void virtio_pstore_from_filename(struct kvm *kvm, char *name,
+					char *buf, size_t sz,
+					struct virtio_pstore_fileinfo *info)
+{
+	size_t len = strlen(name);
+
+	snprintf(buf, sz, "%s/%s", kvm->cfg.pstore_path, name);
+
+	info->flags = 0;
+	if (len > 6 && !strncmp(name + len - 6, ".enc.z", 6))
+		info->flags |= VIRTIO_PSTORE_FL_COMPRESSED;
+
+	if (!strncmp(name, "dmesg-", 6)) {
+		info->type = VIRTIO_PSTORE_TYPE_DMESG;
+		name += strlen("dmesg-");
+	} else if (!strncmp(name, "unknown-", 8)) {
+		info->type = VIRTIO_PSTORE_TYPE_UNKNOWN;
+		name += strlen("unknown-");
+	}
+
+	info->id = strtoul(name, NULL, 0);
+}
+
+static int virtio_pstore_do_open(struct kvm *kvm, struct pstore_dev *pdev,
+				 struct virtio_pstore_req *req,
+				 struct iovec *iov)
+{
+	pdev->dir = opendir(kvm->cfg.pstore_path);
+	if (pdev->dir == NULL)
+		return -errno;
+
+	return 0;
+}
+
+static int virtio_pstore_do_close(struct kvm *kvm, struct pstore_dev *pdev,
+				  struct virtio_pstore_req *req,
+				  struct iovec *iov)
+{
+	if (pdev->dir == NULL)
+		return -1;
+
+	closedir(pdev->dir);
+	pdev->dir = NULL;
+
+	return 0;
+}
+
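+/*
+ * Return the next (non-hidden) record from the pstore directory: fill in
+ * the fileinfo for the guest and copy the file contents into its buffer.
+ */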
+static ssize_t virtio_pstore_do_read(struct kvm *kvm, struct pstore_dev *pdev,
+				     struct virtio_pstore_req *req,
+				     struct iovec *iov,
+				     struct virtio_pstore_fileinfo *info)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+	struct stat stbuf;
+	struct dirent *dent;
+
+	if (pdev->dir == NULL)
+		return 0;
+
+	dent = readdir(pdev->dir);
+	while (dent) {
+		if (dent->d_name[0] != '.')
+			break;
+		dent = readdir(pdev->dir);
+	}
+
+	if (dent == NULL)
+		return 0;
+
+	virtio_pstore_from_filename(kvm, dent->d_name, path, sizeof(path), info);
+	fp = fopen(path, "r");
+	if (fp == NULL)
+		return -1;
+
+	if (fstat(fileno(fp), &stbuf) < 0) {
+		len = -1;	/* do not leak the FILE handle on error */
+		goto out;
+	}
+
+	/* fread() returns a size_t, so use ferror() to detect read errors */
+	len = fread(iov[3].iov_base, 1, iov[3].iov_len, fp);
+	if (ferror(fp)) {
+		len = 0;
+		goto out;
+	}
+
+	info->id     = virtio_host_to_guest_u64(pdev->vqs, info->id);
+	info->type   = virtio_host_to_guest_u16(pdev->vqs, info->type);
+	info->flags  = virtio_host_to_guest_u32(pdev->vqs, info->flags);
+	info->len    = virtio_host_to_guest_u32(pdev->vqs, len);
+
+	info->time_sec  = virtio_host_to_guest_u64(pdev->vqs, stbuf.st_ctim.tv_sec);
+	info->time_nsec = virtio_host_to_guest_u32(pdev->vqs, stbuf.st_ctim.tv_nsec);
+
+	len += sizeof(*info);
+
+out:
+	fclose(fp);
+	return len;
+}
+
+static ssize_t virtio_pstore_do_write(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	fp = fopen(path, "a");
+	if (fp == NULL)
+		return -1;
+
+	len = fwrite(iov[1].iov_base, 1, iov[1].iov_len, fp);
+
+	fclose(fp);
+
+	/* report a short or failed write instead of silently succeeding */
+	return (size_t)len == iov[1].iov_len ? 0 : -1;
+}
+
+static ssize_t virtio_pstore_do_erase(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct virtio_pstore_req *req,
+				      struct iovec *iov)
+{
+	char path[PATH_MAX];
+
+	virtio_pstore_to_filename(kvm, pdev, path, sizeof(path), req);
+
+	return unlink(path);
+}
+
+static bool virtio_pstore_do_io_request(struct kvm *kvm, struct pstore_dev *pdev,
+					struct virt_queue *vq)
+{
+	struct iovec iov[VIRTIO_PSTORE_QUEUE_SIZE];
+	struct virtio_pstore_req *req;
+	struct virtio_pstore_res *res;
+	struct virtio_pstore_fileinfo *info;
+	ssize_t len = 0;
+	u16 out, in, head;
+	int ret = 0;
+
+	head = virt_queue__get_iov(vq, iov, &out, &in, kvm);
+
+	if (iov[0].iov_len != sizeof(*req) || iov[out].iov_len != sizeof(*res))
+		return false;
+
+	req = iov[0].iov_base;
+	res = iov[out].iov_base;
+
+	switch (virtio_guest_to_host_u16(vq, req->cmd)) {
+	case VIRTIO_PSTORE_CMD_OPEN:
+		ret = virtio_pstore_do_open(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_READ:
+		info = iov[out + 1].iov_base;
+		ret = virtio_pstore_do_read(kvm, pdev, req, iov, info);
+		if (ret > 0) {
+			len = ret;
+			ret = 0;
+		}
+		break;
+	case VIRTIO_PSTORE_CMD_WRITE:
+		ret = virtio_pstore_do_write(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_CLOSE:
+		ret = virtio_pstore_do_close(kvm, pdev, req, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_ERASE:
+		ret = virtio_pstore_do_erase(kvm, pdev, req, iov);
+		break;
+	default:
+		return false;
+	}
+
+	res->cmd  = req->cmd;
+	res->type = req->type;
+	res->ret  = virtio_host_to_guest_u32(vq, ret);
+
+	virt_queue__set_used_elem(vq, head, sizeof(*res) + len);
+
+	return ret == 0;
+}
+
+static void virtio_pstore_do_io(struct kvm *kvm, struct pstore_dev *pdev,
+				struct virt_queue *vq)
+{
+	bool done = false;
+
+	while (virt_queue__available(vq)) {
+		virtio_pstore_do_io_request(kvm, pdev, vq);
+		done = true;
+	}
+
+	if (done)
+		pdev->vdev.ops->signal_vq(kvm, &pdev->vdev, vq - pdev->vqs);
+}
+
+static void *virtio_pstore_io_thread(void *arg)
+{
+	struct io_thread_arg *io_arg = arg;
+	struct pstore_dev *pdev = io_arg->pdev;
+	struct kvm *kvm = io_arg->kvm;
+	u64 data;
+	int r;
+
+	kvm__set_thread_name("virtio-pstore-io");
+
+	while (!pdev->done) {
+		r = read(pdev->io_efd, &data, sizeof(u64));
+		if (r < 0)
+			continue;
+
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[0]);
+		virtio_pstore_do_io(kvm, pdev, &pdev->vqs[1]);
+	}
+	free(io_arg);
+
+	pthread_exit(NULL);
+	return NULL;
+}
+
+static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align,
+		   u32 pfn)
+{
+	struct pstore_dev *pdev = dev;
+	struct virt_queue *queue;
+	void *p;
+
+	compat__remove_message(compat_id);
+
+	queue		= &pdev->vqs[vq];
+	queue->pfn	= pfn;
+	p		= virtio_get_vq(kvm, queue->pfn, page_size);
+
+	vring_init(&queue->vring, VIRTIO_PSTORE_QUEUE_SIZE, p, align);
+
+	return 0;
+}
+
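+/*
+ * Guest kick: wake up the I/O thread through the eventfd so the request
+ * is handled outside of vcpu context.
+ */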
+static int notify_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+	u64 data = 1;
+	int r;
+
+	r = write(pdev->io_efd, &data, sizeof(data));
+	if (r < 0)
+		return r;
+
+	return 0;
+}
+
+static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+
+	return pdev->vqs[vq].pfn;
+}
+
+static int get_size_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	return VIRTIO_PSTORE_QUEUE_SIZE;
+}
+
+static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size)
+{
+	/* FIXME: dynamic */
+	return size;
+}
+
+static struct virtio_ops pstore_dev_virtio_ops = {
+	.get_config		= get_config,
+	.get_host_features	= get_host_features,
+	.set_guest_features	= set_guest_features,
+	.init_vq		= init_vq,
+	.notify_vq		= notify_vq,
+	.get_pfn_vq		= get_pfn_vq,
+	.get_size_vq		= get_size_vq,
+	.set_size_vq		= set_size_vq,
+};
+
+int virtio_pstore__init(struct kvm *kvm)
+{
+	struct pstore_dev *pdev;
+	struct io_thread_arg *io_arg = NULL;
+	int r;
+
+	if (!kvm->cfg.pstore_path)
+		return 0;
+
+	pdev = calloc(1, sizeof(*pdev));
+	if (pdev == NULL)
+		return -ENOMEM;
+
+	pdev->config = calloc(1, sizeof(*pdev->config));
+	if (pdev->config == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->id = 1;
+
+	io_arg = malloc(sizeof(*io_arg));
+	if (io_arg == NULL) {
+		r = -ENOMEM;
+		goto cleanup;
+	}
+
+	pdev->io_efd = eventfd(0, 0);
+
+	*io_arg = (struct io_thread_arg) {
+		.pdev   = pdev,
+		.kvm    = kvm,
+	};
+	r = pthread_create(&pdev->io_thread, NULL,
+			   virtio_pstore_io_thread, io_arg);
+	if (r != 0) {
+		/* pthread_create() returns a positive errno value on failure */
+		r = -r;
+		goto cleanup;
+	}
+
+	r = virtio_init(kvm, pdev, &pdev->vdev, &pstore_dev_virtio_ops,
+			VIRTIO_DEFAULT_TRANS(kvm), PCI_DEVICE_ID_VIRTIO_PSTORE,
+			VIRTIO_ID_PSTORE, PCI_CLASS_PSTORE);
+	if (r < 0)
+		goto cleanup;
+
+	list_add_tail(&pdev->list, &pdevs);
+
+	if (compat_id == -1)
+		compat_id = virtio_compat_add_message("virtio-pstore", "CONFIG_VIRTIO_PSTORE");
+	return 0;
+
+cleanup:
+	free(io_arg);
+	free(pdev->config);
+	free(pdev);
+
+	return r;
+}
+virtio_dev_init(virtio_pstore__init);
+
+int virtio_pstore__exit(struct kvm *kvm)
+{
+	struct pstore_dev *pdev, *tmp;
+
+	list_for_each_entry_safe(pdev, tmp, &pdevs, list) {
+		list_del(&pdev->list);
+		close(pdev->io_efd);
+		pdev->vdev.ops->exit(kvm, &pdev->vdev);
+		free(pdev);
+	}
+
+	return 0;
+}
+virtio_dev_exit(virtio_pstore__exit);
-- 
2.9.3

* [PATCH 3/3] kvmtool: Implement virtio-pstore device
  2016-07-18  4:37 [RFC/PATCHSET 0/3] virtio-pstore: Implement virtio pstore device Namhyung Kim
@ 2016-07-18  4:37 ` Namhyung Kim
  0 siblings, 0 replies; 12+ messages in thread
From: Namhyung Kim @ 2016-07-18  4:37 UTC (permalink / raw)
  To: LKML
  Cc: Paolo Bonzini, Radim Krčmář,
	Michael S. Tsirkin, Anthony Liguori, Anton Vorontsov,
	Colin Cross, Kees Cook, Tony Luck, Steven Rostedt, Ingo Molnar,
	Minchan Kim, kvm, virtualization

Add a virtio pstore device to allow kernel log messages to be saved on
the host.  With this patch, the log files are saved under the directory
given by the --pstore option.

  $ lkvm run --pstore=dir-xx

  (guest) # echo c > /proc/sysrq-trigger

  $ ls dir-xx
  dmesg-0.enc.z  dmesg-1.enc.z

The log files are usually compressed using zlib.  Users can easily read
the messages on the host or on the guest (through the pstore filesystem).

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: kvm@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
---
 Makefile                     |   1 +
 builtin-run.c                |   2 +
 include/kvm/kvm-config.h     |   1 +
 include/kvm/virtio-pci-dev.h |   2 +
 include/kvm/virtio-pstore.h  |  31 ++++
 include/linux/virtio_ids.h   |   1 +
 virtio/pstore.c              | 359 +++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 397 insertions(+)
 create mode 100644 include/kvm/virtio-pstore.h
 create mode 100644 virtio/pstore.c

diff --git a/Makefile b/Makefile
index 1f0196f..d7462b9 100644
--- a/Makefile
+++ b/Makefile
@@ -67,6 +67,7 @@ OBJS	+= virtio/net.o
 OBJS	+= virtio/rng.o
 OBJS    += virtio/balloon.o
 OBJS	+= virtio/pci.o
+OBJS	+= virtio/pstore.o
 OBJS	+= disk/blk.o
 OBJS	+= disk/qcow.o
 OBJS	+= disk/raw.o
diff --git a/builtin-run.c b/builtin-run.c
index 72b878d..08c12dd 100644
--- a/builtin-run.c
+++ b/builtin-run.c
@@ -128,6 +128,8 @@ void kvm_run_set_wrapper_sandbox(void)
 			" rootfs"),					\
 	OPT_STRING('\0', "hugetlbfs", &(cfg)->hugetlbfs_path, "path",	\
 			"Hugetlbfs path"),				\
+	OPT_STRING('\0', "pstore", &(cfg)->pstore_path, "path",		\
+			"pstore data path"),				\
 									\
 	OPT_GROUP("Kernel options:"),					\
 	OPT_STRING('k', "kernel", &(cfg)->kernel_filename, "kernel",	\
diff --git a/include/kvm/kvm-config.h b/include/kvm/kvm-config.h
index 386fa8c..42b7651 100644
--- a/include/kvm/kvm-config.h
+++ b/include/kvm/kvm-config.h
@@ -45,6 +45,7 @@ struct kvm_config {
 	const char *hugetlbfs_path;
 	const char *custom_rootfs_name;
 	const char *real_cmdline;
+	const char *pstore_path;
 	struct virtio_net_params *net_params;
 	bool single_step;
 	bool vnc;
diff --git a/include/kvm/virtio-pci-dev.h b/include/kvm/virtio-pci-dev.h
index 48ae018..4339d94 100644
--- a/include/kvm/virtio-pci-dev.h
+++ b/include/kvm/virtio-pci-dev.h
@@ -15,6 +15,7 @@
 #define PCI_DEVICE_ID_VIRTIO_BLN		0x1005
 #define PCI_DEVICE_ID_VIRTIO_SCSI		0x1008
 #define PCI_DEVICE_ID_VIRTIO_9P			0x1009
+#define PCI_DEVICE_ID_VIRTIO_PSTORE		0x100a
 #define PCI_DEVICE_ID_VESA			0x2000
 #define PCI_DEVICE_ID_PCI_SHMEM			0x0001
 
@@ -34,5 +35,6 @@
 #define PCI_CLASS_RNG				0xff0000
 #define PCI_CLASS_BLN				0xff0000
 #define PCI_CLASS_9P				0xff0000
+#define PCI_CLASS_PSTORE			0xff0000
 
 #endif /* VIRTIO_PCI_DEV_H_ */
diff --git a/include/kvm/virtio-pstore.h b/include/kvm/virtio-pstore.h
new file mode 100644
index 0000000..293ab57
--- /dev/null
+++ b/include/kvm/virtio-pstore.h
@@ -0,0 +1,31 @@
+#ifndef KVM__PSTORE_VIRTIO_H
+#define KVM__PSTORE_VIRTIO_H
+
+struct kvm;
+
+#define VIRTIO_PSTORE_TYPE_UNKNOWN  0
+#define VIRTIO_PSTORE_TYPE_DMESG    1
+
+#define VIRTIO_PSTORE_CMD_NULL   0
+#define VIRTIO_PSTORE_CMD_OPEN   1
+#define VIRTIO_PSTORE_CMD_READ   2
+#define VIRTIO_PSTORE_CMD_WRITE  3
+#define VIRTIO_PSTORE_CMD_ERASE  4
+#define VIRTIO_PSTORE_CMD_CLOSE  5
+
+#define VIRTIO_PSTORE_FL_COMPRESSED  1
+
+struct pstore_hdr {
+	u64			id;
+	u32			flags;
+	u16			cmd;
+	u16			type;
+	u64			time_sec;
+	u32			time_nsec;
+	u32			unused;
+};
+
+int virtio_pstore__init(struct kvm *kvm);
+int virtio_pstore__exit(struct kvm *kvm);
+
+#endif /* KVM__PSTORE_VIRTIO_H */
diff --git a/include/linux/virtio_ids.h b/include/linux/virtio_ids.h
index 5f60aa4..f34cabc 100644
--- a/include/linux/virtio_ids.h
+++ b/include/linux/virtio_ids.h
@@ -40,5 +40,6 @@
 #define VIRTIO_ID_RPROC_SERIAL 11 /* virtio remoteproc serial link */
 #define VIRTIO_ID_CAIF	       12 /* Virtio caif */
 #define VIRTIO_ID_INPUT        18 /* virtio input */
+#define VIRTIO_ID_PSTORE       19 /* virtio pstore */
 
 #endif /* _LINUX_VIRTIO_IDS_H */
diff --git a/virtio/pstore.c b/virtio/pstore.c
new file mode 100644
index 0000000..094e54b
--- /dev/null
+++ b/virtio/pstore.c
@@ -0,0 +1,359 @@
+#include "kvm/virtio-pstore.h"
+
+#include "kvm/virtio-pci-dev.h"
+
+#include "kvm/virtio.h"
+#include "kvm/util.h"
+#include "kvm/kvm.h"
+#include "kvm/threadpool.h"
+#include "kvm/guest_compat.h"
+
+#include <linux/virtio_ring.h>
+
+#include <linux/list.h>
+#include <fcntl.h>
+#include <dirent.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <pthread.h>
+#include <linux/kernel.h>
+
+#define NUM_VIRT_QUEUES			1
+#define VIRTIO_PSTORE_QUEUE_SIZE	128
+
+struct pstore_dev_job {
+	struct virt_queue	*vq;
+	struct pstore_dev	*pdev;
+	struct thread_pool__job	job_id;
+};
+
+struct pstore_dev {
+	struct list_head	list;
+	struct virtio_device	vdev;
+
+	int			fd;
+	DIR			*dir;
+
+	/* virtio queue */
+	struct virt_queue	vqs[NUM_VIRT_QUEUES];
+	struct pstore_dev_job	jobs[NUM_VIRT_QUEUES];
+};
+
+static LIST_HEAD(pdevs);
+static int compat_id = -1;
+
+static u8 *get_config(struct kvm *kvm, void *dev)
+{
+	/* Unused */
+	return 0;
+}
+
+static u32 get_host_features(struct kvm *kvm, void *dev)
+{
+	/* Unused */
+	return 0;
+}
+
+static void set_guest_features(struct kvm *kvm, void *dev, u32 features)
+{
+	/* Unused */
+}
+
+static void virtio_pstore_hdr_to_filename(struct kvm *kvm, struct pstore_hdr *hdr,
+					  char *buf, size_t sz)
+{
+	const char *basename;
+
+	switch (hdr->type) {
+	case VIRTIO_PSTORE_TYPE_DMESG:
+		basename = "dmesg";
+		break;
+	default:
+		basename = "unknown";
+		break;
+	}
+
+	snprintf(buf, sz, "%s/%s-%llu%s", kvm->cfg.pstore_path, basename,
+		 hdr->id, hdr->flags & VIRTIO_PSTORE_FL_COMPRESSED ? ".enc.z" : "");
+}
+
+static void virtio_pstore_filename_to_hdr(struct kvm *kvm, struct pstore_hdr *hdr,
+					  char *name, char *buf, size_t sz)
+{
+	size_t len = strlen(name);
+
+	hdr->flags = 0;
+	if (len > 6 && !strncmp(name + len - 6, ".enc.z", 6))
+		hdr->flags |= VIRTIO_PSTORE_FL_COMPRESSED;
+
+	snprintf(buf, sz, "%s/%s", kvm->cfg.pstore_path, name);
+
+	if (!strncmp(name, "dmesg", 5)) {
+		hdr->type = VIRTIO_PSTORE_TYPE_DMESG;
+		name += 5;
+	} else if (!strncmp(name, "unknown", 7)) {
+		hdr->type = VIRTIO_PSTORE_TYPE_UNKNOWN;
+		name += 7;
+	}
+
+	hdr->id = strtoul(name + 1, NULL, 0);
+}
+
+static int virtio_pstore_do_open(struct kvm *kvm, struct pstore_dev *pdev,
+				 struct pstore_hdr *hdr, struct iovec *iov)
+{
+	pdev->dir = opendir(kvm->cfg.pstore_path);
+	if (pdev->dir == NULL)
+		return -errno;
+
+	return 0;
+}
+
+static int virtio_pstore_do_close(struct kvm *kvm, struct pstore_dev *pdev,
+				   struct pstore_hdr *hdr, struct iovec *iov)
+{
+	if (pdev->dir == NULL)
+		return -1;
+
+	closedir(pdev->dir);
+	pdev->dir = NULL;
+
+	return 0;
+}
+
+static ssize_t virtio_pstore_do_write(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct pstore_hdr *hdr, struct iovec *iov)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+
+	virtio_pstore_hdr_to_filename(kvm, hdr, path, sizeof(path));
+
+	fp = fopen(path, "a");
+	if (fp == NULL)
+		return -1;
+
+	/* write byte-wise so len is a byte count; check ferror() for errors */
+	len = fwrite(iov[1].iov_base, 1, iov[1].iov_len, fp);
+	if (ferror(fp))
+		len = 0;
+
+	fclose(fp);
+	return len;
+}
+
+static ssize_t virtio_pstore_do_read(struct kvm *kvm, struct pstore_dev *pdev,
+				     struct pstore_hdr *hdr, struct iovec *iov)
+{
+	char path[PATH_MAX];
+	FILE *fp;
+	ssize_t len = 0;
+	struct stat stbuf;
+	struct dirent *dent;
+
+	if (pdev->dir == NULL)
+		return 0;
+
+	dent = readdir(pdev->dir);
+	while (dent) {
+		if (dent->d_name[0] != '.')
+			break;
+		dent = readdir(pdev->dir);
+	}
+
+	if (dent == NULL)
+		return 0;
+
+	virtio_pstore_filename_to_hdr(kvm, hdr, dent->d_name, path, sizeof(path));
+	if (stat(path, &stbuf) < 0)
+		return -1;
+
+	fp = fopen(path, "r");
+	if (fp == NULL)
+		return -1;
+
+	len = fread(iov[1].iov_base, 1, iov[1].iov_len, fp);
+	if (ferror(fp))
+		len = 0;
+
+	hdr->id  = virtio_host_to_guest_u64(pdev->vqs, hdr->id);
+	hdr->flags  = virtio_host_to_guest_u32(pdev->vqs, hdr->flags);
+
+	hdr->time_sec  = virtio_host_to_guest_u64(pdev->vqs, stbuf.st_ctim.tv_sec);
+	hdr->time_nsec = virtio_host_to_guest_u32(pdev->vqs, stbuf.st_ctim.tv_nsec);
+
+	fclose(fp);
+	return len;
+}
+
+static ssize_t virtio_pstore_do_erase(struct kvm *kvm, struct pstore_dev *pdev,
+				      struct pstore_hdr *hdr, struct iovec *iov)
+{
+	char path[PATH_MAX];
+
+	virtio_pstore_hdr_to_filename(kvm, hdr, path, sizeof(path));
+
+	return unlink(path);
+}
+
+static bool virtio_pstore_do_io_request(struct kvm *kvm, struct pstore_dev *pdev,
+					struct virt_queue *vq)
+{
+	struct iovec iov[VIRTIO_PSTORE_QUEUE_SIZE];
+	struct pstore_hdr *hdr;
+	ssize_t len = 0;
+	u16 out, in, head;
+
+	head = virt_queue__get_iov(vq, iov, &out, &in, kvm);
+
+	hdr = iov[0].iov_base;
+
+	switch (virtio_guest_to_host_u16(vq, hdr->cmd)) {
+	case VIRTIO_PSTORE_CMD_OPEN:
+		len = virtio_pstore_do_open(kvm, pdev, hdr, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_READ:
+		len = virtio_pstore_do_read(kvm, pdev, hdr, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_WRITE:
+		len = virtio_pstore_do_write(kvm, pdev, hdr, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_CLOSE:
+		virtio_pstore_do_close(kvm, pdev, hdr, iov);
+		break;
+	case VIRTIO_PSTORE_CMD_ERASE:
+		len = virtio_pstore_do_erase(kvm, pdev, hdr, iov);
+		break;
+	default:
+		return false;
+	}
+
+	if (len < 0)
+		return false;
+
+	virt_queue__set_used_elem(vq, head, len);
+
+	return true;
+}
+
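+/*
+ * Runs as a thread-pool job queued from notify_vq(); processes all
+ * pending requests and then signals the guest.
+ */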
+static void virtio_pstore_do_io(struct kvm *kvm, void *param)
+{
+	struct pstore_dev_job *job	= param;
+	struct virt_queue *vq		= job->vq;
+	struct pstore_dev *pdev		= job->pdev;
+
+	while (virt_queue__available(vq))
+		virtio_pstore_do_io_request(kvm, pdev, vq);
+
+	pdev->vdev.ops->signal_vq(kvm, &pdev->vdev, vq - pdev->vqs);
+}
+
+static int init_vq(struct kvm *kvm, void *dev, u32 vq, u32 page_size, u32 align,
+		   u32 pfn)
+{
+	struct pstore_dev *pdev = dev;
+	struct virt_queue *queue;
+	struct pstore_dev_job *job;
+	void *p;
+
+	compat__remove_message(compat_id);
+
+	queue		= &pdev->vqs[vq];
+	queue->pfn	= pfn;
+	p		= virtio_get_vq(kvm, queue->pfn, page_size);
+
+	job = &pdev->jobs[vq];
+
+	vring_init(&queue->vring, VIRTIO_PSTORE_QUEUE_SIZE, p, align);
+
+	*job = (struct pstore_dev_job) {
+		.vq	= queue,
+		.pdev	= pdev,
+	};
+
+	thread_pool__init_job(&job->job_id, kvm, virtio_pstore_do_io, job);
+
+	return 0;
+}
+
+static int notify_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+
+	thread_pool__do_job(&pdev->jobs[vq].job_id);
+
+	return 0;
+}
+
+static int get_pfn_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	struct pstore_dev *pdev = dev;
+
+	return pdev->vqs[vq].pfn;
+}
+
+static int get_size_vq(struct kvm *kvm, void *dev, u32 vq)
+{
+	return VIRTIO_PSTORE_QUEUE_SIZE;
+}
+
+static int set_size_vq(struct kvm *kvm, void *dev, u32 vq, int size)
+{
+	/* FIXME: dynamic */
+	return size;
+}
+
+static struct virtio_ops pstore_dev_virtio_ops = {
+	.get_config		= get_config,
+	.get_host_features	= get_host_features,
+	.set_guest_features	= set_guest_features,
+	.init_vq		= init_vq,
+	.notify_vq		= notify_vq,
+	.get_pfn_vq		= get_pfn_vq,
+	.get_size_vq		= get_size_vq,
+	.set_size_vq		= set_size_vq,
+};
+
+int virtio_pstore__init(struct kvm *kvm)
+{
+	struct pstore_dev *pdev;
+	int r;
+
+	if (!kvm->cfg.pstore_path)
+		return 0;
+
+	pdev = calloc(1, sizeof(*pdev));	/* start with pdev->dir == NULL */
+	if (pdev == NULL)
+		return -ENOMEM;
+
+	r = virtio_init(kvm, pdev, &pdev->vdev, &pstore_dev_virtio_ops,
+			VIRTIO_DEFAULT_TRANS(kvm), PCI_DEVICE_ID_VIRTIO_PSTORE,
+			VIRTIO_ID_PSTORE, PCI_CLASS_PSTORE);
+	if (r < 0)
+		goto cleanup;
+
+	list_add_tail(&pdev->list, &pdevs);
+
+	if (compat_id == -1)
+		compat_id = virtio_compat_add_message("virtio-pstore", "CONFIG_VIRTIO_PSTORE");
+	return 0;
+cleanup:
+	free(pdev);
+
+	return r;
+}
+virtio_dev_init(virtio_pstore__init);
+
+int virtio_pstore__exit(struct kvm *kvm)
+{
+	struct pstore_dev *pdev, *tmp;
+
+	list_for_each_entry_safe(pdev, tmp, &pdevs, list) {
+		list_del(&pdev->list);
+		pdev->vdev.ops->exit(kvm, &pdev->vdev);
+		free(pdev);
+	}
+
+	return 0;
+}
+virtio_dev_exit(virtio_pstore__exit);
-- 
2.8.0

Thread overview: 12+ messages
2016-09-04 14:38 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v5) Namhyung Kim
2016-09-04 14:38 ` [PATCH 1/3] virtio: Basic implementation of virtio pstore driver Namhyung Kim
2016-09-08 20:49   ` Kees Cook
2016-09-22 11:57   ` Stefan Hajnoczi
2016-09-23  5:48     ` Namhyung Kim
2016-09-04 14:38 ` [PATCH 2/3] qemu: Implement virtio-pstore device Namhyung Kim
2016-09-22 12:23   ` Stefan Hajnoczi
2016-09-23  5:52     ` Namhyung Kim
2016-09-04 14:39 ` [PATCH 3/3] kvmtool: " Namhyung Kim
  -- strict thread matches above, loose matches on Subject: below --
2016-08-31  8:07 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v4) Namhyung Kim
2016-08-31  8:08 ` [PATCH 3/3] kvmtool: Implement virtio-pstore device Namhyung Kim
2016-08-20  8:07 [RFC/PATCHSET 0/3] virtio: Implement virtio pstore device (v3) Namhyung Kim
2016-08-20  8:07 ` [PATCH 3/3] kvmtool: Implement virtio-pstore device Namhyung Kim
2016-07-18  4:37 [RFC/PATCHSET 0/3] virtio-pstore: Implement virtio pstore device Namhyung Kim
2016-07-18  4:37 ` [PATCH 3/3] kvmtool: Implement virtio-pstore device Namhyung Kim
