From: Jim Fehlig <jfehlig@suse.com>
To: xen-devel <xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Ken Johnson <ken@suse.com>
Subject: [RFC] support more qdisk types
Date: Mon, 25 Jan 2016 17:25:02 -0700	[thread overview]
Message-ID: <56A6BCDE.6040900@suse.com> (raw)

[-- Attachment #1: Type: text/plain, Size: 1868 bytes --]

I would like to hear the community's opinion on supporting more qdisk types in
xl/libxl, e.g. nbd, rbd, and iSCSI. I prefer supporting additional qdisk types
in libxl over having apps like xl or libvirt do all the setup, produce a block
device, and then pass that to libxl; each libxl app would have to
re-implement functionality already provided by qdisk. libxl already supports
IDE, AHCI, SCSI, and Xen PV qdisks. My suggestion is to extend that to initially
include nbd, rbd, and iSCSI. Sheepdog, ssh, etc. could be added in the future.
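For reference, QEMU already encodes these network disks in its own filename
syntax, roughly as follows (exact forms depend on the QEMU version; the "encode
everything in target=" extreme below would essentially pass such strings
through):

```text
nbd:<host>:<port>[:exportname=<export>]
rbd:<pool>/<image>[:<option>=<value>...]
iscsi://[<user>[%<password>]@]<host>[:<port>]/<target-iqn>/<lun>
```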

I considered several approaches to supporting additional qdisk types, based
primarily on changes to the disk cfg and interface. At one extreme is to change
nothing and use the existing 'target=' to encode all required config for the
additional qdisk types. libxl would need to be taught how to turn the blob into
an appropriate qdisk. At the other extreme is extending xl-disk-configuration
with discrete knobs for each possible config item and making the
libxl_device_disk structure more hierarchical. E.g.

libxl_device_disk {
    ... existing
    libxl_device_disk_source src;
}

libxl_device_disk_source {
    libxl_disk_source_protocol protocol;
    int num_hosts;
    libxl_disk_source_host *hosts;
    libxl_disk_source_auth auth;
}

enum libxl_disk_source_protocol {
    LIBXL_DISK_SOURCE_PROTOCOL_UNKNOWN = 0,
    LIBXL_DISK_SOURCE_PROTOCOL_NBD = 1,
    LIBXL_DISK_SOURCE_PROTOCOL_RBD = 2,
    LIBXL_DISK_SOURCE_PROTOCOL_ISCSI = 3,
}

libxl_disk_source_host {
    char *name;
    int port;
}

libxl_disk_source_auth {
    char *user;
    char *data;
}

As an initial RFC, I took a stab at something in the middle, adding a few items
to the xl-disk-configuration and libxl_device_disk. Attached is a patch to the
doc and IDL illustrating the proposal.
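For illustration, a guest disk using the middle-ground knobs from the attached
patch might be configured along these lines (hypothetical syntax; host name,
user, and target are placeholders):

```text
disk = [ 'backendtype=qdisk,backendprotocol=rbd,server=mon1.example.com:6789,auth=libvirt:<base64-cephx-key>,vdev=xvda,format=raw,target=pool/image' ]
```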

Suggestions, comments, and feedback warmly welcomed.

Regards,
Jim


[-- Attachment #2: RFC-support-more-qdisk-types.patch --]
[-- Type: text/x-patch, Size: 3405 bytes --]

From 3a6aeb434506c620dd122b9ff19656bcdd35f081 Mon Sep 17 00:00:00 2001
From: Jim Fehlig <jfehlig@suse.com>
Date: Mon, 25 Jan 2016 16:57:42 -0700
Subject: [PATCH] [RFC] support more qdisk types

Extend xl-disk-configuration to include settings needed to support
nbd, rbd, and iSCSI qdisks. Add corresponding fields to the
libxl_device_disk structure.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 docs/misc/xl-disk-configuration.txt | 43 +++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl         | 12 +++++++++++
 2 files changed, 55 insertions(+)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index 6a2118d..87a6560 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -168,6 +168,49 @@ Normally this option should not be specified, in which case libxl will
 automatically determine the most suitable backend.
 
 
+backendprotocol=<backend-protocol>
+----------------------------------
+
+Description:           Specifies the protocol used by the qdisk backend
+                       when accessing a network-based disk
+Supported values:      nbd, rbd, iscsi
+Mandatory:             No
+
+backendprotocol is only supported by the qdisk backendtype. nbd uses
+QEMU's internal network block device implementation. rbd uses QEMU's
+integration with librados to natively access block devices on Ceph
+clusters. iscsi uses QEMU's integration with libiscsi to access iSCSI
+resources.
+
+
+server=<host:port>
+------------------
+
+Description:           Specifies a host and port providing access to a
+                       network-based disk
+Supported values:      Valid host:port pairs
+Mandatory:             No
+
+server is only supported by the qdisk backendtype. host can be a valid
+hostname or host IP address. Some backend protocols, such as rbd, allow
+specifying multiple servers for accessing a network-based disk.
+
+
+auth=<user:data>
+----------------
+
+Description:           Specifies authentication information for a
+                       network-based disk
+Supported values:      backendprotocol dependent
+Mandatory:             No
+
+auth is only supported by the qdisk backendtype. rbd supports cephx
+authentication, where 'user' would be the Ceph user and 'data' the
+user's base64-encoded cephx key obtained with 'ceph auth get-key <user>'.
+iscsi supports CHAP username/password, in which case 'data' contains the
+user's password.
+
+
 script=<script>
 ---------------
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9ad7eba..44a6e06 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -123,6 +123,13 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (3, "QDISK"),
     ])
 
+libxl_disk_backend_protocol = Enumeration("disk_backend_protocol", [
+    (0, "UNKNOWN"),
+    (1, "nbd"),
+    (2, "rbd"),
+    (3, "iscsi"),
+    ])
+
 libxl_nic_type = Enumeration("nic_type", [
     (0, "UNKNOWN"),
     (1, "VIF_IOEMU"),
@@ -568,6 +575,11 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("backend_protocol", libxl_disk_backend_protocol),
+    ("server", string),
+    ("port", integer),
+    ("auth_user", string),
+    ("auth_data", string),
     ])
 
 libxl_device_nic = Struct("device_nic", [
-- 
2.1.4


[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Thread overview: 25+ messages
2016-01-26  0:25 Jim Fehlig [this message]
2016-01-27 18:32 ` [RFC] support more qdisk types Konrad Rzeszutek Wilk
2016-01-27 20:25   ` Doug Goldstein
2016-01-27 21:09     ` Konrad Rzeszutek Wilk
2016-01-28  2:42       ` Jim Fehlig
2016-01-29 14:07         ` Konrad Rzeszutek Wilk
2016-01-29 17:18           ` Jim Fehlig
2016-01-29 17:59             ` Konrad Rzeszutek Wilk
2016-01-28  2:37     ` Jim Fehlig
2016-01-29 14:21       ` Doug Goldstein
2016-01-28  2:27   ` Jim Fehlig
2016-02-02 14:59 ` Wei Liu
2016-02-02 22:06   ` Jim Fehlig
2016-02-03  9:56     ` Ian Campbell
2016-02-04  2:53       ` Jim Fehlig
2016-02-04 10:16         ` Ian Campbell
2016-02-09  0:54         ` Jim Fehlig
2016-02-09  9:35           ` Ian Campbell
2016-02-09 10:58           ` Ian Jackson
2016-02-03 10:35     ` Wei Liu
2016-02-03 10:51       ` Ian Campbell
2016-02-03 10:55         ` Wei Liu
2016-02-03 11:05           ` Ian Campbell
2016-02-03 11:08             ` Wei Liu
2016-02-03 11:15             ` Roger Pau Monné
