linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/4] Userspace Passthrough backstore for LIO
@ 2014-09-15 23:12 Andy Grover
  2014-09-15 23:12 ` [PATCH 1/4] target: Remove unneeded check in sbc_parse_cdb Andy Grover
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Andy Grover @ 2014-09-15 23:12 UTC (permalink / raw)
  To: target-devel; +Cc: linux-scsi, hch, nab, shli, linux-kernel, rjones

Hi all,

Here is version three of the userspace passthrough backstore code I've
been working on. I think it's ready to be considered for upstream.

If you are wondering what this is, patch 3 contains documentation
that should explain it, at both a high and a low level.

There are other pieces needed for this to work properly -- changes to
targetcli and rtslib to easily configure the new backstore type, plus
the user code that hooks up to the passthrough. I've written some code
to do this, called tcmu-runner
(https://github.com/agrover/tcmu-runner) but the userspace development
naturally is hindered until the kernel code is accepted :)

Changes since version 2:
* Incorporate doc improvements from Richard W. M. Jones
* Correct off-by-1 error in tcmu_get_blocks
* execute_rw must be set in the sbc_ops passed to sbc_parse_cdb

Thanks -- Andy

Andy Grover (4):
  target: Remove unneeded check in sbc_parse_cdb
  uio: Export definition of struct uio_device
  target: Add documentation on the target userspace pass-through driver
  target: Add a user-passthrough backstore

 Documentation/target/tcmu-design.txt   |  239 +++++++
 drivers/target/Kconfig                 |    5 +
 drivers/target/Makefile                |    1 +
 drivers/target/target_core_sbc.c       |    2 +-
 drivers/target/target_core_transport.c |    4 +
 drivers/target/target_core_user.c      | 1169 ++++++++++++++++++++++++++++++++
 drivers/uio/uio.c                      |   12 -
 include/linux/uio_driver.h             |   12 +-
 include/uapi/linux/Kbuild              |    1 +
 include/uapi/linux/target_core_user.h  |  142 ++++
 10 files changed, 1573 insertions(+), 14 deletions(-)
 create mode 100644 Documentation/target/tcmu-design.txt
 create mode 100644 drivers/target/target_core_user.c
 create mode 100644 include/uapi/linux/target_core_user.h

-- 
1.9.3


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/4] target: Remove unneeded check in sbc_parse_cdb
  2014-09-15 23:12 [PATCH 0/4] Userspace Passthrough backstore for LIO Andy Grover
@ 2014-09-15 23:12 ` Andy Grover
  2014-09-15 23:12 ` [PATCH 2/4] uio: Export definition of struct uio_device Andy Grover
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2014-09-15 23:12 UTC (permalink / raw)
  To: target-devel; +Cc: linux-scsi, hch, nab, shli, linux-kernel, rjones

The check of SCF_SCSI_DATA_CDB seems to be a remnant from before hch's
refactoring of this function. There is no place where that flag is set
without cmd->execute_cmd also being set.

Signed-off-by: Andy Grover <agrover@redhat.com>
---
 drivers/target/target_core_sbc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
index bd78d92..ebe62af 100644
--- a/drivers/target/target_core_sbc.c
+++ b/drivers/target/target_core_sbc.c
@@ -948,7 +948,7 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
 	}
 
 	/* reject any command that we don't have a handler for */
-	if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) && !cmd->execute_cmd)
+	if (!cmd->execute_cmd)
 		return TCM_UNSUPPORTED_SCSI_OPCODE;
 
 	if (cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) {
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/4] uio: Export definition of struct uio_device
  2014-09-15 23:12 [PATCH 0/4] Userspace Passthrough backstore for LIO Andy Grover
  2014-09-15 23:12 ` [PATCH 1/4] target: Remove unneeded check in sbc_parse_cdb Andy Grover
@ 2014-09-15 23:12 ` Andy Grover
  2014-09-15 23:12 ` [PATCH 3/4] target: Add documentation on the target userspace pass-through driver Andy Grover
  2014-09-15 23:12 ` [PATCH 4/4] target: Add a user-passthrough backstore Andy Grover
  3 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2014-09-15 23:12 UTC (permalink / raw)
  To: target-devel; +Cc: linux-scsi, hch, nab, shli, linux-kernel, rjones

In order to prevent an O(n) search of the filesystem to link up its uio
node with its target configuration, TCMU needs to know the minor number
that UIO assigned. Expose the definition of this struct so TCMU can
access this field.

Signed-off-by: Andy Grover <agrover@redhat.com>
---
 drivers/uio/uio.c          | 12 ------------
 include/linux/uio_driver.h | 12 +++++++++++-
 2 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index a673e5b..60fa627 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -28,18 +28,6 @@
 
 #define UIO_MAX_DEVICES		(1U << MINORBITS)
 
-struct uio_device {
-	struct module		*owner;
-	struct device		*dev;
-	int			minor;
-	atomic_t		event;
-	struct fasync_struct	*async_queue;
-	wait_queue_head_t	wait;
-	struct uio_info		*info;
-	struct kobject		*map_dir;
-	struct kobject		*portio_dir;
-};
-
 static int uio_major;
 static struct cdev *uio_cdev;
 static DEFINE_IDR(uio_idr);
diff --git a/include/linux/uio_driver.h b/include/linux/uio_driver.h
index 1ad4724..baa8171 100644
--- a/include/linux/uio_driver.h
+++ b/include/linux/uio_driver.h
@@ -63,7 +63,17 @@ struct uio_port {
 
 #define MAX_UIO_PORT_REGIONS	5
 
-struct uio_device;
+struct uio_device {
+	struct module		*owner;
+	struct device		*dev;
+	int			minor;
+	atomic_t		event;
+	struct fasync_struct	*async_queue;
+	wait_queue_head_t	wait;
+	struct uio_info		*info;
+	struct kobject		*map_dir;
+	struct kobject		*portio_dir;
+};
 
 /**
  * struct uio_info - UIO device capabilities
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/4] target: Add documentation on the target userspace pass-through driver
  2014-09-15 23:12 [PATCH 0/4] Userspace Passthrough backstore for LIO Andy Grover
  2014-09-15 23:12 ` [PATCH 1/4] target: Remove unneeded check in sbc_parse_cdb Andy Grover
  2014-09-15 23:12 ` [PATCH 2/4] uio: Export definition of struct uio_device Andy Grover
@ 2014-09-15 23:12 ` Andy Grover
  2014-09-15 23:12 ` [PATCH 4/4] target: Add a user-passthrough backstore Andy Grover
  3 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2014-09-15 23:12 UTC (permalink / raw)
  To: target-devel; +Cc: linux-scsi, hch, nab, shli, linux-kernel, rjones

Describes the driver and its interface to make it possible for user
programs to back a LIO-exported LUN.

Thanks to Richard W. M. Jones for review, and supplementing this doc
with the first two paragraphs.

Signed-off-by: Andy Grover <agrover@redhat.com>
---
 Documentation/target/tcmu-design.txt | 239 +++++++++++++++++++++++++++++++++++
 1 file changed, 239 insertions(+)
 create mode 100644 Documentation/target/tcmu-design.txt

diff --git a/Documentation/target/tcmu-design.txt b/Documentation/target/tcmu-design.txt
new file mode 100644
index 0000000..92662be
--- /dev/null
+++ b/Documentation/target/tcmu-design.txt
@@ -0,0 +1,239 @@
+TCM Userspace Design
+--------------------
+
+TCM is another name for LIO, an in-kernel iSCSI target (server).
+Existing TCM targets run in the kernel.  TCMU (TCM in Userspace)
+allows userspace programs to be written which act as iSCSI targets.
+This document describes the design.
+
+The existing kernel provides modules for different SCSI transport
+protocols.  TCM also modularizes the data storage.  There are existing
+modules for file, block device, RAM or using another SCSI device as
+storage.  These are called "backstores" or "storage engines".  These
+built-in modules are implemented entirely as kernel code.
+
+Background:
+
+In addition to modularizing the transport protocol used for carrying
+SCSI commands ("fabrics"), the Linux kernel target, LIO, also modularizes
+the actual data storage as well. These are referred to as "backstores"
+or "storage engines". The target comes with backstores that allow a
+file, a block device, RAM, or another SCSI device to be used for the
+local storage needed for the exported SCSI LUN. Like the rest of LIO,
+these are implemented entirely as kernel code.
+
+These backstores cover the most common use cases, but not all. One new
+use case that other non-kernel target solutions, such as tgt, are able
+to support is using Gluster's GLFS or Ceph's RBD as a backstore. The
+target then serves as a translator, allowing initiators to store data
+in these non-traditional networked storage systems, while still only
+using standard protocols themselves.
+
+If the target is a userspace process, supporting these is easy. tgt,
+for example, needs only a small adapter module for each, because the
+modules just use the available userspace libraries for RBD and GLFS.
+
+Adding support for these backstores in LIO is considerably more
+difficult, because LIO is entirely kernel code. Instead of undertaking
+the significant work to port the GLFS or RBD APIs and protocols to the
+kernel, another approach is to create a userspace pass-through
+backstore for LIO, "TCMU".
+
+
+Benefits:
+
+In addition to allowing relatively easy support for RBD and GLFS, TCMU
+will also allow easier development of new backstores. TCMU combines
+with the LIO loopback fabric to become something similar to FUSE
+(Filesystem in Userspace), but at the SCSI layer instead of the
+filesystem layer. A SUSE, if you will.
+
+The disadvantage is there are more distinct components to configure, and
+potentially to malfunction. This is unavoidable, but hopefully not
+fatal if we're careful to keep things as simple as possible.
+
+Design constraints:
+
+- Good performance: high throughput, low latency
+- Cleanly handle if userspace:
+   1) never attaches
+   2) hangs
+   3) dies
+   4) misbehaves
+- Allow future flexibility in user & kernel implementations
+- Be reasonably memory-efficient
+- Simple to configure & run
+- Simple to write a userspace backend
+
+
+Implementation overview:
+
+The core of the TCMU interface is a memory region that is shared
+between kernel and userspace. Within this region is: a control area
+(mailbox); a lockless producer/consumer circular buffer for commands
+to be passed up, and status returned; and an in/out data buffer area.
+
+TCMU uses the pre-existing UIO subsystem. UIO allows device driver
+development in userspace, and this is conceptually very close to the
+TCMU use case, except instead of a physical device, TCMU implements a
+memory-mapped layout designed for SCSI commands. Using UIO also
+benefits TCMU by handling device introspection (e.g. a way for
+userspace to determine how large the shared region is) and signaling
+mechanisms in both directions.
+
+There are no embedded pointers in the memory region. Everything is
+expressed as an offset from the region's starting address. This allows
+the ring to still work if the user process dies and is restarted with
+the region mapped at a different virtual address.
+
+See target_core_user.h for the struct definitions.
+
+The Mailbox:
+
+The mailbox is always at the start of the shared memory region, and
+contains a version, details about the starting offset and size of the
+command ring, and head and tail pointers to be used by the kernel and
+userspace (respectively) to put commands on the ring, and indicate
+when the commands are completed.
+
+version - 1 (userspace should abort if otherwise)
+flags - none yet defined.
+cmdr_off - The offset of the start of the command ring from the start
+of the memory region, to account for the mailbox size.
+cmdr_size - The size of the command ring. This does *not* need to be a
+power of two.
+cmd_head - Modified by the kernel to indicate when a command has been
+placed on the ring.
+cmd_tail - Modified by userspace to indicate when it has completed
+processing of a command.
+
+The Command Ring:
+
+Commands are placed on the ring by the kernel incrementing
+mailbox.cmd_head by the size of the command, modulo cmdr_size, and
+then signaling userspace via uio_event_notify(). Once the command is
+completed, userspace updates mailbox.cmd_tail in the same way and
+signals the kernel via a 4-byte write(). When cmd_head equals
+cmd_tail, the ring is empty -- no commands are currently waiting to be
+processed by userspace.
+
+TCMU commands start with a common header containing "len_op", a 32-bit
+value that stores the length, as well as the opcode in the lowest
+unused bits. Currently only two opcodes are defined, TCMU_OP_PAD and
+TCMU_OP_CMD. When userspace encounters a command with the PAD opcode,
+it should skip ahead by the number of bytes given in "length". (The
+kernel inserts PAD entries to ensure each CMD entry fits contiguously
+into the circular buffer.)
+
+When userspace handles a CMD, it finds the SCSI CDB (Command
+Descriptor Block) via tcmu_cmd_entry.req.cdb_off. This is an offset
+from the start of the overall shared memory region, not the entry. The
+data in/out buffers are accessible via the req.iov[] array. Note that
+each iov.iov_base is also an offset from the start of the region.
+
+TCMU currently does not support BIDI operations.
+
+When completing a command, userspace sets rsp.scsi_status, and
+rsp.sense_buffer if necessary. Userspace then increments
+mailbox.cmd_tail by entry.hdr.length (mod cmdr_size) and signals the
+kernel via the UIO method, a 4-byte write to the file descriptor.
+
+The Data Area:
+
+This is shared-memory space after the command ring. The organization
+of this area is not defined in the TCMU interface, and userspace
+should access only the parts referenced by pending iovs.
+
+
+Device Discovery:
+
+Other devices may be using UIO besides TCMU. Unrelated user processes
+may also be handling different sets of TCMU devices. TCMU userspace
+processes must find their devices by scanning sysfs
+class/uio/uio*/name. For TCMU devices, these names will be of the
+format:
+
+tcm-user/<hba_num>/<device_name>/<subtype>/<path>
+
+where "tcm-user" is common for all TCMU-backed UIO devices. <hba_num>
+and <device_name> allow userspace to find the device's path in the
+kernel target's configfs tree. Assuming the usual mount point, it is
+found at:
+
+/sys/kernel/config/target/core/user_<hba_num>/<device_name>
+
+This location contains attributes, such as "hw_block_size", that
+userspace needs to know for correct operation.
+
+<subtype> will be a userspace-process-unique string to identify the
+TCMU device as expecting to be backed by a certain handler, and <path>
+will be an additional handler-specific string for the user process to
+configure the device, if needed. The name cannot contain ':', due to
+LIO limitations.
+
+For all devices so discovered, the user handler opens /dev/uioX and
+calls mmap():
+
+mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0)
+
+where size must be equal to the value read from
+/sys/class/uio/uioX/maps/map0/size.
+
+
+Device Events:
+
+If a new device is added or removed, a notification will be broadcast
+over netlink, using a generic netlink family name of "TCM-USER" and a
+multicast group named "config". This will include the UIO name as
+described in the previous section, as well as the UIO minor
+number. This should allow userspace to identify both the UIO device and
+the LIO device, so that after determining the device is supported
+(based on subtype) it can take the appropriate action.
+
+
+Other contingencies:
+
+Userspace handler process never attaches:
+
+- TCMU will post commands, and then abort them after a timeout period
+  (30 seconds).
+
+Userspace handler process is killed:
+
+- It is still possible to restart and reconnect to TCMU devices. The
+  command ring is preserved. However, after the timeout period, the
+  kernel will abort pending tasks.
+
+Userspace handler process hangs:
+
+- The kernel will abort pending tasks after a timeout period.
+
+Userspace handler process is malicious:
+
+- The process can trivially break the handling of devices it controls,
+  but should not be able to access kernel memory outside its shared
+  memory areas.
+
+
+Writing a user backstore handler:
+
+First, consider instead writing a plugin for tcmu-runner. tcmu-runner
+implements the device enumeration and ring processing, and provides a
+higher-level API to plugin writers. If you're determined to start from
+scratch, it should at least provide working reference code for using
+the user-kernel interface. Multiple unrelated processes can run
+concurrently with disjoint sets of TCMU devices, as long as they
+differentiate "their" devices based on the subtype string.
+
+All SCSI commands that the device receives are presented to userspace
+via the command ring.
+
+However, in order to reduce the amount of boilerplate code needed,
+handlers can opt to complete a command with CHECK CONDITION status and
+INVALID COMMAND OPERATION CODE sense, and TCMU will then attempt to
+emulate the command using LIO's in-kernel support. See tcmu-runner for
+an example of this.
+
+Finally, be careful to return status codes as defined by the SCSI
+specifications. These differ from some of the values defined in the
+scsi/scsi.h include file. For example, CHECK CONDITION's status code
+is 2, not 1.
-- 
1.9.3


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-15 23:12 [PATCH 0/4] Userspace Passthrough backstore for LIO Andy Grover
                   ` (2 preceding siblings ...)
  2014-09-15 23:12 ` [PATCH 3/4] target: Add documentation on the target userspace pass-through driver Andy Grover
@ 2014-09-15 23:12 ` Andy Grover
  2014-09-19 21:14   ` Nicholas A. Bellinger
  3 siblings, 1 reply; 13+ messages in thread
From: Andy Grover @ 2014-09-15 23:12 UTC (permalink / raw)
  To: target-devel; +Cc: linux-scsi, hch, nab, shli, linux-kernel, rjones

Add a LIO storage engine that presents commands to userspace for execution.
This would allow more complex backstores to be implemented out-of-kernel,
and also make experimentation a la FUSE (but at the SCSI level -- "SUSE"?)
possible.

It uses a mmap()able UIO device per LUN to share a command ring and data
area. The commands are raw SCSI CDBs and iovs for in/out data. The command
ring is also reused for returning scsi command status and optional sense
data.

This implementation is based on Shaohua Li's earlier version but heavily
modified. Differences include:

* Shared memory allocated by kernel, not locked-down user pages
* Single ring for command request and response
* Offsets instead of embedded pointers
* Generic SCSI CDB passthrough instead of per-cmd specialization in ring
  format.
* Uses UIO device instead of anon_file passed in mailbox.
* Optional in-kernel handling of some commands.

The main reason for these differences is to permit greater resiliency
if the user process dies or hangs.

Things not yet implemented (on purpose):

* Zero copy. The data area is flexible enough to allow page flipping or
  backend-allocated pages to be used by fabrics, but it's not clear these
  are performance wins. Can come later.
* Out-of-order command completion by userspace. Possible to add by just
  allowing userspace to change cmd_id in rsp cmd entries, but currently
  not supported.
* No locks between kernel cmd submission and completion routines. Sounds
  like it's possible, but this can come later.
* Sparse allocation of mmaped area. Current code vmallocs the whole thing.
  If the mapped area was larger and not fully mapped then the driver would
  have more freedom to change cmd and data area sizes based on demand.

Current code open issues:

* The use of idrs may be overkill -- we could probably replace them with
  a simple counter to generate cmd_ids, and a hash table to get a
  cmd_id's associated pointer.
* Use of a free-running counter for cmd ring instead of explicit modulo
  math. This would require power-of-2 cmd ring size.

Signed-off-by: Andy Grover <agrover@redhat.com>
---
 drivers/target/Kconfig                 |    5 +
 drivers/target/Makefile                |    1 +
 drivers/target/target_core_transport.c |    4 +
 drivers/target/target_core_user.c      | 1169 ++++++++++++++++++++++++++++++++
 include/uapi/linux/Kbuild              |    1 +
 include/uapi/linux/target_core_user.h  |  142 ++++
 6 files changed, 1322 insertions(+)
 create mode 100644 drivers/target/target_core_user.c
 create mode 100644 include/uapi/linux/target_core_user.h

diff --git a/drivers/target/Kconfig b/drivers/target/Kconfig
index dc2d84a..b03a845 100644
--- a/drivers/target/Kconfig
+++ b/drivers/target/Kconfig
@@ -31,6 +31,11 @@ config TCM_PSCSI
 	Say Y here to enable the TCM/pSCSI subsystem plugin for non-buffered
 	passthrough access to Linux/SCSI device
 
+config TCM_USER
+	tristate "TCM/USER Subsystem Plugin for Linux"
+	help
+	Say Y here to enable the TCM/USER subsystem plugin
+
 source "drivers/target/loopback/Kconfig"
 source "drivers/target/tcm_fc/Kconfig"
 source "drivers/target/iscsi/Kconfig"
diff --git a/drivers/target/Makefile b/drivers/target/Makefile
index 85b012d..bbb4a7d 100644
--- a/drivers/target/Makefile
+++ b/drivers/target/Makefile
@@ -22,6 +22,7 @@ obj-$(CONFIG_TARGET_CORE)	+= target_core_mod.o
 obj-$(CONFIG_TCM_IBLOCK)	+= target_core_iblock.o
 obj-$(CONFIG_TCM_FILEIO)	+= target_core_file.o
 obj-$(CONFIG_TCM_PSCSI)		+= target_core_pscsi.o
+obj-$(CONFIG_TCM_USER)		+= target_core_user.o
 
 # Fabric modules
 obj-$(CONFIG_LOOPBACK_TARGET)	+= loopback/
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index 7fa62fc..f018a8c 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -232,6 +232,10 @@ void transport_subsystem_check_init(void)
 	if (ret != 0)
 		pr_err("Unable to load target_core_pscsi\n");
 
+	ret = request_module("target_core_user");
+	if (ret != 0)
+		pr_err("Unable to load target_core_user\n");
+
 	sub_api_initialized = 1;
 }
 
diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
new file mode 100644
index 0000000..33b56a8
--- /dev/null
+++ b/drivers/target/target_core_user.c
@@ -0,0 +1,1169 @@
+/*
+ * Copyright (C) 2013 Shaohua Li <shli@kernel.org>
+ * Copyright (C) 2014 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/idr.h>
+#include <linux/timer.h>
+#include <linux/parser.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <linux/uio_driver.h>
+#include <net/genetlink.h>
+#include <target/target_core_base.h>
+#include <target/target_core_fabric.h>
+#include <target/target_core_backend.h>
+#include <linux/target_core_user.h>
+
+/*
+ * Define a shared-memory interface for LIO to pass SCSI commands and
+ * data to userspace for processing. This is to allow backends that
+ * are too complex for in-kernel support to be possible.
+ *
+ * It uses the UIO framework to do a lot of the device-creation and
+ * introspection work for us.
+ *
+ * See the .h file for how the ring is laid out. Note that while the
+ * command ring is defined, the particulars of the data area are
+ * not. Offset values in the command entry point to other locations
+ * internal to the mmap()ed area. There is separate space outside the
+ * command ring for data buffers. This leaves maximum flexibility for
+ * moving buffer allocations, or even page flipping or other
+ * allocation techniques, without altering the command ring layout.
+ *
+ * SECURITY:
+ * The user process must be assumed to be malicious. There's no way to
+ * prevent it breaking the command ring protocol if it wants, but in
+ * order to prevent other issues we must only ever read *data* from
+ * the shared memory area, not offsets or sizes. This applies to
+ * command ring entries as well as the mailbox. Extra code needed for
+ * this may have a 'UAM' comment.
+ */
+
+
+#define TCMU_TIME_OUT (30 * MSEC_PER_SEC)
+
+#define CMDR_SIZE (16 * 4096)
+#define DATA_SIZE (257 * 4096)
+
+#define TCMU_RING_SIZE (CMDR_SIZE + DATA_SIZE)
+
+static void tcmu_work(struct work_struct *work);
+
+struct device *tcmu_root_device;
+
+struct tcmu_hba {
+	u32 host_id;
+};
+
+#define TCMU_CONFIG_LEN 256
+
+struct tcmu_dev {
+	struct se_device se_dev;
+
+	char *name;
+	struct se_hba *hba;
+
+#define TCMU_DEV_BIT_OPEN 0
+#define TCMU_DEV_BIT_BROKEN 1
+	unsigned long flags;
+
+	struct uio_info uio_info;
+
+	struct tcmu_mailbox *mb_addr;
+	size_t dev_size;
+	u32 cmdr_size;
+	u32 cmdr_last_cleaned;
+	/* Offset of data ring from start of mb */
+	size_t data_off;
+	size_t data_size;
+	/* Ring head + tail values. */
+	/* Must add data_off and mb_addr to get the address */
+	size_t data_head;
+	size_t data_tail;
+
+	wait_queue_head_t wait_cmdr;
+	/* TODO should this be a mutex? */
+	spinlock_t cmdr_lock;
+
+	struct idr commands;
+	spinlock_t commands_lock;
+
+	struct timer_list timeout;
+
+	struct kref ref;
+
+	char dev_config[TCMU_CONFIG_LEN];
+};
+
+#define TCMU_DEV(_se_dev) container_of(_se_dev, struct tcmu_dev, se_dev)
+
+#define CMDR_OFF sizeof(struct tcmu_mailbox)
+
+struct tcmu_cmd {
+	struct se_cmd *se_cmd;
+	struct tcmu_dev *tcmu_dev;
+
+	uint16_t cmd_id;
+
+	/* Can't use se_cmd->data_length when cleaning up expired cmds, because if
+	   cmd has been completed then accessing se_cmd is off limits */
+	size_t data_length;
+
+	unsigned long deadline;
+
+	struct work_struct work;
+
+#define TCMU_CMD_BIT_EXPIRED 0
+	unsigned long flags;
+};
+
+static struct kmem_cache *tcmu_cmd_cache;
+
+/* multicast group */
+enum tcmu_multicast_groups {
+	TCMU_MCGRP_CONFIG,
+};
+
+static const struct genl_multicast_group tcmu_mcgrps[] = {
+	[TCMU_MCGRP_CONFIG] = { .name = "config", },
+};
+
+/* Our generic netlink family */
+static struct genl_family tcmu_genl_family = {
+	.id = GENL_ID_GENERATE,
+	.hdrsize = 0,
+	.name = "TCM-USER",
+	.version = 1,
+	.maxattr = TCMU_ATTR_MAX,
+	.mcgrps = tcmu_mcgrps,
+	.n_mcgrps = ARRAY_SIZE(tcmu_mcgrps),
+};
+
+static void tcmu_destroy_device(struct kref *kref)
+{
+	struct tcmu_dev *udev = container_of(kref, struct tcmu_dev, ref);
+
+	kfree(udev);
+}
+
+static struct tcmu_cmd *tcmu_alloc_cmd(struct se_cmd *se_cmd)
+{
+	struct se_device *se_dev = se_cmd->se_dev;
+	struct tcmu_dev *udev = TCMU_DEV(se_dev);
+	struct tcmu_cmd *tcmu_cmd;
+	int cmd_id;
+
+	tcmu_cmd = kmem_cache_zalloc(tcmu_cmd_cache, GFP_KERNEL);
+	if (!tcmu_cmd)
+		return NULL;
+
+	tcmu_cmd->se_cmd = se_cmd;
+	tcmu_cmd->tcmu_dev = udev;
+	tcmu_cmd->data_length = se_cmd->data_length;
+
+	tcmu_cmd->deadline = jiffies + msecs_to_jiffies(TCMU_TIME_OUT);
+
+	INIT_WORK(&tcmu_cmd->work, tcmu_work);
+
+	idr_preload(GFP_KERNEL);
+	spin_lock_irq(&udev->commands_lock);
+	cmd_id = idr_alloc(&udev->commands, tcmu_cmd, 0,
+		USHRT_MAX, GFP_NOWAIT);
+	spin_unlock_irq(&udev->commands_lock);
+	idr_preload_end();
+
+	if (cmd_id < 0) {
+		kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
+		return NULL;
+	}
+	tcmu_cmd->cmd_id = cmd_id;
+
+	return tcmu_cmd;
+}
+
+static inline void flush_dcache_range(void *vaddr, size_t size)
+{
+	unsigned long offset = (unsigned long) vaddr & ~PAGE_MASK;
+
+	size = round_up(size+offset, PAGE_SIZE);
+	vaddr -= offset;
+
+	while (size) {
+		flush_dcache_page(virt_to_page(vaddr));
+		size -= PAGE_SIZE;
+		vaddr += PAGE_SIZE;
+	}
+}
+
+/*
+ * Some ring helper functions. We don't assume size is a power of 2 so
+ * we can't use circ_buf.h.
+ */
+static inline size_t spc_used(size_t head, size_t tail, size_t size)
+{
+	int diff = head - tail;
+
+	if (diff >= 0)
+		return diff;
+	else
+		return size + diff;
+}
+
+static inline size_t spc_free(size_t head, size_t tail, size_t size)
+{
+	/* Keep 1 byte unused or we can't tell full from empty */
+	return (size - spc_used(head, tail, size) - 1);
+}
+
+static inline size_t head_to_end(size_t head, size_t size)
+{
+	return size - head;
+}
+
+#define UPDATE_HEAD(head, used, size) smp_store_release(&head, ((head % size) + used) % size)
+
+/*
+ * We can't queue a command until we have space available on the cmd
+ * ring *and* space available on the data ring.
+ *
+ * Called with ring lock held.
+ */
+static bool is_ring_space_avail(struct tcmu_dev *udev, size_t cmd_needed, size_t data_needed)
+{
+	struct tcmu_mailbox *mb = udev->mb_addr;
+	size_t space;
+	u32 cmd_head;
+
+	flush_dcache_range(mb, sizeof(*mb));
+
+	cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
+
+	space = spc_free(cmd_head, udev->cmdr_last_cleaned, udev->cmdr_size);
+	if (space < cmd_needed) {
+		pr_err("no cmd space: %u %u %u\n", cmd_head,
+		       udev->cmdr_last_cleaned, udev->cmdr_size);
+		return false;
+	}
+
+	space = spc_free(udev->data_head, udev->data_tail, udev->data_size);
+	if (space < data_needed) {
+		pr_err("no data space: %zu %zu %zu\n", udev->data_head,
+		       udev->data_tail, udev->data_size);
+		return false;
+	}
+
+	return true;
+}
+
+static int tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+{
+	struct tcmu_dev *udev = tcmu_cmd->tcmu_dev;
+	struct se_cmd *se_cmd = tcmu_cmd->se_cmd;
+	size_t base_command_size, command_size;
+	size_t cmdr_space_needed;
+	struct tcmu_mailbox *mb;
+	size_t pad_size;
+	struct tcmu_cmd_entry *entry;
+	int i;
+	struct scatterlist *sg;
+	struct iovec *iov;
+	int iov_cnt = 0;
+	uint32_t cmd_head;
+	uint64_t cdb_off;
+
+	if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags))
+		return -EINVAL;
+
+	/*
+	 * Must be a certain minimum size for response sense info, but
+	 * also may be larger if the iov array is large.
+	 *
+	 * iovs = sgl_nents + 1 for the end-of-ring case, plus another 1
+	 * because size == offsetof of the one-past-last element.
+	 */
+	base_command_size = max(offsetof(struct tcmu_cmd_entry,
+					 req.iov[se_cmd->t_data_nents + 2]),
+				sizeof(struct tcmu_cmd_entry));
+	command_size = base_command_size
+		+ round_up(scsi_command_size(se_cmd->t_task_cdb), TCMU_OP_ALIGN_SIZE);
+
+	WARN_ON(command_size & (TCMU_OP_ALIGN_SIZE-1));
+
+	mb = udev->mb_addr;
+
+	spin_lock_irq(&udev->cmdr_lock);
+
+	cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
+	if ((command_size > (udev->cmdr_size / 2))
+	    || tcmu_cmd->data_length > (udev->data_size - 1))
+		pr_warn("TCMU: Request of size %zu/%zu may be too big for %u/%zu "
+			"cmd/data ring buffers\n", command_size, tcmu_cmd->data_length,
+			udev->cmdr_size, udev->data_size);
+
+	/*
+	 * If cmd end-of-ring space is too small, we need space for a PAD
+	 * entry plus the original cmd - cmds are internally contiguous.
+	 */
+	if (head_to_end(cmd_head, udev->cmdr_size) >= command_size)
+		pad_size = 0;
+	else
+		pad_size = head_to_end(cmd_head, udev->cmdr_size);
+	cmdr_space_needed = command_size + pad_size;
+
+	while (!is_ring_space_avail(udev, cmdr_space_needed, tcmu_cmd->data_length)) {
+		int ret;
+		DEFINE_WAIT(__wait);
+
+		prepare_to_wait(&udev->wait_cmdr, &__wait, TASK_INTERRUPTIBLE);
+
+		pr_debug("sleeping for ring space\n");
+		spin_unlock_irq(&udev->cmdr_lock);
+		ret = schedule_timeout(msecs_to_jiffies(TCMU_TIME_OUT));
+		finish_wait(&udev->wait_cmdr, &__wait);
+		if (!ret) {
+			pr_warn("tcmu: command timed out\n");
+			return -ETIMEDOUT;
+		}
+
+		spin_lock_irq(&udev->cmdr_lock);
+
+		/* We dropped cmdr_lock, cmd_head is stale */
+		cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
+	}
+
+	if (pad_size) {
+		entry = (void *) mb + CMDR_OFF + cmd_head;
+		flush_dcache_range(entry, sizeof(*entry));
+		tcmu_hdr_set_op(&entry->hdr, TCMU_OP_PAD);
+		tcmu_hdr_set_len(&entry->hdr, pad_size);
+
+		UPDATE_HEAD(mb->cmd_head, pad_size, udev->cmdr_size);
+
+		cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
+		WARN_ON(cmd_head != 0);
+	}
+
+	entry = (void *) mb + CMDR_OFF + cmd_head;
+	flush_dcache_range(entry, sizeof(*entry));
+	tcmu_hdr_set_op(&entry->hdr, TCMU_OP_CMD);
+	tcmu_hdr_set_len(&entry->hdr, command_size);
+	entry->cmd_id = tcmu_cmd->cmd_id;
+
+	/*
+	 * Fix up iovecs, and handle if allocation in data ring wrapped.
+	 */
+	iov = &entry->req.iov[0];
+	for_each_sg(se_cmd->t_data_sg, sg, se_cmd->t_data_nents, i) {
+		size_t copy_bytes = min((size_t)sg->length,
+				     head_to_end(udev->data_head, udev->data_size));
+		void *from = kmap_atomic(sg_page(sg)) + sg->offset;
+		void *to = (void *)mb + udev->data_off + udev->data_head;
+
+		if (tcmu_cmd->se_cmd->data_direction == DMA_TO_DEVICE) {
+			memcpy(to, from, copy_bytes);
+			flush_dcache_range(to, copy_bytes);
+		}
+
+		/* Even iov_base is relative to mb_addr */
+		iov->iov_len = copy_bytes;
+		iov->iov_base = (void *) udev->data_off + udev->data_head;
+		iov_cnt++;
+		iov++;
+
+		UPDATE_HEAD(udev->data_head, copy_bytes, udev->data_size);
+
+		/* Uh oh, we wrapped the buffer. Must split sg across 2 iovs. */
+		if (sg->length != copy_bytes) {
+			from += copy_bytes;
+			copy_bytes = sg->length - copy_bytes;
+
+			iov->iov_len = copy_bytes;
+			iov->iov_base = (void *) udev->data_off + udev->data_head;
+
+			if (se_cmd->data_direction == DMA_TO_DEVICE) {
+				to = (void *) mb + udev->data_off + udev->data_head;
+				memcpy(to, from, copy_bytes);
+				flush_dcache_range(to, copy_bytes);
+			}
+
+			iov_cnt++;
+			iov++;
+
+			UPDATE_HEAD(udev->data_head, copy_bytes, udev->data_size);
+		}
+
+		kunmap_atomic(from);
+	}
+	entry->req.iov_cnt = iov_cnt;
+
+	/* All offsets relative to mb_addr, not start of entry! */
+	cdb_off = CMDR_OFF + cmd_head + base_command_size;
+	memcpy((void *)mb + cdb_off, se_cmd->t_task_cdb, scsi_command_size(se_cmd->t_task_cdb));
+	entry->req.cdb_off = cdb_off;
+	flush_dcache_range(entry, sizeof(*entry));
+
+	UPDATE_HEAD(mb->cmd_head, command_size, udev->cmdr_size);
+	flush_dcache_range(mb, sizeof(*mb));
+
+	spin_unlock_irq(&udev->cmdr_lock);
+
+	/* TODO: only if FLUSH and FUA? */
+	uio_event_notify(&udev->uio_info);
+
+	mod_timer(&udev->timeout,
+		round_jiffies_up(jiffies + msecs_to_jiffies(TCMU_TIME_OUT)));
+
+	return 0;
+}
+
+static int tcmu_queue_cmd(struct se_cmd *se_cmd)
+{
+	struct se_device *se_dev = se_cmd->se_dev;
+	struct tcmu_dev *udev = TCMU_DEV(se_dev);
+	struct tcmu_cmd *tcmu_cmd;
+	int ret;
+
+	tcmu_cmd = tcmu_alloc_cmd(se_cmd);
+	if (!tcmu_cmd)
+		return -ENOMEM;
+
+	ret = tcmu_queue_cmd_ring(tcmu_cmd);
+	if (ret < 0) {
+		pr_err("TCMU: Could not queue command\n");
+		spin_lock_irq(&udev->commands_lock);
+		idr_remove(&udev->commands, tcmu_cmd->cmd_id);
+		spin_unlock_irq(&udev->commands_lock);
+
+		kmem_cache_free(tcmu_cmd_cache, tcmu_cmd);
+	}
+
+	return ret;
+}
+
+/* Core requires execute_rw be set, but just return unsupported */
+static sense_reason_t
+tcmu_retry_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
+	      enum dma_data_direction data_direction)
+{
+	return TCM_UNSUPPORTED_SCSI_OPCODE;
+}
+
+static struct sbc_ops tcmu_retry_ops = {
+	.execute_rw		= tcmu_retry_rw,
+};
+
+static void tcmu_work(struct work_struct *work)
+{
+	struct tcmu_cmd *cmd = container_of(work, struct tcmu_cmd, work);
+	struct se_cmd *se_cmd = cmd->se_cmd;
+
+	target_execute_cmd(se_cmd);
+	kmem_cache_free(tcmu_cmd_cache, cmd);
+}
+
+static void tcmu_emulate_cmd(struct tcmu_cmd *cmd)
+{
+	struct se_cmd *se_cmd = cmd->se_cmd;
+	sense_reason_t ret;
+	unsigned long flags;
+
+	/* Re-run parsing to set execute_cmd to value for possible emulation */
+	se_cmd->execute_cmd = NULL;
+
+	/*
+	 * transport_generic_request_failure can't be called while flags
+	 * indicate the command is still being handled by us, so clear
+	 * them first.
+	 */
+	spin_lock_irqsave(&se_cmd->t_state_lock, flags);
+	se_cmd->transport_state &= ~(CMD_T_BUSY | CMD_T_SENT);
+	spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
+
+	ret = sbc_parse_cdb(se_cmd, &tcmu_retry_ops);
+	if (ret == TCM_NO_SENSE && se_cmd->execute_cmd) {
+		schedule_work(&cmd->work);
+	} else {
+		/* Can't emulate. */
+		transport_generic_request_failure(se_cmd, ret);
+		kmem_cache_free(tcmu_cmd_cache, cmd);
+	}
+}
+
+static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *entry)
+{
+	struct se_cmd *se_cmd = cmd->se_cmd;
+	struct tcmu_dev *udev = cmd->tcmu_dev;
+
+	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {
+	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags)) {
+		/*
+		 * cmd has been completed already from timeout, just
+		 * reclaim data ring space
+		 */
+		UPDATE_HEAD(udev->data_tail, cmd->data_length, udev->data_size);
+		return;
+	}
+
+	if (entry->rsp.scsi_status == SAM_STAT_CHECK_CONDITION) {
+		u8 *sense = entry->rsp.sense_buffer;
+
+		/*
+		 * Userspace can indicate it doesn't support an opcode with a
+		 * specific status, in which case we retry in the kernel.
+		 * Otherwise, just pass it back to initiator.
+		 *
+		 * CHECK_CONDITION: INVALID COMMAND OPERATION CODE
+		 */
+		if (sense[0] == 0x70 && sense[2] == 0x5 && sense[7] == 0xa
+		    && sense[12] == 0x20 && sense[13] == 0x0) {
+			UPDATE_HEAD(udev->data_tail, cmd->data_length, udev->data_size);
+			tcmu_emulate_cmd(cmd);
+			return;
+		}
+
+		memcpy(se_cmd->sense_buffer, entry->rsp.sense_buffer,
+			       se_cmd->scsi_sense_length);
+	}
+
+	if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+		struct scatterlist *sg;
+		int i;
+
+		/* It'd be easier to look at entry's iovec again, but UAM */
+		for_each_sg(se_cmd->t_data_sg, sg, se_cmd->t_data_nents, i) {
+			size_t copy_bytes;
+			void *to;
+			void *from;
+
+			copy_bytes = min((size_t)sg->length,
+					 head_to_end(udev->data_tail, udev->data_size));
+
+			to = kmap_atomic(sg_page(sg)) + sg->offset;
+			WARN_ON(sg->length + sg->offset > PAGE_SIZE);
+			from = (void *) udev->mb_addr + udev->data_off + udev->data_tail;
+			flush_dcache_range(from, copy_bytes);
+			memcpy(to, from, copy_bytes);
+
+			UPDATE_HEAD(udev->data_tail, copy_bytes, udev->data_size);
+
+			/* Uh oh, wrapped the data buffer for this sg's data */
+			if (sg->length != copy_bytes) {
+				from = (void *) udev->mb_addr + udev->data_off + udev->data_tail;
+				WARN_ON(udev->data_tail);
+				to += copy_bytes;
+				copy_bytes = sg->length - copy_bytes;
+				flush_dcache_range(from, copy_bytes);
+				memcpy(to, from, copy_bytes);
+
+				UPDATE_HEAD(udev->data_tail, copy_bytes, udev->data_size);
+			}
+
+			kunmap_atomic(to);
+		}
+
+	} else if (se_cmd->data_direction == DMA_TO_DEVICE) {
+		UPDATE_HEAD(udev->data_tail, cmd->data_length, udev->data_size);
+	} else {
+		pr_warn("TCMU: data direction was %d!\n", se_cmd->data_direction);
+	}
+
+	target_complete_cmd(cmd->se_cmd, entry->rsp.scsi_status);
+	cmd->se_cmd = NULL;
+
+	kmem_cache_free(tcmu_cmd_cache, cmd);
+}
+
+static unsigned int tcmu_handle_completions(struct tcmu_dev *udev)
+{
+	struct tcmu_mailbox *mb;
+	LIST_HEAD(cpl_cmds);
+	unsigned long flags;
+	int handled = 0;
+
+	if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags)) {
+		pr_err("ring broken, not handling completions\n");
+		return 0;
+	}
+
+	spin_lock_irqsave(&udev->cmdr_lock, flags);
+
+	mb = udev->mb_addr;
+	flush_dcache_range(mb, sizeof(*mb));
+
+	while (udev->cmdr_last_cleaned != ACCESS_ONCE(mb->cmd_tail)) {
+
+		struct tcmu_cmd_entry *entry = (void *) mb + CMDR_OFF + udev->cmdr_last_cleaned;
+		struct tcmu_cmd *cmd;
+
+		flush_dcache_range(entry, sizeof(*entry));
+
+		if (tcmu_hdr_get_op(&entry->hdr) == TCMU_OP_PAD) {
+			UPDATE_HEAD(udev->cmdr_last_cleaned, tcmu_hdr_get_len(&entry->hdr), udev->cmdr_size);
+			continue;
+		}
+		WARN_ON(tcmu_hdr_get_op(&entry->hdr) != TCMU_OP_CMD);
+
+		spin_lock(&udev->commands_lock);
+		cmd = idr_find(&udev->commands, entry->cmd_id);
+		if (cmd)
+			idr_remove(&udev->commands, cmd->cmd_id);
+		spin_unlock(&udev->commands_lock);
+
+		if (!cmd) {
+			pr_err("cmd_id not found, ring is broken\n");
+			set_bit(TCMU_DEV_BIT_BROKEN, &udev->flags);
+			break;
+		}
+
+		tcmu_handle_completion(cmd, entry);
+
+		UPDATE_HEAD(udev->cmdr_last_cleaned, tcmu_hdr_get_len(&entry->hdr), udev->cmdr_size);
+
+		handled++;
+	}
+
+	if (mb->cmd_tail == mb->cmd_head)
+		del_timer(&udev->timeout); /* no more pending cmds */
+
+	spin_unlock_irqrestore(&udev->cmdr_lock, flags);
+
+	wake_up(&udev->wait_cmdr);
+
+	return handled;
+}
+
+static int tcmu_check_expired_cmd(int id, void *p, void *data)
+{
+	struct tcmu_cmd *cmd = p;
+
+	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
+		return 0;
+
+	if (!time_after(cmd->deadline, jiffies))
+		return 0;
+
+	set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
+	target_complete_cmd(cmd->se_cmd, SAM_STAT_CHECK_CONDITION);
+	cmd->se_cmd = NULL;
+
+	return 0;
+}
+
+static void tcmu_device_timedout(unsigned long data)
+{
+	struct tcmu_dev *udev = (struct tcmu_dev *)data;
+	unsigned long flags;
+	int handled;
+
+	handled = tcmu_handle_completions(udev);
+
+	pr_warn("%d completions handled from timeout\n", handled);
+
+	spin_lock_irqsave(&udev->commands_lock, flags);
+	idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
+	spin_unlock_irqrestore(&udev->commands_lock, flags);
+
+	/*
+	 * We don't need to wakeup threads on wait_cmdr since they have their
+	 * own timeout.
+	 */
+}
+
+static int tcmu_attach_hba(struct se_hba *hba, u32 host_id)
+{
+	struct tcmu_hba *tcmu_hba;
+
+	tcmu_hba = kzalloc(sizeof(struct tcmu_hba), GFP_KERNEL);
+	if (!tcmu_hba)
+		return -ENOMEM;
+
+	tcmu_hba->host_id = host_id;
+	hba->hba_ptr = tcmu_hba;
+
+	return 0;
+}
+
+static void tcmu_detach_hba(struct se_hba *hba)
+{
+	kfree(hba->hba_ptr);
+	hba->hba_ptr = NULL;
+}
+
+static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
+{
+	struct tcmu_dev *udev;
+
+	udev = kzalloc(sizeof(struct tcmu_dev), GFP_KERNEL);
+	if (!udev)
+		return NULL;
+
+	udev->name = kstrdup(name, GFP_KERNEL);
+	if (!udev->name) {
+		kfree(udev);
+		return NULL;
+	}
+
+	udev->hba = hba;
+
+	init_waitqueue_head(&udev->wait_cmdr);
+	spin_lock_init(&udev->cmdr_lock);
+
+	idr_init(&udev->commands);
+	spin_lock_init(&udev->commands_lock);
+
+	setup_timer(&udev->timeout, tcmu_device_timedout,
+		(unsigned long)udev);
+
+	kref_init(&udev->ref);
+
+	return &udev->se_dev;
+}
+
+static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
+{
+	struct tcmu_dev *tcmu_dev = container_of(info, struct tcmu_dev, uio_info);
+
+	tcmu_handle_completions(tcmu_dev);
+
+	return 0;
+}
+
+/*
+ * mmap code from uio.c. Copied here because we want to hook mmap()
+ * and this stuff must come along.
+ */
+static int tcmu_find_mem_index(struct vm_area_struct *vma)
+{
+	struct tcmu_dev *udev = vma->vm_private_data;
+	struct uio_info *info = &udev->uio_info;
+
+	if (vma->vm_pgoff < MAX_UIO_MAPS) {
+		if (info->mem[vma->vm_pgoff].size == 0)
+			return -1;
+		return (int)vma->vm_pgoff;
+	}
+	return -1;
+}
+
+static int tcmu_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+	struct tcmu_dev *udev = vma->vm_private_data;
+	struct uio_info *info = &udev->uio_info;
+	struct page *page;
+	unsigned long offset;
+	void *addr;
+
+	int mi = tcmu_find_mem_index(vma);
+	if (mi < 0)
+		return VM_FAULT_SIGBUS;
+
+	/*
+	 * We need to subtract mi because userspace uses offset = N*PAGE_SIZE
+	 * to use mem[N].
+	 */
+	offset = (vmf->pgoff - mi) << PAGE_SHIFT;
+
+	addr = (void *)(unsigned long)info->mem[mi].addr + offset;
+	if (info->mem[mi].memtype == UIO_MEM_LOGICAL)
+		page = virt_to_page(addr);
+	else
+		page = vmalloc_to_page(addr);
+	get_page(page);
+	vmf->page = page;
+	return 0;
+}
+
+static const struct vm_operations_struct tcmu_vm_ops = {
+	.fault = tcmu_vma_fault,
+};
+
+static int tcmu_mmap(struct uio_info *info, struct vm_area_struct *vma)
+{
+	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
+
+	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+	vma->vm_ops = &tcmu_vm_ops;
+
+	vma->vm_private_data = udev;
+
+	/* Ensure the mmap is exactly the right size */
+	if (vma_pages(vma) != (TCMU_RING_SIZE >> PAGE_SHIFT))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int tcmu_open(struct uio_info *info, struct inode *inode)
+{
+	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
+
+	/* O_EXCL not supported for char devs, so fake it? */
+	if (test_and_set_bit(TCMU_DEV_BIT_OPEN, &udev->flags))
+		return -EBUSY;
+
+	pr_debug("open\n");
+
+	return 0;
+}
+
+static int tcmu_release(struct uio_info *info, struct inode *inode)
+{
+	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
+
+	clear_bit(TCMU_DEV_BIT_OPEN, &udev->flags);
+
+	pr_debug("close\n");
+
+	return 0;
+}
+
+static int tcmu_netlink_event(enum tcmu_genl_cmd cmd, const char *name, int minor)
+{
+	struct sk_buff *skb;
+	void *msg_header;
+	int ret;
+
+	skb = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+	if (!skb)
+		return -ENOMEM;
+
+	msg_header = genlmsg_put(skb, 0, 0, &tcmu_genl_family, 0, cmd);
+	if (!msg_header) {
+		nlmsg_free(skb);
+		return -ENOMEM;
+	}
+
+	ret = nla_put_string(skb, TCMU_ATTR_DEVICE, name);
+	if (ret < 0)
+		goto free_skb;
+
+	ret = nla_put_u32(skb, TCMU_ATTR_MINOR, minor);
+	if (ret < 0)
+		goto free_skb;
+
+	ret = genlmsg_end(skb, msg_header);
+	if (ret < 0)
+		goto free_skb;
+
+	return genlmsg_multicast(&tcmu_genl_family, skb, 0,
+		TCMU_MCGRP_CONFIG, GFP_KERNEL);
+
+free_skb:
+	nlmsg_free(skb);
+	return ret;
+}
+
+static int tcmu_configure_device(struct se_device *dev)
+{
+	struct tcmu_dev *udev = TCMU_DEV(dev);
+	struct tcmu_hba *hba = udev->hba->hba_ptr;
+	struct uio_info *info;
+	struct tcmu_mailbox *mb;
+	size_t size;
+	size_t used;
+	int ret = 0;
+	char *str;
+
+	info = &udev->uio_info;
+
+	size = snprintf(NULL, 0, "tcm-user/%u/%s/%s", hba->host_id, udev->name, udev->dev_config);
+	size += 1; /* for \0 */
+	str = kmalloc(size, GFP_KERNEL);
+	if (!str)
+		return -ENOMEM;
+
+	used = snprintf(str, size, "tcm-user/%u/%s", hba->host_id, udev->name);
+
+	if (udev->dev_config[0])
+		snprintf(str + used, size - used, "/%s", udev->dev_config);
+
+	info->name = str;
+
+	udev->mb_addr = vzalloc(TCMU_RING_SIZE);
+	if (!udev->mb_addr) {
+		kfree(info->name);
+		return -ENOMEM;
+	}
+
+	/* mailbox fits in first part of CMDR space */
+	udev->cmdr_size = CMDR_SIZE - CMDR_OFF;
+	udev->data_off = CMDR_SIZE;
+	udev->data_size = TCMU_RING_SIZE - CMDR_SIZE;
+
+	mb = udev->mb_addr;
+	mb->version = TCMU_MAILBOX_VERSION;
+	mb->cmdr_off = CMDR_OFF;
+	mb->cmdr_size = udev->cmdr_size;
+
+	WARN_ON(!PAGE_ALIGNED(udev->data_off));
+	WARN_ON(udev->data_size % PAGE_SIZE);
+
+	info->version = "1";
+
+	info->mem[0].name = "tcm-user command & data buffer";
+	info->mem[0].addr = (phys_addr_t) udev->mb_addr;
+	info->mem[0].size = TCMU_RING_SIZE;
+	info->mem[0].memtype = UIO_MEM_VIRTUAL;
+
+	info->irqcontrol = tcmu_irqcontrol;
+	info->irq = UIO_IRQ_CUSTOM;
+
+	info->mmap = tcmu_mmap;
+	info->open = tcmu_open;
+	info->release = tcmu_release;
+
+	ret = uio_register_device(tcmu_root_device, info);
+	if (ret) {
+		kfree(info->name);
+		vfree(udev->mb_addr);
+		return ret;
+	}
+
+	/* Other attributes can be configured in userspace */
+	dev->dev_attrib.hw_block_size = 4096;
+	dev->dev_attrib.hw_max_sectors = 128;
+	dev->dev_attrib.hw_queue_depth = 128;
+
+	tcmu_netlink_event(TCMU_CMD_ADDED_DEVICE, udev->uio_info.name,
+			   udev->uio_info.uio_dev->minor);
+
+	return ret;
+}
+
+static int tcmu_check_pending_cmd(int id, void *p, void *data)
+{
+	struct tcmu_cmd *cmd = p;
+
+	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
+		return 0;
+	return -EINVAL;
+}
+
+static void tcmu_free_device(struct se_device *dev)
+{
+	int i;
+	struct tcmu_dev *udev = TCMU_DEV(dev);
+
+	del_timer_sync(&udev->timeout);
+
+	vfree(udev->mb_addr);
+
+	/* upper layer should drain all requests before calling this */
+	spin_lock_irq(&udev->commands_lock);
+	i = idr_for_each(&udev->commands, tcmu_check_pending_cmd, NULL);
+	idr_destroy(&udev->commands);
+	spin_unlock_irq(&udev->commands_lock);
+	WARN_ON(i);
+
+	tcmu_netlink_event(TCMU_CMD_REMOVED_DEVICE, udev->uio_info.name,
+			   udev->uio_info.uio_dev->minor);
+
+	uio_unregister_device(&udev->uio_info);
+
+	kfree(udev->uio_info.name);
+
+	kfree(udev->name);
+
+	kref_put(&udev->ref, tcmu_destroy_device);
+}
+
+enum {
+	Opt_dev_config, Opt_dev_size, Opt_err,
+};
+
+static match_table_t tokens = {
+	{Opt_dev_config, "dev_config=%s"},
+	{Opt_dev_size, "dev_size=%u"},
+	{Opt_err, NULL}
+};
+
+static ssize_t tcmu_set_configfs_dev_params(struct se_device *dev,
+		const char *page, ssize_t count)
+{
+	struct tcmu_dev *udev = TCMU_DEV(dev);
+	char *orig, *ptr, *opts, *arg_p;
+	substring_t args[MAX_OPT_ARGS];
+	int ret = 0, token;
+
+	opts = kstrdup(page, GFP_KERNEL);
+	if (!opts)
+		return -ENOMEM;
+
+	orig = opts;
+
+	while ((ptr = strsep(&opts, ",\n")) != NULL) {
+		if (!*ptr)
+			continue;
+
+		token = match_token(ptr, tokens, args);
+		switch (token) {
+		case Opt_dev_config:
+			if (match_strlcpy(udev->dev_config, &args[0],
+				TCMU_CONFIG_LEN) == 0) {
+				ret = -EINVAL;
+				break;
+			}
+			pr_debug("TCMU: Referencing Path: %s\n", udev->dev_config);
+			break;
+		case Opt_dev_size:
+			arg_p = match_strdup(&args[0]);
+			if (!arg_p) {
+				ret = -ENOMEM;
+				break;
+			}
+			ret = kstrtoul(arg_p, 0, &udev->dev_size);
+			kfree(arg_p);
+			if (ret < 0)
+				pr_err("kstrtoul() failed for dev_size=\n");
+			break;
+		default:
+			break;
+		}
+	}
+
+	kfree(orig);
+	return (!ret) ? count : ret;
+}
+
+static ssize_t tcmu_show_configfs_dev_params(struct se_device *dev, char *b)
+{
+	struct tcmu_dev *udev = TCMU_DEV(dev);
+	ssize_t bl = 0;
+
+	bl = sprintf(b + bl, "Config: %s ",
+		     udev->dev_config[0] ? udev->dev_config : "NULL");
+	bl += sprintf(b + bl, "Size: %zu\n", udev->dev_size);
+
+	return bl;
+}
+
+static sector_t tcmu_get_blocks(struct se_device *dev)
+{
+	struct tcmu_dev *udev = TCMU_DEV(dev);
+
+	return div_u64(udev->dev_size - dev->dev_attrib.block_size,
+		       dev->dev_attrib.block_size);
+}
+
+static sense_reason_t
+tcmu_pass_op(struct se_cmd *se_cmd)
+{
+	int ret = tcmu_queue_cmd(se_cmd);
+
+	if (ret != 0)
+		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+	else
+		return TCM_NO_SENSE;
+}
+
+static sense_reason_t
+tcmu_parse_cdb(struct se_cmd *cmd)
+{
+	unsigned char *cdb = cmd->t_task_cdb;
+
+	/*
+	 * Like pscsi, pass almost everything through: for REPORT LUNS we
+	 * always need to emulate the response, but everything else is
+	 * passed up.
+	 */
+	switch (cdb[0]) {
+	case REPORT_LUNS:
+		cmd->execute_cmd = spc_emulate_report_luns;
+		break;
+	case READ_6:
+	case READ_10:
+	case READ_12:
+	case READ_16:
+	case WRITE_6:
+	case WRITE_10:
+	case WRITE_12:
+	case WRITE_16:
+	case WRITE_VERIFY:
+		cmd->se_cmd_flags |= SCF_SCSI_DATA_CDB;
+		/* FALLTHROUGH */
+	default:
+		cmd->execute_cmd = tcmu_pass_op;
+	}
+
+	return TCM_NO_SENSE;
+}
+
+static struct se_subsystem_api tcmu_template = {
+	.name			= "user",
+	.inquiry_prod		= "USER",
+	.inquiry_rev		= TCMU_VERSION,
+	.owner			= THIS_MODULE,
+	.transport_type		= TRANSPORT_PLUGIN_VHBA_PDEV,
+	.attach_hba		= tcmu_attach_hba,
+	.detach_hba		= tcmu_detach_hba,
+	.alloc_device		= tcmu_alloc_device,
+	.configure_device	= tcmu_configure_device,
+	.free_device		= tcmu_free_device,
+	.parse_cdb		= tcmu_parse_cdb,
+	.set_configfs_dev_params = tcmu_set_configfs_dev_params,
+	.show_configfs_dev_params = tcmu_show_configfs_dev_params,
+	.get_device_type	= sbc_get_device_type,
+	.get_blocks		= tcmu_get_blocks,
+};
+
+static int __init tcmu_module_init(void)
+{
+	int ret;
+
+	BUILD_BUG_ON((sizeof(struct tcmu_cmd_entry) % TCMU_OP_ALIGN_SIZE) != 0);
+
+	tcmu_cmd_cache = kmem_cache_create("tcmu_cmd_cache",
+				sizeof(struct tcmu_cmd),
+				__alignof__(struct tcmu_cmd),
+				0, NULL);
+	if (!tcmu_cmd_cache)
+		return -ENOMEM;
+
+	tcmu_root_device = root_device_register("tcm_user");
+	if (IS_ERR(tcmu_root_device)) {
+		ret = PTR_ERR(tcmu_root_device);
+		goto out_free_cache;
+	}
+
+	ret = genl_register_family(&tcmu_genl_family);
+	if (ret < 0) {
+		goto out_unreg_device;
+	}
+
+	ret = transport_subsystem_register(&tcmu_template);
+	if (ret)
+		goto out_unreg_genl;
+
+	return 0;
+
+out_unreg_genl:
+	genl_unregister_family(&tcmu_genl_family);
+out_unreg_device:
+	root_device_unregister(tcmu_root_device);
+out_free_cache:
+	kmem_cache_destroy(tcmu_cmd_cache);
+
+	return ret;
+}
+
+static void __exit tcmu_module_exit(void)
+{
+	transport_subsystem_release(&tcmu_template);
+	genl_unregister_family(&tcmu_genl_family);
+	root_device_unregister(tcmu_root_device);
+	kmem_cache_destroy(tcmu_cmd_cache);
+}
+
+MODULE_DESCRIPTION("TCM USER subsystem plugin");
+MODULE_AUTHOR("Shaohua Li <shli@kernel.org>");
+MODULE_AUTHOR("Andy Grover <agrover@redhat.com>");
+MODULE_LICENSE("GPL");
+
+module_init(tcmu_module_init);
+module_exit(tcmu_module_exit);
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 24e9033..0c13871 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -370,6 +370,7 @@ header-y += swab.h
 header-y += synclink.h
 header-y += sysctl.h
 header-y += sysinfo.h
+header-y += target_core_user.h
 header-y += taskstats.h
 header-y += tcp.h
 header-y += tcp_metrics.h
diff --git a/include/uapi/linux/target_core_user.h b/include/uapi/linux/target_core_user.h
new file mode 100644
index 0000000..7dcfbe6
--- /dev/null
+++ b/include/uapi/linux/target_core_user.h
@@ -0,0 +1,142 @@
+#ifndef __TARGET_CORE_USER_H
+#define __TARGET_CORE_USER_H
+
+/* This header is also used by userspace applications */
+
+#include <linux/types.h>
+#include <linux/uio.h>
+
+#ifndef __packed
+#define __packed                        __attribute__((packed))
+#endif
+
+#define TCMU_VERSION "1.0"
+
+/*
+ * Ring Design
+ * -----------
+ *
+ * The mmaped area is divided into three parts:
+ * 1) The mailbox (struct tcmu_mailbox, below)
+ * 2) The command ring
+ * 3) Everything beyond the command ring (data)
+ *
+ * The mailbox tells userspace the offset of the command ring from the
+ * start of the shared memory region, and how big the command ring is.
+ *
+ * The kernel passes SCSI commands to userspace by putting a struct
+ * tcmu_cmd_entry in the ring, updating mailbox->cmd_head, and poking
+ * userspace via uio's interrupt mechanism.
+ *
+ * tcmu_cmd_entry contains a header. If the header type is PAD,
+ * userspace should skip tcmu_hdr_get_len(hdr) bytes (mod cmdr_size)
+ * to find the next cmd_entry.
+ *
+ * Otherwise, the entry will contain offsets into the mmaped area that
+ * contain the cdb and data buffers -- the latter accessible via the
+ * iov array. iov addresses are also offsets into the shared area.
+ *
+ * When userspace has completed handling the command, it should set
+ * entry->rsp.scsi_status, fill in rsp.sense_buffer if appropriate,
+ * and then set mailbox->cmd_tail equal to the old cmd_tail plus
+ * tcmu_hdr_get_len(hdr), mod cmdr_size. If cmd_tail doesn't equal
+ * cmd_head, it should process the next entry the same way, and so on.
+ */
+
+#define TCMU_MAILBOX_VERSION 1
+#define ALIGN_SIZE 64 /* Big enough for a cacheline on most CPUs */
+
+struct tcmu_mailbox {
+	__u16 version;
+	__u16 flags;
+	__u32 cmdr_off;
+	__u32 cmdr_size;
+
+	__u32 cmd_head;
+
+	/* Updated by user. On its own cacheline */
+	__u32 cmd_tail __attribute__((__aligned__(ALIGN_SIZE)));
+
+} __packed;
+
+enum tcmu_opcode {
+	TCMU_OP_PAD = 0,
+	TCMU_OP_CMD,
+};
+
+/*
+ * Only a few opcodes, and length is 8-byte aligned, so use low bits for opcode.
+ */
+struct tcmu_cmd_entry_hdr {
+		__u32 len_op;
+} __packed;
+
+#define TCMU_OP_MASK 0x7
+
+static inline enum tcmu_opcode tcmu_hdr_get_op(struct tcmu_cmd_entry_hdr *hdr)
+{
+	return hdr->len_op & TCMU_OP_MASK;
+}
+
+static inline void tcmu_hdr_set_op(struct tcmu_cmd_entry_hdr *hdr, enum tcmu_opcode op)
+{
+	hdr->len_op &= ~TCMU_OP_MASK;
+	hdr->len_op |= (op & TCMU_OP_MASK);
+}
+
+static inline __u32 tcmu_hdr_get_len(struct tcmu_cmd_entry_hdr *hdr)
+{
+	return hdr->len_op & ~TCMU_OP_MASK;
+}
+
+static inline void tcmu_hdr_set_len(struct tcmu_cmd_entry_hdr *hdr, __u32 len)
+{
+	hdr->len_op &= TCMU_OP_MASK;
+	hdr->len_op |= len;
+}
+
+/* Currently the same as SCSI_SENSE_BUFFERSIZE */
+#define TCMU_SENSE_BUFFERSIZE 96
+
+struct tcmu_cmd_entry {
+	struct tcmu_cmd_entry_hdr hdr;
+
+	uint16_t cmd_id;
+	uint16_t __pad1;
+
+	union {
+		struct {
+			uint64_t cdb_off;
+			uint64_t iov_cnt;
+			struct iovec iov[0];
+		} req;
+		struct {
+			uint8_t scsi_status;
+			uint8_t __pad1;
+			uint16_t __pad2;
+			uint32_t __pad3;
+			char sense_buffer[TCMU_SENSE_BUFFERSIZE];
+		} rsp;
+	};
+
+} __packed;
+
+#define TCMU_OP_ALIGN_SIZE sizeof(uint64_t)
+
+enum tcmu_genl_cmd {
+	TCMU_CMD_UNSPEC,
+	TCMU_CMD_ADDED_DEVICE,
+	TCMU_CMD_REMOVED_DEVICE,
+	__TCMU_CMD_MAX,
+};
+#define TCMU_CMD_MAX (__TCMU_CMD_MAX - 1)
+
+enum tcmu_genl_attr {
+	TCMU_ATTR_UNSPEC,
+	TCMU_ATTR_DEVICE,
+	TCMU_ATTR_MINOR,
+	__TCMU_ATTR_MAX,
+};
+#define TCMU_ATTR_MAX (__TCMU_ATTR_MAX - 1)
+
+#endif
-- 
1.9.3



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-15 23:12 ` [PATCH 4/4] target: Add a user-passthrough backstore Andy Grover
@ 2014-09-19 21:14   ` Nicholas A. Bellinger
       [not found]     ` <lvi81j$ud8$2@ger.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Nicholas A. Bellinger @ 2014-09-19 21:14 UTC (permalink / raw)
  To: Andy Grover; +Cc: target-devel, linux-scsi, hch, shli, linux-kernel, rjones

Hi Andy,

A few comments are inline below.

On Mon, 2014-09-15 at 16:12 -0700, Andy Grover wrote:
> Add a LIO storage engine that presents commands to userspace for execution.
> This would allow more complex backstores to be implemented out-of-kernel,
> and also make experimentation a-la FUSE (but at the SCSI level -- "SUSE"?)
> possible.
> 
> It uses a mmap()able UIO device per LUN to share a command ring and data
> area. The commands are raw SCSI CDBs and iovs for in/out data. The command
> ring is also reused for returning scsi command status and optional sense
> data.
> 
> This implementation is based on Shaohua Li's earlier version but heavily
> modified. Differences include:
> 
> * Shared memory allocated by kernel, not locked-down user pages
> * Single ring for command request and response
> * Offsets instead of embedded pointers
> * Generic SCSI CDB passthrough instead of per-cmd specialization in ring
>   format.
> * Uses UIO device instead of anon_file passed in mailbox.
> * Optional in-kernel handling of some commands.
> 

Shaohua, do you have any comments on these changes..?  It's not clear
that all of these changes are beneficial, so I'd really like to hear
your input, given you're the original author of this code.

> The main reason for these differences is to permit greater resiliency
> if the user process dies or hangs.
> 
> Things not yet implemented (on purpose):
> 
> * Zero copy. The data area is flexible enough to allow page flipping or
>   backend-allocated pages to be used by fabrics, but it's not clear these
>   are performance wins. Can come later.
> * Out-of-order command completion by userspace. Possible to add by just
>   allowing userspace to change cmd_id in rsp cmd entries, but currently
>   not supported.
> * No locks between kernel cmd submission and completion routines. Sounds
>   like it's possible, but this can come later.
> * Sparse allocation of mmaped area. Current code vmallocs the whole thing.
>   If the mapped area was larger and not fully mapped then the driver would
>   have more freedom to change cmd and data area sizes based on demand.
> 
> Current code open issues:
> 
> * The use of idrs may be overkill -- we maybe can replace them with a
>   simple counter to generate cmd_ids, and a hash table to get a cmd_id's
>   associated pointer.
> * Use of a free-running counter for cmd ring instead of explicit modulo
>   math. This would require power-of-2 cmd ring size.
> 
> Signed-off-by: Andy Grover <agrover@redhat.com>
> ---
>  drivers/target/Kconfig                 |    5 +
>  drivers/target/Makefile                |    1 +
>  drivers/target/target_core_transport.c |    4 +
>  drivers/target/target_core_user.c      | 1169 ++++++++++++++++++++++++++++++++
>  include/uapi/linux/Kbuild              |    1 +
>  include/uapi/linux/target_core_user.h  |  142 ++++
>  6 files changed, 1322 insertions(+)
>  create mode 100644 drivers/target/target_core_user.c
>  create mode 100644 include/uapi/linux/target_core_user.h
> 

<SNIP>

> +
> +/* Core requires execute_rw be set, but just return unsupported */
> +static sense_reason_t
> +tcmu_retry_rw(struct se_cmd *cmd, struct scatterlist *sgl, u32 sgl_nents,
> +	      enum dma_data_direction data_direction)
> +{
> +	return TCM_UNSUPPORTED_SCSI_OPCODE;
> +}
> +
> +static struct sbc_ops tcmu_retry_ops = {
> +	.execute_rw		= tcmu_retry_rw,
> +};
> +
> +static void tcmu_work(struct work_struct *work)
> +{
> +	struct tcmu_cmd *cmd = container_of(work, struct tcmu_cmd, work);
> +	struct se_cmd *se_cmd = cmd->se_cmd;
> +
> +	target_execute_cmd(se_cmd);
> +	kmem_cache_free(tcmu_cmd_cache, cmd);
> +}
> +
> +static void tcmu_emulate_cmd(struct tcmu_cmd *cmd)
> +{
> +	struct se_cmd *se_cmd = cmd->se_cmd;
> +	sense_reason_t ret;
> +	unsigned long flags;
> +
> +	/* Re-run parsing to set execute_cmd to value for possible emulation */
> +	se_cmd->execute_cmd = NULL;
> +
> +	/*
> +	 * Can't optionally call generic_request_failure if flags indicate it's
> +	 * still being handled by us.
> +	*/
> +	spin_lock_irqsave(&se_cmd->t_state_lock, flags);
> +	se_cmd->transport_state &= ~CMD_T_BUSY|CMD_T_SENT;
> +	spin_unlock_irqrestore(&se_cmd->t_state_lock, flags);
> +
> +	ret = sbc_parse_cdb(se_cmd, &tcmu_retry_ops);
> +	if (ret == TCM_NO_SENSE && se_cmd->execute_cmd) {
> +		schedule_work(&cmd->work);
> +	} else {
> +		/* Can't emulate. */
> +		transport_generic_request_failure(se_cmd, ret);
> +		kmem_cache_free(tcmu_cmd_cache, cmd);
> +	}
> +}

So the idea of allowing the in-kernel CDB emulation to run after
user-space has returned unsupported opcode is problematic for a couple
of different reasons.

First, if the correct feature bits in standard INQUIRY + EVPD INQUIRY,
etc. are not populated by user-space to match what the in-kernel CDB
emulation actually supports, this can lead to undefined results on the
initiator side.

Also, for example, if user-space decided to emulate only a subset of PR
operations and leave the rest up to the in-kernel emulation, there's
not really a way to sync current state between the two, which can also
end up with undefined results.

So that said, I think a saner approach would be two define two modes of
operation for TCMU:

   *) Passthrough Mode: All CDBs are passed to user-space, and no 
      in-kernel emulation is done in the event of an unsupported
      opcode response.

   *) I/O Mode: I/O related CDBs are passed into user-space, but
      all control CDBs continue to be processed by in-kernel emulation.
      This effectively limits operation to TYPE_DISK, but with this mode
      it's probably OK to assume this. 

This seems like the best trade-off between flexibility when everything
should be handled by user-space, vs. functionality when only block
remapping of I/O is occurring in user-space code.

> +static int tcmu_check_expired_cmd(int id, void *p, void *data)
> +{
> +	struct tcmu_cmd *cmd = p;
> +
> +	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
> +		return 0;
> +
> +	if (!time_after(cmd->deadline, jiffies))
> +		return 0;
> +
> +	set_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags);
> +	target_complete_cmd(cmd->se_cmd, SAM_STAT_CHECK_CONDITION);
> +	cmd->se_cmd = NULL;
> +

AFAICT this is leaking tcmu_cmd..

Where does tcmu_cmd get released for expired commands..?

> +	return 0;
> +}
> +
> +static void tcmu_device_timedout(unsigned long data)
> +{
> +	struct tcmu_dev *udev = (struct tcmu_dev *)data;
> +	unsigned long flags;
> +	int handled;
> +
> +	handled = tcmu_handle_completions(udev);
> +
> +	pr_warn("%d completions handled from timeout\n", handled);
> +
> +	spin_lock_irqsave(&udev->commands_lock, flags);
> +	idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
> +	spin_unlock_irqrestore(&udev->commands_lock, flags);
> +
> +	/*
> +	 * We don't need to wakeup threads on wait_cmdr since they have their
> +	 * own timeout.
> +	 */
> +}
> +
> +static int tcmu_attach_hba(struct se_hba *hba, u32 host_id)
> +{
> +	struct tcmu_hba *tcmu_hba;
> +
> +	tcmu_hba = kzalloc(sizeof(struct tcmu_hba), GFP_KERNEL);
> +	if (!tcmu_hba)
> +		return -ENOMEM;
> +
> +	tcmu_hba->host_id = host_id;
> +	hba->hba_ptr = tcmu_hba;
> +
> +	return 0;
> +}
> +
> +static void tcmu_detach_hba(struct se_hba *hba)
> +{
> +	kfree(hba->hba_ptr);
> +	hba->hba_ptr = NULL;
> +}
> +
> +static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
> +{
> +	struct tcmu_dev *udev;
> +
> +	udev = kzalloc(sizeof(struct tcmu_dev), GFP_KERNEL);
> +	if (!udev)
> +		return NULL;
> +
> +	udev->name = kstrdup(name, GFP_KERNEL);
> +	if (!udev->name) {
> +		kfree(udev);
> +		return NULL;
> +	}
> +
> +	udev->hba = hba;
> +
> +	init_waitqueue_head(&udev->wait_cmdr);
> +	spin_lock_init(&udev->cmdr_lock);
> +
> +	idr_init(&udev->commands);
> +	spin_lock_init(&udev->commands_lock);
> +
> +	setup_timer(&udev->timeout, tcmu_device_timedout,
> +		(unsigned long)udev);
> +
> +	kref_init(&udev->ref);
> +

udev->ref is unnecessary. 

> +	return &udev->se_dev;
> +}

<SNIP>

> +static int tcmu_configure_device(struct se_device *dev)
> +{
> +	struct tcmu_dev *udev = TCMU_DEV(dev);
> +	struct tcmu_hba *hba = udev->hba->hba_ptr;
> +	struct uio_info *info;
> +	struct tcmu_mailbox *mb;
> +	size_t size;
> +	size_t used;
> +	int ret = 0;
> +	char *str;
> +
> +	info = &udev->uio_info;
> +
> +	size = snprintf(NULL, 0, "tcm-user/%u/%s/%s", hba->host_id, udev->name, udev->dev_config);
> +	size += 1; /* for \0 */
> +	str = kmalloc(size, GFP_KERNEL);
> +	if (!str)
> +		return -ENOMEM;
> +
> +	used = snprintf(str, size, "tcm-user/%u/%s", hba->host_id, udev->name);
> +
> +	if (udev->dev_config[0])
> +		snprintf(str + used, size - used, "/%s", udev->dev_config);
> +
> +	info->name = str;
> +
> +	udev->mb_addr = vzalloc(TCMU_RING_SIZE);
> +	if (!udev->mb_addr) {
> +		kfree(info->name);
> +		return -ENOMEM;
> +	}
> +
> +	/* mailbox fits in first part of CMDR space */
> +	udev->cmdr_size = CMDR_SIZE - CMDR_OFF;
> +	udev->data_off = CMDR_SIZE;
> +	udev->data_size = TCMU_RING_SIZE - CMDR_SIZE;
> +
> +	mb = udev->mb_addr;
> +	mb->version = 1;
> +	mb->cmdr_off = CMDR_OFF;
> +	mb->cmdr_size = udev->cmdr_size;
> +
> +	WARN_ON(!PAGE_ALIGNED(udev->data_off));
> +	WARN_ON(udev->data_size % PAGE_SIZE);
> +
> +	info->version = "1";
> +
> +	info->mem[0].name = "tcm-user command & data buffer";
> +	info->mem[0].addr = (phys_addr_t) udev->mb_addr;
> +	info->mem[0].size = TCMU_RING_SIZE;
> +	info->mem[0].memtype = UIO_MEM_VIRTUAL;
> +
> +	info->irqcontrol = tcmu_irqcontrol;
> +	info->irq = UIO_IRQ_CUSTOM;
> +
> +	info->mmap = tcmu_mmap;
> +	info->open = tcmu_open;
> +	info->release = tcmu_release;
> +
> +	ret = uio_register_device(tcmu_root_device, info);
> +	if (ret) {
> +		kfree(info->name);
> +		vfree(udev->mb_addr);
> +		return ret;
> +	}
> +
> +	/* Other attributes can be configured in userspace */
> +	dev->dev_attrib.hw_block_size = 4096;

Assuming 4k hw_block_size by default is problematic for initiators that
don't support it.

Switch to 512 blocksize by default to be safe.

> +	dev->dev_attrib.hw_max_sectors = 128;
> +	dev->dev_attrib.hw_queue_depth = 128;
> +
> +	tcmu_netlink_event(TCMU_CMD_ADDED_DEVICE, udev->uio_info.name,
> +			   udev->uio_info.uio_dev->minor);
> +
> +	return ret;
> +}
> +
> +static int tcmu_check_pending_cmd(int id, void *p, void *data)
> +{
> +	struct tcmu_cmd *cmd = p;
> +
> +	if (test_bit(TCMU_CMD_BIT_EXPIRED, &cmd->flags))
> +		return 0;
> +	return -EINVAL;
> +}
> +
> +static void tcmu_free_device(struct se_device *dev)
> +{
> +	int i;
> +	struct tcmu_dev *udev = TCMU_DEV(dev);
> +
> +	del_timer_sync(&udev->timeout);
> +
> +	vfree(udev->mb_addr);
> +
> +	/* upper layer should drain all requests before calling this */
> +	spin_lock_irq(&udev->commands_lock);
> +	i = idr_for_each(&udev->commands, tcmu_check_pending_cmd, NULL);
> +	idr_destroy(&udev->commands);
> +	spin_unlock_irq(&udev->commands_lock);
> +	WARN_ON(i);
> +
> +	tcmu_netlink_event(TCMU_CMD_REMOVED_DEVICE, udev->uio_info.name,
> +			   udev->uio_info.uio_dev->minor);
> +
> +	uio_unregister_device(&udev->uio_info);
> +
> +	kfree(udev->uio_info.name);
> +
> +	kfree(udev->name);
> +
> +	kref_put(&udev->ref, tcmu_destroy_device);
> +}
> +

The udev->ref here looks unnecessary; just release udev now.



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] target: Add a user-passthrough backstore
       [not found]     ` <lvi81j$ud8$2@ger.gmane.org>
@ 2014-09-19 23:39       ` Nicholas A. Bellinger
  2014-09-19 23:51         ` Alex Elsayed
  0 siblings, 1 reply; 13+ messages in thread
From: Nicholas A. Bellinger @ 2014-09-19 23:39 UTC (permalink / raw)
  To: Alex Elsayed; +Cc: linux-scsi, target-devel, linux-kernel

On Fri, 2014-09-19 at 14:43 -0700, Alex Elsayed wrote:
> Nicholas A. Bellinger wrote:
> 
> <snip>
> > So the idea of allowing the in-kernel CDB emulation to run after
> > user-space has returned unsupported opcode is problematic for a couple
> > of different reasons.
> > 
> > First, if the correct feature bits in standard INQUIRY + EVPD INQUIRY,
> > etc are not populated by user-space to match what in-kernel CDB
> > emulation actually supports, this can result in potentially undefined
> > results initiator side.
> > 
> > Also for example, if user-space decided to emulate only a subset of PR
> > operations, and leaves the rest of it up to the in-kernel emulation,
> > there's not really a way to sync current state between the two, which
> > also can end up with undefined results.
> > 
> > So that said, I think a saner approach would be to define two modes of
> > operation for TCMU:
> > 
> >    *) Passthrough Mode: All CDBs are passed to user-space, and no
> >       in-kernel emulation is done in the event of an unsupported
> >       opcode response.
> > 
> >    *) I/O Mode: I/O related CDBs are passed into user-space, but
> >       all control CDBs continue to be processed by in-kernel emulation.
> >       This effectively limits operation to TYPE_DISK, but with this mode
> >       it's probably OK to assume this.
> > 
> > This seems like the best trade-off between flexibility when everything
> > should be handled by user-space, vs. functionality when only block
> > remapping of I/O is occurring in user-space code.
> 
> The problem there is that the first case has all the issues of pscsi and 
> simply becomes a performance optimization over tgt+iscsi client+pscsi and 
> the latter case excludes the main use cases I'm interested in - OSDs, media 
> changers, optical discs (the biggest thing for me), and so forth.
> 
> One of the main things I want to do with this is hook up a plugin that uses 
> libmirage to handle various optical disc image formats.
> 

Not sure I follow..  How does the proposed passthrough mode prevent
someone from emulating OSDs, media changers, optical disks or anything
else in userspace with TCMU..?

The main thing that the above comments highlight is why attempting to
combine the existing in-kernel emulation with a userspace backend
providing its own emulation can open up a number of problems with
mismatched state between the two.

--nab



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-19 23:39       ` Nicholas A. Bellinger
@ 2014-09-19 23:51         ` Alex Elsayed
  2014-09-20  0:35           ` Andy Grover
  2014-09-22 20:36           ` Nicholas A. Bellinger
  0 siblings, 2 replies; 13+ messages in thread
From: Alex Elsayed @ 2014-09-19 23:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-scsi, target-devel, linux-scsi, linux-kernel

Nicholas A. Bellinger wrote:

> On Fri, 2014-09-19 at 14:43 -0700, Alex Elsayed wrote:
>> Nicholas A. Bellinger wrote:
>> 
>> <snip>
>> > So the idea of allowing the in-kernel CDB emulation to run after
>> > user-space has returned unsupported opcode is problematic for a couple
>> > of different reasons.
>> > 
>> > First, if the correct feature bits in standard INQUIRY + EVPD INQUIRY,
>> > etc are not populated by user-space to match what in-kernel CDB
>> > emulation actually supports, this can result in potentially undefined
>> > results initiator side.
>> > 
>> > Also for example, if user-space decided to emulate only a subset of PR
>> > operations, and leaves the rest of it up to the in-kernel emulation,
>> > there's not really a way to sync current state between the two, which
>> > also can end up with undefined results.
>> > 
>> > So that said, I think a saner approach would be to define two modes of
>> > operation for TCMU:
>> > 
>> >    *) Passthrough Mode: All CDBs are passed to user-space, and no
>> >       in-kernel emulation is done in the event of an unsupported
>> >       opcode response.
>> > 
>> >    *) I/O Mode: I/O related CDBs are passed into user-space, but
>> >       all control CDBs continue to be processed by in-kernel emulation.
>> >       This effectively limits operation to TYPE_DISK, but with this
>> >       mode it's probably OK to assume this.
>> > 
>> > This seems like the best trade-off between flexibility when everything
>> > should be handled by user-space, vs. functionality when only block
>> > remapping of I/O is occurring in user-space code.
>> 
>> The problem there is that the first case has all the issues of pscsi and
>> simply becomes a performance optimization over tgt+iscsi client+pscsi and
>> the latter case excludes the main use cases I'm interested in - OSDs,
>> media changers, optical discs (the biggest thing for me), and so forth.
>> 
>> One of the main things I want to do with this is hook up a plugin that
>> uses libmirage to handle various optical disc image formats.
>> 
> 
> Not sure I follow..  How does the proposed passthrough mode prevent
> someone from emulating OSDs, media changers, optical disks or anything
> else in userspace with TCMU..?
> 
> The main thing that the above comments highlight is why attempting to
> combine the existing in-kernel emulation with a userspace backend
> providing its own emulation can open up a number of problems with
> mismatched state between the two.

It doesn't prevent it, but it _does_ put it in the exact same place as PSCSI 
regarding the warnings on the wiki. It means that if someone wants to 
implement (say) the optical disc or OSD CDBs, they then lose out on ALUA &co 
unless they implement it themselves - which seems unnecessary and painful, 
since those should really be disjoint. In particular, an OSD backed by RADOS 
objects could be a very nice thing indeed, _and_ could really benefit from 
ALUA.



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-19 23:51         ` Alex Elsayed
@ 2014-09-20  0:35           ` Andy Grover
  2014-09-22 20:58             ` Nicholas A. Bellinger
  2014-09-22 20:36           ` Nicholas A. Bellinger
  1 sibling, 1 reply; 13+ messages in thread
From: Andy Grover @ 2014-09-20  0:35 UTC (permalink / raw)
  To: Alex Elsayed, target-devel; +Cc: linux-scsi, linux-kernel

On 09/19/2014 04:51 PM, Alex Elsayed wrote:

>> Not sure I follow..  How does the proposed passthrough mode prevent
>> someone from emulating OSDs, media changers, optical disks or anything
>> else in userspace with TCMU..?
>>
>> The main thing that the above comments highlight is why attempting to
>> combine the existing in-kernel emulation with a userspace backend
> >> providing its own emulation can open up a number of problems with
>> mismatched state between the two.
>
> It doesn't prevent it, but it _does_ put it in the exact same place as PSCSI
> regarding the warnings on the wiki. It means that if someone wants to
> implement (say) the optical disc or OSD CDBs, they then lose out on ALUA &co
> unless they implement it themselves - which seems unnecessary and painful,
> since those should really be disjoint. In particular, an OSD backed by RADOS
> objects could be a very nice thing indeed, _and_ could really benefit from
> ALUA.

Some possible solutions:

1) The first time TCMU sees an opcode it passes it up and notes whether 
it is handled or not. If it was handled then future cmds with that 
opcode are passed up but not retried in the kernel. If it was not 
handled then it and all future commands with that opcode are handled by 
the kernel and not passed up.

2) Same as #1 but TCMU operates on SCSI spec granularity - e.g. handling 
any SSC opcode means userspace must handle all SSC commands.

3) Like #2 but define opcode groupings that must all be implemented 
together, independent of the specifications.

4) Have passthrough mode set at creation, but with more than two modes, 
either grouped by SCSI spec or our own groupings.

5) Never pass up certain opcodes, like the ALUA ones or whatever.


Have a good weekend -- Andy



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-19 23:51         ` Alex Elsayed
  2014-09-20  0:35           ` Andy Grover
@ 2014-09-22 20:36           ` Nicholas A. Bellinger
  1 sibling, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2014-09-22 20:36 UTC (permalink / raw)
  To: Alex Elsayed; +Cc: target-devel, linux-scsi, linux-kernel

On Fri, 2014-09-19 at 16:51 -0700, Alex Elsayed wrote:
> Nicholas A. Bellinger wrote:
> 
> > On Fri, 2014-09-19 at 14:43 -0700, Alex Elsayed wrote:
> >> Nicholas A. Bellinger wrote:
> >> 
> >> <snip>
> >> > So the idea of allowing the in-kernel CDB emulation to run after
> >> > user-space has returned unsupported opcode is problematic for a couple
> >> > of different reasons.
> >> > 
> >> > First, if the correct feature bits in standard INQUIRY + EVPD INQUIRY,
> >> > etc are not populated by user-space to match what in-kernel CDB
> >> > emulation actually supports, this can result in potentially undefined
> >> > results initiator side.
> >> > 
> >> > Also for example, if user-space decided to emulate only a subset of PR
> >> > operations, and leaves the rest of it up to the in-kernel emulation,
> >> > there's not really a way to sync current state between the two, which
> >> > also can end up with undefined results.
> >> > 
> >> > So that said, I think a saner approach would be to define two modes of
> >> > operation for TCMU:
> >> > 
> >> >    *) Passthrough Mode: All CDBs are passed to user-space, and no
> >> >       in-kernel emulation is done in the event of an unsupported
> >> >       opcode response.
> >> > 
> >> >    *) I/O Mode: I/O related CDBs are passed into user-space, but
> >> >       all control CDBs continue to be processed by in-kernel emulation.
> >> >       This effectively limits operation to TYPE_DISK, but with this
> >> >       mode it's probably OK to assume this.
> >> > 
> >> > This seems like the best trade-off between flexibility when everything
> >> > should be handled by user-space, vs. functionality when only block
> >> > remapping of I/O is occurring in user-space code.
> >> 
> >> The problem there is that the first case has all the issues of pscsi and
> >> simply becomes a performance optimization over tgt+iscsi client+pscsi and
> >> the latter case excludes the main use cases I'm interested in - OSDs,
> >> media changers, optical discs (the biggest thing for me), and so forth.
> >> 
> >> One of the main things I want to do with this is hook up a plugin that
> >> uses libmirage to handle various optical disc image formats.
> >> 
> > 
> > Not sure I follow..  How does the proposed passthrough mode prevent
> > someone from emulating OSDs, media changers, optical disks or anything
> > else in userspace with TCMU..?
> > 
> > The main thing that the above comments highlight is why attempting to
> > combine the existing in-kernel emulation with a userspace backend
> > providing its own emulation can open up a number of problems with
> > mismatched state between the two.
> 
> It doesn't prevent it, but it _does_ put it in the exact same place as PSCSI 
> regarding the warnings on the wiki.

Not exactly.  PSCSI has no mode where control commands are handled by
the in-kernel emulation.

> It means that if someone wants to 
> implement (say) the optical disc or OSD CDBs, they then lose out on ALUA &co 
> unless they implement it themselves - which seems unnecessary and painful, 
> since those should really be disjoint. In particular, an OSD backed by RADOS 
> objects could be a very nice thing indeed, _and_ could really benefit from 
> ALUA.
> 

For OSD, strictly speaking in the T10 sense, the use-case you're describing
would fall under the guise of I/O mode, where the OSD specific CDBs
would be propagated into the user-space backend driver.  This would
probably be OK, and the only thing that would need to change from the
perspective of existing in-kernel emulation is to expose TYPE_OSD
instead of TYPE_DISK.  Eg: AFAICT, there aren't any OSD specific EVPD or
mode pages.

As for optical discs, things are more complex because of mode pages
specific to MMC that effectively make it difficult to mark a clear
delineation between a hybrid I/O mode approach where some CDBs are
handled by in-kernel emulation, and some are handled in user-space.

So that said, an I/O mode that passes through OSD specific CDBs into
user-space should allow ALUA to still work as expected for TYPE_OSD,
where with optical drives it's less clear how to make a clean split, or
if anyone actually cares enough about ALUA emulation with TYPE_ROM.

--nab



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-20  0:35           ` Andy Grover
@ 2014-09-22 20:58             ` Nicholas A. Bellinger
  2014-09-22 21:00               ` Andy Grover
  0 siblings, 1 reply; 13+ messages in thread
From: Nicholas A. Bellinger @ 2014-09-22 20:58 UTC (permalink / raw)
  To: Andy Grover; +Cc: Alex Elsayed, target-devel, linux-scsi, linux-kernel

On Fri, 2014-09-19 at 17:35 -0700, Andy Grover wrote:
> On 09/19/2014 04:51 PM, Alex Elsayed wrote:
> 
> >> Not sure I follow..  How does the proposed passthrough mode prevent
> >> someone from emulating OSDs, media changers, optical disks or anything
> >> else in userspace with TCMU..?
> >>
> >> The main thing that the above comments highlight is why attempting to
> >> combine the existing in-kernel emulation with a userspace backend
> >> providing its own emulation can open up a number of problems with
> >> mismatched state between the two.
> >
> > It doesn't prevent it, but it _does_ put it in the exact same place as PSCSI
> > regarding the warnings on the wiki. It means that if someone wants to
> > implement (say) the optical disc or OSD CDBs, they then lose out on ALUA &co
> > unless they implement it themselves - which seems unnecessary and painful,
> > since those should really be disjoint. In particular, an OSD backed by RADOS
> > objects could be a very nice thing indeed, _and_ could really benefit from
> > ALUA.
> 
> Some possible solutions:
> 
> 1) The first time TCMU sees an opcode it passes it up and notes whether 
> it is handled or not. If it was handled then future cmds with that 
> opcode are passed up but not retried in the kernel. If it was not 
> handled then it and all future commands with that opcode are handled by 
> the kernel and not passed up.
> 

I'd rather avoid having to keep around a bunch of extra per-device
state, and having to send all opcodes out to user-space first is messy.

> 2) Same as #1 but TCMU operates on SCSI spec granularity - e.g. handling 
> any SSC opcode means userspace must handle all SSC commands.
> 

It's unclear how effective this will ultimately be, given that some
different standards include EVPD and/or mode pages specific to the
peripheral type.

> 3) Like #2 but define opcode groupings that must all be implemented 
> together, independent of the specifications.
> 

This is overly complex, and the opcode groupings would be problematic
to maintain.

> 4) Have passthrough mode set at creation, but with more than two modes, 
> either grouped by SCSI spec or our own groupings.
> 

Passthrough mode set at creation is the correct default, but grouping
things by SCSI spec and custom opcode groupings doesn't really make a
lot of sense to me.

> 5) Never pass up certain opcodes, like the ALUA ones or whatever.
> 

This hurts flexibility, given that some folks might already have their
own ALUA implemented in a user-space driver, and would like to be able
to run in pure passthrough mode.

So I'd still like to start for an initial merge with the two different
modes mentioned earlier.  The pure-passthrough mode where everything is
handled by user-space, and an I/O passthrough mode where only
SCF_SCSI_DATA_CDB is passed along to userspace, but all other control
CDBs are handled by existing in-kernel emulation.

Andy, can you re-spin a series with these two approaches in mind..?

--nab



* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-22 20:58             ` Nicholas A. Bellinger
@ 2014-09-22 21:00               ` Andy Grover
  2014-09-22 21:03                 ` Nicholas A. Bellinger
  0 siblings, 1 reply; 13+ messages in thread
From: Andy Grover @ 2014-09-22 21:00 UTC (permalink / raw)
  To: Nicholas A. Bellinger
  Cc: Alex Elsayed, target-devel, linux-scsi, linux-kernel

On 09/22/2014 01:58 PM, Nicholas A. Bellinger wrote:
> So I'd still like to start for an initial merge with the two different
> modes mentioned earlier.  The pure-passthrough mode where everything is
> handled by user-space, and an I/O passthrough mode where only
> SCF_SCSI_DATA_CDB is passed along to userspace, but all other control
> CDBs are handled by existing in-kernel emulation.
>
> Andy, can you re-spin a series with these two approaches in mind..?

Will do. This is actually close to what the initial RFC version I posted 
did.

-- Andy




* Re: [PATCH 4/4] target: Add a user-passthrough backstore
  2014-09-22 21:00               ` Andy Grover
@ 2014-09-22 21:03                 ` Nicholas A. Bellinger
  0 siblings, 0 replies; 13+ messages in thread
From: Nicholas A. Bellinger @ 2014-09-22 21:03 UTC (permalink / raw)
  To: Andy Grover; +Cc: Alex Elsayed, target-devel, linux-scsi, linux-kernel

On Mon, 2014-09-22 at 14:00 -0700, Andy Grover wrote:
> On 09/22/2014 01:58 PM, Nicholas A. Bellinger wrote:
> > So I'd still like to start for an initial merge with the two different
> > modes mentioned earlier.  The pure-passthrough mode where everything is
> > handled by user-space, and an I/O passthrough mode where only
> > SCF_SCSI_DATA_CDB is passed along to userspace, but all other control
> > CDBs are handled by existing in-kernel emulation.
> >
> > Andy, can you re-spin a series with these two approaches in mind..?
> 
> Will do. This is actually close to what the initial RFC version I posted 
> did.
> 

Also, it would be nice to include the example user-space code that
exercises the logic in the patch series, so it can be carried along
with the kernel sources.

--nab



end of thread, other threads:[~2014-09-22 21:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-15 23:12 [PATCH 0/4] Userspace Passthrough backstore for LIO Andy Grover
2014-09-15 23:12 ` [PATCH 1/4] target: Remove unneeded check in sbc_parse_cdb Andy Grover
2014-09-15 23:12 ` [PATCH 2/4] uio: Export definition of struct uio_device Andy Grover
2014-09-15 23:12 ` [PATCH 3/4] target: Add documentation on the target userspace pass-through driver Andy Grover
2014-09-15 23:12 ` [PATCH 4/4] target: Add a user-passthrough backstore Andy Grover
2014-09-19 21:14   ` Nicholas A. Bellinger
     [not found]     ` <lvi81j$ud8$2@ger.gmane.org>
2014-09-19 23:39       ` Nicholas A. Bellinger
2014-09-19 23:51         ` Alex Elsayed
2014-09-20  0:35           ` Andy Grover
2014-09-22 20:58             ` Nicholas A. Bellinger
2014-09-22 21:00               ` Andy Grover
2014-09-22 21:03                 ` Nicholas A. Bellinger
2014-09-22 20:36           ` Nicholas A. Bellinger
