* [vmw_vmci 00/11] VMCI for Linux
@ 2012-07-26 23:39 ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

In an effort to improve the out-of-the-box experience with Linux
kernels for VMware users, VMware is working on readying the Virtual
Machine Communication Interface (vmw_vmci) and VMCI Sockets
(vmw_vsock) kernel modules for inclusion in the Linux kernel. The
purpose of this post is to acquire feedback on the vmw_vmci kernel
module. The vmw_vsock kernel module will be presented in a later post.

VMCI allows virtual machines to communicate with host kernel modules
and the VMware hypervisors. User level applications both in a virtual
machine and on the host can use vmw_vmci through VMCI Sockets, a socket
address family designed to be compatible with UDP and TCP at the
interface level. Today, VMCI and VMCI Sockets are used by the VMware
shared folders (HGFS) and various VMware Tools components inside the
guest for zero-config, network-less access to VMware host services. In
addition, VMware's users rely on VMCI Sockets for various
applications where the virtual machine's network access is
restricted or non-existent. Examples include VMs communicating
with device proxies for proprietary hardware running as host
applications, and automated testing of applications running within
virtual machines.
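
To make the "compatible with UDP and TCP at the interface level" point
concrete, below is a minimal user-space sketch of a stream client. It is
only a sketch: it assumes the AF_VSOCK address family and struct
sockaddr_vm exposed through <linux/vm_sockets.h> by the vmw_vsock work
mentioned above, and the host CID constant and port number used here are
purely illustrative.

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr;
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;

	memset(&addr, 0, sizeof(addr));
	addr.svm_family = AF_VSOCK;
	addr.svm_cid = VMADDR_CID_HOST;	/* talk to the host side */
	addr.svm_port = 9999;		/* illustrative service port */

	/* From here on the socket behaves like a TCP socket. */
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
		send(fd, "hello", 5, 0);

	close(fd);
	return 0;
}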

In a virtual machine, VMCI is exposed as a regular PCI device. The
primary communication mechanisms supported are a point-to-point
bidirectional transport based on a pair of memory-mapped queues, and
asynchronous notifications in the form of datagrams and
doorbells. These features are available to kernel level components
such as HGFS and VMCI Sockets through the VMCI kernel API. In addition
to this, the VMCI kernel API provides support for receiving events
related to the state of the VMCI communication channels, and the
virtual machine itself.
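
As an illustration of the datagram mechanism described above, the sketch
below condenses how ctx_fire_notification() in patch 01/11 of this series
builds a context-removed event datagram and hands it to the routing code.
The destination handle is a placeholder for a subscriber; everything else
uses the structures and helpers introduced by that patch.

/*
 * Condensed from ctx_fire_notification() in patch 01/11: build a
 * VMCI_EVENT_CTX_REMOVED event datagram and dispatch it to one
 * subscriber.  The destination handle is a placeholder.
 */
static int fire_ctx_removed(struct vmci_handle dst, uint32_t removed_cid)
{
	struct vmci_event_msg *eMsg;
	struct vmci_event_payld_ctx *evPayload;
	char buf[sizeof(*eMsg) + sizeof(*evPayload)];

	eMsg = (struct vmci_event_msg *)buf;
	memset(eMsg, 0, sizeof(buf));

	/* Datagram header: destination, source and payload size. */
	eMsg->hdr.dst = dst;
	eMsg->hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
					 VMCI_CONTEXT_RESOURCE_ID);
	eMsg->hdr.payloadSize = sizeof(buf) - sizeof(eMsg->hdr);

	/* Event payload: which context went away. */
	eMsg->eventData.event = VMCI_EVENT_CTX_REMOVED;
	evPayload = vmci_event_data_payload(&eMsg->eventData);
	evPayload->contextID = removed_cid;

	/* Route the datagram to the subscriber's context. */
	return vmci_dg_dispatch(VMCI_HYPERVISOR_CONTEXT_ID,
				(struct vmci_dg *)eMsg, false);
}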

Outside the virtual machine, the host side support of the VMCI kernel
module makes the same VMCI kernel API available to VMCI endpoints on
the host. In addition to this, the host side manages each VMCI device
in a virtual machine through a context object. This context object
serves to identify the virtual machine for communication, and to track
the resource consumption of the given VMCI device. Both operations
related to communication between the virtual machine and the host
kernel, and those related to the management of the VMCI device state
in the host kernel, are invoked by the user level component of the
hypervisor through a set of ioctls on the VMCI device node.  To
provide seamless support for nested virtualization, where a virtual
machine may use both a VMCI PCI device to talk to its hypervisor, and
the VMCI host side support to run nested virtual machines, the VMCI
host and virtual machine support are combined in a single kernel
module.
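
For the host-side context handling specifically, the sketch below shows
the lifecycle that patch 01/11 exposes, with signatures taken from
vmci_context.h in that patch. The ioctl plumbing in vmci_driver.c that
actually drives these calls is not reproduced here, and the privilege
flag and version arguments are illustrative placeholders.

/*
 * Sketch of the host-side context lifecycle from patch 01/11.
 * The flag and version arguments below are placeholders; the real
 * values come from the VMX via ioctls handled in vmci_driver.c.
 */
static int host_create_vm_context(uint32_t cid, uid_t *owner,
				  struct vmci_ctx **ctx)
{
	/* Allocates the context and links it into the global list
	 * with a reference count of one. */
	return vmci_ctx_init_ctx(cid, VMCI_NO_PRIVILEGE_FLAGS,
				 0 /* eventHnd, unused here */,
				 VMCI_VERSION_HOSTQP, owner, ctx);
}

static void host_destroy_vm_context(struct vmci_ctx *ctx)
{
	/* Unlinks the context and drops the creation reference; the
	 * memory is freed once the last vmci_ctx_get() user calls
	 * vmci_ctx_release(). */
	vmci_ctx_release_ctx(ctx);
}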

For additional information about the use of VMCI and in particular
VMCI Sockets, please refer to the VMCI Socket Programming Guide
available at https://www.vmware.com/support/developer/vmci-sdk/.

Andrew Stiegmann (stieg) (11):
  Apply VMCI context code
  Apply VMCI datagram code
  Apply VMCI doorbell code
  Apply VMCI driver code
  Apply VMCI event code
  Apply dynamic array code
  Apply VMCI hash table
  Apply VMCI queue pairs
  Apply VMCI resource code
  Apply vmci routing code
  Apply the header code to make VMCI build

 drivers/misc/Kconfig                      |    1 +
 drivers/misc/Makefile                     |    1 +
 drivers/misc/vmw_vmci/Kconfig             |   16 +
 drivers/misc/vmw_vmci/Makefile            |   43 +
 drivers/misc/vmw_vmci/vmci_common_int.h   |   58 +
 drivers/misc/vmw_vmci/vmci_context.c      | 1269 +++++++++++
 drivers/misc/vmw_vmci/vmci_context.h      |  161 ++
 drivers/misc/vmw_vmci/vmci_datagram.c     |  586 +++++
 drivers/misc/vmw_vmci/vmci_datagram.h     |   56 +
 drivers/misc/vmw_vmci/vmci_doorbell.c     |  751 ++++++
 drivers/misc/vmw_vmci/vmci_doorbell.h     |   57 +
 drivers/misc/vmw_vmci/vmci_driver.c       | 2298 +++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_driver.h       |   52 +
 drivers/misc/vmw_vmci/vmci_event.c        |  451 ++++
 drivers/misc/vmw_vmci/vmci_event.h        |   29 +
 drivers/misc/vmw_vmci/vmci_handle_array.c |  174 ++
 drivers/misc/vmw_vmci/vmci_handle_array.h |   50 +
 drivers/misc/vmw_vmci/vmci_hash_table.c   |  332 +++
 drivers/misc/vmw_vmci/vmci_hash_table.h   |   56 +
 drivers/misc/vmw_vmci/vmci_queue_pair.c   | 3548 +++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_queue_pair.h   |  182 ++
 drivers/misc/vmw_vmci/vmci_resource.c     |  194 ++
 drivers/misc/vmw_vmci/vmci_resource.h     |   62 +
 drivers/misc/vmw_vmci/vmci_route.c        |  241 ++
 drivers/misc/vmw_vmci/vmci_route.h        |   34 +
 include/linux/vmw_vmci_api.h              |   89 +
 include/linux/vmw_vmci_defs.h             |  921 ++++++++
 27 files changed, 11712 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/Kconfig
 create mode 100644 drivers/misc/vmw_vmci/Makefile
 create mode 100644 drivers/misc/vmw_vmci/vmci_common_int.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_datagram.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_datagram.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_event.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_event.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_handle_array.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_handle_array.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_hash_table.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_hash_table.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_resource.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_resource.h
 create mode 100644 drivers/misc/vmw_vmci/vmci_route.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_route.h
 create mode 100644 include/linux/vmw_vmci_api.h
 create mode 100644 include/linux/vmw_vmci_defs.h


* [vmw_vmci 01/11] Apply VMCI context code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Context code maintains state for VMCI and allows the driver
to communicate with multiple VMs.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_context.c | 1269 ++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_context.h |  161 +++++
 2 files changed, 1430 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.h

diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
new file mode 100644
index 0000000..46faf10
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_context.c
@@ -0,0 +1,1269 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/highmem.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_queue_pair.h"
+
+/*
+ * List of current VMCI contexts.
+ */
+static struct {
+	struct list_head head;
+	spinlock_t lock;
+	spinlock_t firingLock;
+} ctx_list;
+
+
+static void ctx_signal_notify(struct vmci_ctx *context)
+{
+	if (context->notify)
+		*context->notify = true;
+}
+
+static void ctx_clear_notify(struct vmci_ctx *context)
+{
+	if (context->notify)
+		*context->notify = false;
+}
+
+/*
+ * If nothing requires the attention of the guest, clears both
+ * the notify flag and the pending call.
+ */
+static void ctx_clear_notify_call(struct vmci_ctx *context)
+{
+	if (context->pendingDatagrams == 0 &&
+	    vmci_handle_arr_get_size(context->pendingDoorbellArray) == 0)
+		ctx_clear_notify(context);
+}
+
+/*
+ * Check if a context with the specified context ID exists.
+ * Assumes the ctx_list.lock is held.
+ */
+static bool ctx_exists_locked(uint32_t cid)
+{
+	struct vmci_ctx *context;
+
+	list_for_each_entry(context, &ctx_list.head, listItem) {
+		if (context->cid == cid)
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Sets the context's notify flag iff datagrams are pending for this
+ * context.  Called from vmci_setup_notify().
+ */
+void vmci_ctx_check_signal_notify(struct vmci_ctx *context)
+{
+	ASSERT(context);
+
+	spin_lock(&ctx_list.lock);
+	if (context->pendingDatagrams)
+		ctx_signal_notify(context);
+	spin_unlock(&ctx_list.lock);
+}
+
+int __init vmci_ctx_init(void)
+{
+	INIT_LIST_HEAD(&ctx_list.head);
+
+	spin_lock_init(&ctx_list.lock);
+	spin_lock_init(&ctx_list.firingLock);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Allocates and initializes a VMCI context.
+ */
+int vmci_ctx_init_ctx(uint32_t cid,
+		      uint32_t privFlags,
+		      uintptr_t eventHnd,
+		      int userVersion,
+		      uid_t *user, struct vmci_ctx **outContext)
+{
+	struct vmci_ctx *context;
+	int result;
+
+	if (privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS) {
+		pr_devel("Invalid flag (flags=0x%x) for VMCI context.",
+			 privFlags);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	if (userVersion == 0)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = kzalloc(sizeof *context, GFP_KERNEL);
+	if (context == NULL) {
+		pr_warn("Failed to allocate memory for VMCI context.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	INIT_LIST_HEAD(&context->listItem);
+	INIT_LIST_HEAD(&context->datagramQueue);
+
+	context->userVersion = userVersion;
+
+	context->queuePairArray = vmci_handle_arr_create(0);
+	if (!context->queuePairArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->doorbellArray = vmci_handle_arr_create(0);
+	if (!context->doorbellArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->pendingDoorbellArray = vmci_handle_arr_create(0);
+	if (!context->pendingDoorbellArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->notifierArray = vmci_handle_arr_create(0);
+	if (context->notifierArray == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	spin_lock_init(&context->lock);
+
+	atomic_set(&context->refCount, 1);
+
+	/* Initialize host-specific VMCI context. */
+	init_waitqueue_head(&context->hostContext.waitQueue);
+
+	context->privFlags = privFlags;
+
+	/*
+	 * If we collide with an existing context we generate a new one
+	 * and use it instead. The VMX will determine if regeneration
+	 * is okay. Since there aren't 4B - 16 VMs running on a given
+	 * host, the loop below will terminate.
+	 */
+	spin_lock(&ctx_list.lock);
+	ASSERT(cid != VMCI_INVALID_ID);
+	while (ctx_exists_locked(cid)) {
+
+		/*
+		 * If the cid is below our limit and we collide we are
+		 * creating duplicate contexts internally so we want
+		 * to assert fail in that case.
+		 */
+		ASSERT(cid >= VMCI_RESERVED_CID_LIMIT);
+
+		/* We reserve the lowest 16 ids for fixed contexts. */
+		cid = max(cid, VMCI_RESERVED_CID_LIMIT - 1) + 1;
+		if (cid == VMCI_INVALID_ID)
+			cid = VMCI_RESERVED_CID_LIMIT;
+	}
+	ASSERT(!ctx_exists_locked(cid));
+	context->cid = cid;
+	context->validUser = user != NULL;
+	if (context->validUser)
+		context->user = *user;
+	list_add(&context->listItem, &ctx_list.head);
+	spin_unlock(&ctx_list.lock);
+
+	context->notify = NULL;
+	context->notifyPage = NULL;
+
+	*outContext = context;
+	return VMCI_SUCCESS;
+
+error:
+	if (context->notifierArray)
+		vmci_handle_arr_destroy(context->notifierArray);
+	if (context->queuePairArray)
+		vmci_handle_arr_destroy(context->queuePairArray);
+	if (context->doorbellArray)
+		vmci_handle_arr_destroy(context->doorbellArray);
+	if (context->pendingDoorbellArray)
+		vmci_handle_arr_destroy(context->pendingDoorbellArray);
+	kfree(context);
+	return result;
+}
+
+/*
+ * Dequeue VMCI context.
+ */
+void vmci_ctx_release_ctx(struct vmci_ctx *context)
+{
+	spin_lock(&ctx_list.lock);
+	list_del(&context->listItem);
+	spin_unlock(&ctx_list.lock);
+
+	vmci_ctx_release(context);
+}
+
+/*
+ * Fire notification for all contexts interested in given cid.
+ */
+static int ctx_fire_notification(uint32_t contextID,
+				 uint32_t privFlags)
+{
+	uint32_t i, arraySize;
+	struct vmci_ctx *subCtx;
+	struct vmci_handle_arr *subscriberArray;
+	struct vmci_handle contextHandle =
+		vmci_make_handle(contextID, VMCI_EVENT_HANDLER);
+
+	/*
+	 * We create an array to hold the subscribers we find when
+	 * scanning through all contexts.
+	 */
+	subscriberArray = vmci_handle_arr_create(0);
+	if (subscriberArray == NULL)
+		return VMCI_ERROR_NO_MEM;
+
+	/*
+	 * Scan all contexts to find who is interested in being
+	 * notified about given contextID. We have a special
+	 * firingLock that we use to synchronize across all
+	 * notification operations. This avoids us having to take the
+	 * context lock for each HasEntry call and it solves a lock
+	 * ranking issue.
+	 */
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&ctx_list.lock);
+	list_for_each_entry(subCtx, &ctx_list.head, listItem) {
+		/*
+		 * We only deliver notifications of the removal of
+		 * contexts, if the two contexts are allowed to
+		 * interact.
+		 */
+		if (vmci_handle_arr_has_entry
+		    (subCtx->notifierArray, contextHandle)
+		    && !vmci_deny_interaction(privFlags, subCtx->privFlags)) {
+			vmci_handle_arr_append_entry(&subscriberArray,
+						     vmci_make_handle
+						     (subCtx->cid,
+						      VMCI_EVENT_HANDLER));
+		}
+	}
+	spin_unlock(&ctx_list.lock);
+	spin_unlock(&ctx_list.firingLock);
+
+	/* Fire event to all subscribers. */
+	arraySize = vmci_handle_arr_get_size(subscriberArray);
+	for (i = 0; i < arraySize; i++) {
+		int result;
+		struct vmci_event_msg *eMsg;
+		struct vmci_event_payld_ctx *evPayload;
+		char buf[sizeof *eMsg + sizeof *evPayload];
+
+		eMsg = (struct vmci_event_msg *)buf;
+
+		/* Clear out any garbage. */
+		memset(eMsg, 0, sizeof *eMsg + sizeof *evPayload);
+		eMsg->hdr.dst = vmci_handle_arr_get_entry(subscriberArray, i);
+		eMsg->hdr.src =
+			vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+		eMsg->hdr.payloadSize =
+			sizeof *eMsg + sizeof *evPayload - sizeof eMsg->hdr;
+		eMsg->eventData.event = VMCI_EVENT_CTX_REMOVED;
+		evPayload = vmci_event_data_payload(&eMsg->eventData);
+		evPayload->contextID = contextID;
+
+		result = vmci_dg_dispatch(VMCI_HYPERVISOR_CONTEXT_ID,
+					  (struct vmci_dg *)
+					  eMsg, false);
+		if (result < VMCI_SUCCESS) {
+			pr_devel("Failed to enqueue event datagram " \
+				 "(type=%d) for context (ID=0x%x).",
+				 eMsg->eventData.event, eMsg->hdr.dst.context);
+			/* We continue to enqueue on next subscriber. */
+		}
+	}
+	vmci_handle_arr_destroy(subscriberArray);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Returns the current number of pending datagrams. The call may
+ * also serve as a synchronization point for the datagram queue,
+ * as no enqueue operations can occur concurrently.
+ */
+int vmci_ctx_pending_dgs(uint32_t cid,
+			 uint32_t *pending)
+{
+	struct vmci_ctx *context;
+
+	context = vmci_ctx_get(cid);
+	if (context == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	spin_lock(&context->lock);
+	if (pending)
+		*pending = context->pendingDatagrams;
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Queues a VMCI datagram for the appropriate target VM context.
+ */
+int vmci_ctx_enqueue_dg(uint32_t cid,
+			struct vmci_dg *dg)
+{
+	struct vmci_dg_queue_entry *dqEntry;
+	struct vmci_ctx *context;
+	struct vmci_handle dgSrc;
+	size_t vmciDgSize;
+
+	ASSERT(dg);
+	vmciDgSize = VMCI_DG_SIZE(dg);
+	ASSERT(vmciDgSize <= VMCI_MAX_DG_SIZE);
+
+	/* Get the target VM's VMCI context. */
+	context = vmci_ctx_get(cid);
+	if (context == NULL) {
+		pr_devel("Invalid context (ID=0x%x).", cid);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	/* Allocate guest call entry and add it to the target VM's queue. */
+	dqEntry = kmalloc(sizeof *dqEntry, GFP_KERNEL);
+	if (dqEntry == NULL) {
+		pr_warn("Failed to allocate memory for datagram.");
+		vmci_ctx_release(context);
+		return VMCI_ERROR_NO_MEM;
+	}
+	dqEntry->dg = dg;
+	dqEntry->dgSize = vmciDgSize;
+	dgSrc = dg->src;
+	INIT_LIST_HEAD(&dqEntry->listItem);
+
+	spin_lock(&context->lock);
+
+	/*
+	 * We put a higher limit on datagrams from the hypervisor.  If
+	 * the pending datagram is not from hypervisor, then we check
+	 * if enqueueing it would exceed the
+	 * VMCI_MAX_DATAGRAM_QUEUE_SIZE limit on the destination.  If
+	 * the pending datagram is from hypervisor, we allow it to be
+	 * queued at the destination side provided we don't reach the
+	 * VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE limit.
+	 */
+	if (context->datagramQueueSize + vmciDgSize >=
+	    VMCI_MAX_DATAGRAM_QUEUE_SIZE &&
+	    (!VMCI_HANDLE_EQUAL(dgSrc,
+				vmci_make_handle
+				(VMCI_HYPERVISOR_CONTEXT_ID,
+				 VMCI_CONTEXT_RESOURCE_ID))
+	     || context->datagramQueueSize + vmciDgSize >=
+	     VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE)) {
+		spin_unlock(&context->lock);
+		vmci_ctx_release(context);
+		kfree(dqEntry);
+		pr_devel("Context (ID=0x%x) receive queue is full.",
+			 cid);
+		return VMCI_ERROR_NO_RESOURCES;
+	}
+
+	list_add(&dqEntry->listItem, &context->datagramQueue);
+	context->pendingDatagrams++;
+	context->datagramQueueSize += vmciDgSize;
+	ctx_signal_notify(context);
+	wake_up(&context->hostContext.waitQueue);
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return vmciDgSize;
+}
+
+/*
+ * Verifies whether a context with the specified context ID exists.
+ */
+bool vmci_ctx_exists(uint32_t cid)
+{
+	bool rv;
+
+	spin_lock(&ctx_list.lock);
+	rv = ctx_exists_locked(cid);
+	spin_unlock(&ctx_list.lock);
+	return rv;
+}
+
+/*
+ * Retrieves VMCI context corresponding to the given cid.
+ */
+struct vmci_ctx *vmci_ctx_get(uint32_t cid)
+{
+	struct vmci_ctx *context = NULL;
+
+	if (cid == VMCI_INVALID_ID)
+		return NULL;
+
+	spin_lock(&ctx_list.lock);
+	list_for_each_entry(context, &ctx_list.head, listItem) {
+		if (context->cid == cid) {
+			/*
+			 * At this point, we are sure that the
+			 * reference count is larger already than
+			 * zero. When starting the destruction of a
+			 * context, we always remove it from the
+			 * context list before decreasing the
+			 * reference count. As we found the context
+			 * here, it hasn't been destroyed yet. This
+			 * means that we are not about to increase the
+			 * reference count of something that is in the
+			 * process of being destroyed.
+			 */
+
+			atomic_inc(&context->refCount);
+			break;
+		}
+	}
+	spin_unlock(&ctx_list.lock);
+
+	return (context && context->cid == cid) ? context : NULL;
+}
+
+/*
+ * Deallocates all parts of a context data structure. This
+ * function doesn't lock the context, because it assumes that
+ * the caller is holding the last reference to the context.
+ */
+static void ctx_free_ctx(struct vmci_ctx *context)
+{
+	struct list_head *curr;
+	struct list_head *next;
+	struct vmci_dg_queue_entry *dqEntry;
+	struct vmci_handle tempHandle;
+
+	/*
+	 * Fire event to all contexts interested in knowing this
+	 * context is dying.
+	 */
+	ctx_fire_notification(context->cid, context->privFlags);
+
+	/*
+	 * Cleanup all queue pair resources attached to context.  If
+	 * the VM dies without cleaning up, this code will make sure
+	 * that no resources are leaked.
+	 */
+	tempHandle = vmci_handle_arr_get_entry(context->queuePairArray, 0);
+	while (!VMCI_HANDLE_EQUAL(tempHandle, VMCI_INVALID_HANDLE)) {
+		if (vmci_qp_broker_detach(tempHandle, context) < VMCI_SUCCESS) {
+			/*
+			 * When vmci_qp_broker_detach() succeeds it
+			 * removes the handle from the array.  If
+			 * detach fails, we must remove the handle
+			 * ourselves.
+			 */
+			vmci_handle_arr_remove_entry(context->queuePairArray,
+						     tempHandle);
+		}
+		tempHandle =
+			vmci_handle_arr_get_entry(context->queuePairArray, 0);
+	}
+
+	/*
+	 * It is fine to destroy this without locking the callQueue, as
+	 * this is the only thread having a reference to the context.
+	 */
+	list_for_each_safe(curr, next, &context->datagramQueue) {
+		dqEntry =
+			list_entry(curr, struct vmci_dg_queue_entry, listItem);
+		list_del(curr);
+		ASSERT(dqEntry && dqEntry->dg);
+		ASSERT(dqEntry->dgSize == VMCI_DG_SIZE(dqEntry->dg));
+		kfree(dqEntry->dg);
+		kfree(dqEntry);
+	}
+
+	vmci_handle_arr_destroy(context->notifierArray);
+	vmci_handle_arr_destroy(context->queuePairArray);
+	vmci_handle_arr_destroy(context->doorbellArray);
+	vmci_handle_arr_destroy(context->pendingDoorbellArray);
+	vmci_ctx_unset_notify(context);
+	kfree(context);
+}
+
+/*
+ * Releases the VMCI context. If this is the last reference to
+ * the context it will be deallocated. A context is created with
+ * a reference count of one, and on destroy, it is removed from
+ * the context list before its reference count is
+ * decremented. Thus, if we reach zero, we are sure that nobody
+ * else is about to increment it (they need the entry in the
+ * context list for that). This function mustn't be called with a
+ * lock held.
+ */
+void vmci_ctx_release(struct vmci_ctx *context)
+{
+	ASSERT(context);
+	if (atomic_dec_and_test(&context->refCount))
+		ctx_free_ctx(context);
+}
+
+/*
+ * Dequeues the next datagram and returns it to the caller.
+ * The caller passes in a pointer to the max size datagram it
+ * can handle, and the datagram is only dequeued if its size is
+ * less than maxSize. If it is larger, maxSize is set to the size
+ * of the datagram to give the caller a chance to set up a larger
+ * buffer for the guestcall.
+ */
+int vmci_ctx_dequeue_dg(struct vmci_ctx *context,
+			size_t *maxSize,
+			struct vmci_dg **dg)
+{
+	struct vmci_dg_queue_entry *dqEntry;
+	struct list_head *listItem;
+	int rv;
+
+	ASSERT(context && dg);
+
+	/* Dequeue the next datagram entry. */
+	spin_lock(&context->lock);
+	if (context->pendingDatagrams == 0) {
+		ctx_clear_notify_call(context);
+		spin_unlock(&context->lock);
+		pr_devel("No datagrams pending.");
+		return VMCI_ERROR_NO_MORE_DATAGRAMS;
+	}
+
+	listItem = context->datagramQueue.next;
+	ASSERT(!list_empty(&context->datagramQueue));
+
+	dqEntry = list_entry(listItem, struct vmci_dg_queue_entry, listItem);
+	ASSERT(dqEntry->dg);
+
+	/* Check size of caller's buffer. */
+	if (*maxSize < dqEntry->dgSize) {
+		*maxSize = dqEntry->dgSize;
+		spin_unlock(&context->lock);
+		pr_devel("Caller's buffer should be at least " \
+			 "(size=%u bytes).", (uint32_t) *maxSize);
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	list_del(listItem);
+	context->pendingDatagrams--;
+	context->datagramQueueSize -= dqEntry->dgSize;
+	if (context->pendingDatagrams == 0) {
+		ctx_clear_notify_call(context);
+		rv = VMCI_SUCCESS;
+	} else {
+		/*
+		 * Return the size of the next datagram.
+		 */
+		struct vmci_dg_queue_entry *nextEntry;
+
+		listItem = context->datagramQueue.next;
+		ASSERT(!list_empty(&context->datagramQueue));
+		nextEntry = list_entry(listItem, struct vmci_dg_queue_entry,
+				       listItem);
+		ASSERT(nextEntry && nextEntry->dg);
+
+		/*
+		 * The following size_t -> int truncation is fine as
+		 * the maximum size of a (routable) datagram is 68KB.
+		 */
+		rv = (int)nextEntry->dgSize;
+	}
+	spin_unlock(&context->lock);
+
+	/* Caller must free datagram. */
+	ASSERT(dqEntry->dgSize == VMCI_DG_SIZE(dqEntry->dg));
+	*dg = dqEntry->dg;
+	dqEntry->dg = NULL;
+	kfree(dqEntry);
+
+	return rv;
+}
+
+/*
+ * Reverts actions set up by vmci_setup_notify().  Unmaps and unlocks the
+ * page mapped/locked by vmci_setup_notify().
+ */
+void vmci_ctx_unset_notify(struct vmci_ctx *context)
+{
+	struct page *notifyPage = context->notifyPage;
+
+	if (!notifyPage)
+		return;
+
+	context->notify = NULL;
+	context->notifyPage = NULL;
+	kunmap(notifyPage);
+	put_page(notifyPage);
+
+}
+
+uint32_t vmci_ctx_get_id(struct vmci_ctx *context)
+{
+	if (!context)
+		return VMCI_INVALID_ID;
+
+	ASSERT(context->cid != VMCI_INVALID_ID);
+	return context->cid;
+}
+
+/*
+ * Add remoteCID to the list of contexts the current context wants
+ * notifications from/about.
+ */
+int vmci_ctx_add_notification(uint32_t contextID,
+			      uint32_t remoteCID)
+{
+	int result = VMCI_ERROR_ALREADY_EXISTS;
+	struct vmci_handle notifierHandle;
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	if (VMCI_CONTEXT_IS_VM(contextID) && VMCI_CONTEXT_IS_VM(remoteCID)) {
+		pr_devel("Context removed notifications for other VMs not " \
+			 "supported (src=0x%x, remote=0x%x).",
+			 contextID, remoteCID);
+		result = VMCI_ERROR_DST_UNREACHABLE;
+		goto out;
+	}
+
+	if (context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED) {
+		result = VMCI_ERROR_NO_ACCESS;
+		goto out;
+	}
+
+	notifierHandle = vmci_make_handle(remoteCID, VMCI_EVENT_HANDLER);
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&context->lock);
+	if (!vmci_handle_arr_has_entry(context->notifierArray,
+				       notifierHandle)) {
+		vmci_handle_arr_append_entry(&context->notifierArray,
+					     notifierHandle);
+		result = VMCI_SUCCESS;
+	}
+	spin_unlock(&context->lock);
+	spin_unlock(&ctx_list.firingLock);
+
+ out:
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Remove remoteCID from the current context's list of contexts it is
+ * interested in getting notifications from/about.
+ */
+int vmci_ctx_remove_notification(uint32_t contextID,
+				 uint32_t remoteCID)
+{
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+	struct vmci_handle tmpHandle;
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&context->lock);
+	tmpHandle = vmci_make_handle(remoteCID, VMCI_EVENT_HANDLER);
+	tmpHandle = vmci_handle_arr_remove_entry(context->notifierArray,
+						 tmpHandle);
+	spin_unlock(&context->lock);
+	spin_unlock(&ctx_list.firingLock);
+	vmci_ctx_release(context);
+
+	if (VMCI_HANDLE_EQUAL(tmpHandle, VMCI_INVALID_HANDLE))
+		return VMCI_ERROR_NOT_FOUND;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Get current context's checkpoint state of given type.
+ */
+int vmci_ctx_get_chkpt_state(uint32_t contextID,
+			     uint32_t cptType,
+			     uint32_t *bufSize,
+			     char **cptBufPtr)
+{
+	int i, result;
+	uint32_t arraySize, cptDataSize;
+	struct vmci_handle_arr *array;
+	struct vmci_ctx *context;
+	char *cptBuf;
+	bool getContextID;
+
+	ASSERT(bufSize && cptBufPtr);
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	if (cptType == VMCI_NOTIFICATION_CPT_STATE) {
+		ASSERT(context->notifierArray);
+		array = context->notifierArray;
+		getContextID = true;
+	} else if (cptType == VMCI_WELLKNOWN_CPT_STATE) {
+		/*
+		 * For compatibility with VMX'en with VM to VM communication, we
+		 * always return zero wellknown handles.
+		 */
+
+		*bufSize = 0;
+		*cptBufPtr = NULL;
+		result = VMCI_SUCCESS;
+		goto release;
+	} else if (cptType == VMCI_DOORBELL_CPT_STATE) {
+		ASSERT(context->doorbellArray);
+		array = context->doorbellArray;
+		getContextID = false;
+	} else {
+		pr_devel("Invalid cpt state (type=%d).", cptType);
+		result = VMCI_ERROR_INVALID_ARGS;
+		goto release;
+	}
+
+	arraySize = vmci_handle_arr_get_size(array);
+	if (arraySize > 0) {
+		if (cptType == VMCI_DOORBELL_CPT_STATE) {
+			cptDataSize =
+				arraySize * sizeof(struct dbell_cpt_state);
+		} else {
+			cptDataSize = arraySize * sizeof(uint32_t);
+		}
+
+		if (*bufSize < cptDataSize) {
+			*bufSize = cptDataSize;
+			result = VMCI_ERROR_MORE_DATA;
+			goto release;
+		}
+
+		cptBuf = kmalloc(cptDataSize, GFP_ATOMIC);
+
+		if (cptBuf == NULL) {
+			result = VMCI_ERROR_NO_MEM;
+			goto release;
+		}
+
+		for (i = 0; i < arraySize; i++) {
+			struct vmci_handle tmpHandle =
+				vmci_handle_arr_get_entry(array, i);
+			if (cptType == VMCI_DOORBELL_CPT_STATE) {
+				((struct dbell_cpt_state *)cptBuf)[i].handle =
+					tmpHandle;
+			} else {
+				((uint32_t *) cptBuf)[i] =
+					getContextID ? tmpHandle.context :
+					tmpHandle.resource;
+			}
+		}
+		*bufSize = cptDataSize;
+		*cptBufPtr = cptBuf;
+	} else {
+		*bufSize = 0;
+		*cptBufPtr = NULL;
+	}
+	result = VMCI_SUCCESS;
+
+release:
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Set current context's checkpoint state of given type.
+ */
+int vmci_ctx_set_chkpt_state(uint32_t contextID,
+			     uint32_t cptType,
+			     uint32_t bufSize,
+			     char *cptBuf)
+{
+	uint32_t i;
+	uint32_t currentID;
+	int result = VMCI_SUCCESS;
+	uint32_t numIDs = bufSize / sizeof(uint32_t);
+	ASSERT(cptBuf);
+
+	if (cptType == VMCI_WELLKNOWN_CPT_STATE && numIDs > 0) {
+		/*
+		 * We would end up here if VMX with VM to VM communication
+		 * attempts to restore a checkpoint with wellknown handles.
+		 */
+		pr_warn("Attempt to restore checkpoint with obsolete " \
+			"wellknown handles.");
+		return VMCI_ERROR_OBSOLETE;
+	}
+
+	if (cptType != VMCI_NOTIFICATION_CPT_STATE) {
+		pr_devel("Invalid cpt state (type=%d).", cptType);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	for (i = 0; i < numIDs && result == VMCI_SUCCESS; i++) {
+		currentID = ((uint32_t *) cptBuf)[i];
+		result = vmci_ctx_add_notification(contextID, currentID);
+		if (result != VMCI_SUCCESS)
+			break;
+	}
+	if (result != VMCI_SUCCESS)
+		pr_devel("Failed to set cpt state (type=%d) " \
+			 "(error=%d).", cptType, result);
+
+	return result;
+}
+
+/*
+ * Retrieves the specified context's pending notifications in the
+ * form of a handle array. The handle arrays returned are the
+ * actual data, not a copy, and should not be modified by the
+ * caller. They must be released using
+ * vmci_ctx_rcv_notifications_release.
+ */
+int vmci_ctx_rcv_notifications_get(uint32_t contextID,
+				   struct vmci_handle_arr **dbHandleArray,
+				   struct vmci_handle_arr **qpHandleArray)
+{
+	struct vmci_ctx *context;
+	int result = VMCI_SUCCESS;
+
+	ASSERT(dbHandleArray && qpHandleArray);
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+
+	*dbHandleArray = context->pendingDoorbellArray;
+	context->pendingDoorbellArray = vmci_handle_arr_create(0);
+	if (!context->pendingDoorbellArray) {
+		context->pendingDoorbellArray = *dbHandleArray;
+		*dbHandleArray = NULL;
+		result = VMCI_ERROR_NO_MEM;
+	}
+	*qpHandleArray = NULL;
+
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Releases handle arrays with pending notifications previously
+ * retrieved using vmci_ctx_rcv_notifications_get. If the
+ * notifications were not successfully handed over to the guest,
+ * success must be false.
+ */
+void vmci_ctx_rcv_notifications_release(uint32_t contextID,
+					struct vmci_handle_arr *dbHandleArray,
+					struct vmci_handle_arr *qpHandleArray,
+					bool success)
+{
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+
+	if (!context) {
+		/*
+		 * The OS driver part is holding on to the context for the
+		 * duration of the receive notification ioctl, so it should
+		 * still be here.
+		 */
+		ASSERT(false);
+	}
+
+	spin_lock(&context->lock);
+	if (!success) {
+		struct vmci_handle handle;
+
+		/*
+		 * New notifications may have been added while we were not
+		 * holding the context lock, so we transfer any new pending
+		 * doorbell notifications to the old array, and reinstate the
+		 * old array.
+		 */
+
+		handle = vmci_handle_arr_remove_tail(
+			context->pendingDoorbellArray);
+		while (!VMCI_HANDLE_INVALID(handle)) {
+			ASSERT(vmci_handle_arr_has_entry
+			       (context->doorbellArray, handle));
+			if (!vmci_handle_arr_has_entry
+			    (dbHandleArray, handle)) {
+				vmci_handle_arr_append_entry
+					(&dbHandleArray, handle);
+			}
+			handle = vmci_handle_arr_remove_tail(
+				context->pendingDoorbellArray);
+		}
+		vmci_handle_arr_destroy(context->pendingDoorbellArray);
+		context->pendingDoorbellArray = dbHandleArray;
+		dbHandleArray = NULL;
+	} else {
+		ctx_clear_notify_call(context);
+	}
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	if (dbHandleArray)
+		vmci_handle_arr_destroy(dbHandleArray);
+
+	if (qpHandleArray)
+		vmci_handle_arr_destroy(qpHandleArray);
+}
+
+/*
+ * Registers that a new doorbell handle has been allocated by the
+ * context. Only registered doorbell handles can be notified.
+ */
+int vmci_ctx_dbell_create(uint32_t contextID,
+			  struct vmci_handle handle)
+{
+	struct vmci_ctx *context;
+	int result;
+
+	if (contextID == VMCI_INVALID_ID || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	if (!vmci_handle_arr_has_entry(context->doorbellArray, handle)) {
+		vmci_handle_arr_append_entry(&context->doorbellArray, handle);
+		result = VMCI_SUCCESS;
+	} else {
+		result = VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Unregisters a doorbell handle that was previously registered
+ * with vmci_ctx_dbell_create.
+ */
+int vmci_ctx_dbell_destroy(uint32_t contextID,
+			   struct vmci_handle handle)
+{
+	struct vmci_ctx *context;
+	struct vmci_handle removedHandle;
+
+	if (contextID == VMCI_INVALID_ID || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	removedHandle =
+		vmci_handle_arr_remove_entry(context->doorbellArray, handle);
+	vmci_handle_arr_remove_entry(context->pendingDoorbellArray, handle);
+	spin_unlock(&context->lock);
+
+	vmci_ctx_release(context);
+
+	return VMCI_HANDLE_INVALID(removedHandle) ?
+		VMCI_ERROR_NOT_FOUND : VMCI_SUCCESS;
+}
+
+/*
+ * Unregisters all doorbell handles that were previously
+ * registered with vmci_ctx_dbell_create.
+ */
+int vmci_ctx_dbell_destroy_all(uint32_t contextID)
+{
+	struct vmci_ctx *context;
+	struct vmci_handle handle;
+
+	if (contextID == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	do {
+		struct vmci_handle_arr *arr = context->doorbellArray;
+		handle = vmci_handle_arr_remove_tail(arr);
+	} while (!VMCI_HANDLE_INVALID(handle));
+	do {
+		struct vmci_handle_arr *arr = context->pendingDoorbellArray;
+		handle = vmci_handle_arr_remove_tail(arr);
+	} while (!VMCI_HANDLE_INVALID(handle));
+	spin_unlock(&context->lock);
+
+	vmci_ctx_release(context);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Registers a notification of a doorbell handle initiated by the
+ * specified source context. Doorbell notifications are
+ * subject to the same isolation rules as datagram delivery. To
+ * allow host side senders of notifications a finer granularity
+ * of sender rights than those assigned to the sending context
+ * itself, the host context is required to specify a different
+ * set of privilege flags that will override the privileges of
+ * the source context.
+ */
+int vmci_ctx_notify_dbell(uint32_t srcCID,
+			  struct vmci_handle handle,
+			  uint32_t srcPrivFlags)
+{
+	struct vmci_ctx *dstContext;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/* Get the target VM's VMCI context. */
+	dstContext = vmci_ctx_get(handle.context);
+	if (dstContext == NULL) {
+		pr_devel("Invalid context (ID=0x%x).", handle.context);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	if (srcCID != handle.context) {
+		uint32_t dstPrivFlags;
+
+		if (VMCI_CONTEXT_IS_VM(srcCID)
+		    && VMCI_CONTEXT_IS_VM(handle.context)) {
+			pr_devel("Doorbell notification from VM to VM not " \
+				 "supported (src=0x%x, dst=0x%x).", srcCID,
+				 handle.context);
+			result = VMCI_ERROR_DST_UNREACHABLE;
+			goto out;
+		}
+
+		result = vmci_dbell_get_priv_flags(handle, &dstPrivFlags);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("Failed to get privilege flags for " \
+				"destination (handle=0x%x:0x%x).",
+				handle.context, handle.resource);
+			goto out;
+		}
+
+		if (srcCID != VMCI_HOST_CONTEXT_ID ||
+		    srcPrivFlags == VMCI_NO_PRIVILEGE_FLAGS) {
+			srcPrivFlags = VMCIContext_GetPrivFlags(srcCID);
+		}
+
+		if (vmci_deny_interaction(srcPrivFlags, dstPrivFlags)) {
+			result = VMCI_ERROR_NO_ACCESS;
+			goto out;
+		}
+	}
+
+	if (handle.context == VMCI_HOST_CONTEXT_ID) {
+		result = vmci_dbell_host_context_notify(srcCID, handle);
+	} else {
+		spin_lock(&dstContext->lock);
+
+		if (!vmci_handle_arr_has_entry
+		    (dstContext->doorbellArray, handle)) {
+			result = VMCI_ERROR_NOT_FOUND;
+		} else {
+			if (!vmci_handle_arr_has_entry
+			    (dstContext->pendingDoorbellArray, handle)) {
+				vmci_handle_arr_append_entry
+					(&dstContext->pendingDoorbellArray,
+					 handle);
+
+				ctx_signal_notify(dstContext);
+				wake_up(&dstContext->hostContext.waitQueue);
+
+			}
+			result = VMCI_SUCCESS;
+		}
+		spin_unlock(&dstContext->lock);
+	}
+
+out:
+	vmci_ctx_release(dstContext);
+
+	return result;
+}
+
+static int ctx_compare_user(uid_t *user1, uid_t *user2)
+{
+	if (!user1 || !user2)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	return (*user1 == *user2) ? VMCI_SUCCESS : VMCI_ERROR_GENERIC;
+}
+
+bool vmci_ctx_supports_host_qp(struct vmci_ctx *context)
+{
+	return context && context->userVersion >= VMCI_VERSION_HOSTQP;
+}
+
+/*
+ * Registers that a new queue pair handle has been allocated by
+ * the context.
+ */
+int vmci_ctx_qp_create(struct vmci_ctx *context,
+		       struct vmci_handle handle)
+{
+	int result;
+
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (!vmci_handle_arr_has_entry(context->queuePairArray, handle)) {
+		vmci_handle_arr_append_entry(&context->queuePairArray, handle);
+		result = VMCI_SUCCESS;
+	} else {
+		result = VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	return result;
+}
+
+/*
+ * Unregisters a queue pair handle that was previously registered
+ * with vmci_ctx_qp_create.
+ */
+int vmci_ctx_qp_destroy(struct vmci_ctx *context,
+			struct vmci_handle handle)
+{
+	struct vmci_handle hndl;
+
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	hndl = vmci_handle_arr_remove_entry(context->queuePairArray, handle);
+
+	return VMCI_HANDLE_INVALID(hndl) ?
+		VMCI_ERROR_NOT_FOUND : VMCI_SUCCESS;
+}
+
+/*
+ * Determines whether a given queue pair handle is registered
+ * with the given context.
+ */
+bool vmci_ctx_qp_exists(struct vmci_ctx *context,
+			struct vmci_handle handle)
+{
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return false;
+
+	return vmci_handle_arr_has_entry(context->queuePairArray, handle);
+}
+
+/**
+ * VMCIContext_GetPrivFlags() - Retrieve privilege flags.
+ * @contextID:	The context ID of the VMCI context.
+ *
+ * Retrieves privilege flags of the given VMCI context ID.
+ */
+uint32_t VMCIContext_GetPrivFlags(uint32_t contextID)
+{
+	if (vmci_host_code_active()) {
+		uint32_t flags;
+		struct vmci_ctx *context;
+
+		context = vmci_ctx_get(contextID);
+		if (!context)
+			return VMCI_LEAST_PRIVILEGE_FLAGS;
+
+		flags = context->privFlags;
+		vmci_ctx_release(context);
+		return flags;
+	}
+	return VMCI_NO_PRIVILEGE_FLAGS;
+}
+EXPORT_SYMBOL(VMCIContext_GetPrivFlags);
+
+/**
+ * VMCI_ContextID2HostVmID() - Map CID to HostID
+ * @contextID:	Context ID of VMCI context.
+ * @hostVmID:	Host VM ID data
+ * @hostVmIDLen:	Length of Host VM ID Data.
+ *
+ * Maps a context ID to the host specific (process/world) ID
+ * of the VM/VMX.  This function is not used on Linux systems
+ * and should be ignored.
+ */
+int VMCI_ContextID2HostVmID(uint32_t contextID,
+			    void *hostVmID,
+			    size_t hostVmIDLen)
+{
+	return VMCI_ERROR_UNAVAILABLE;
+}
+EXPORT_SYMBOL(VMCI_ContextID2HostVmID);
+
+/**
+ * VMCI_IsContextOwner() - Determines if the user is the context owner
+ * @contextID:	The context ID of the VMCI context.
+ * @hostUser:	The user as a void pointer.
+ *
+ * Determines whether a given host OS specific representation of
+ * user is the owner of the VM/VMX.
+ */
+int VMCI_IsContextOwner(uint32_t contextID,
+			void *hostUser)
+{
+	if (vmci_host_code_active()) {
+		struct vmci_ctx *context;
+		uid_t *user = hostUser;
+		int retval;
+
+		if (!hostUser)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		context = vmci_ctx_get(contextID);
+		if (!context)
+			return VMCI_ERROR_NOT_FOUND;
+
+		if (context->validUser)
+			retval = ctx_compare_user(user, &context->user);
+		else
+			retval = VMCI_ERROR_UNAVAILABLE;
+
+		vmci_ctx_release(context);
+		return retval;
+	}
+	return VMCI_ERROR_UNAVAILABLE;
+}
+EXPORT_SYMBOL(VMCI_IsContextOwner);
diff --git a/drivers/misc/vmw_vmci/vmci_context.h b/drivers/misc/vmw_vmci/vmci_context.h
new file mode 100644
index 0000000..0b80a5d
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_context.h
@@ -0,0 +1,161 @@
+/*
+ * VMware VMCI driver (vmci_context.h)
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_CONTEXT_H_
+#define _VMCI_CONTEXT_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_datagram.h"
+#include "vmci_common_int.h"
+#include "vmci_handle_array.h"
+
+/* Used to determine what checkpoint state to get and set. */
+enum {
+	VMCI_NOTIFICATION_CPT_STATE = 1,
+	VMCI_WELLKNOWN_CPT_STATE    = 2,
+	VMCI_DG_OUT_STATE           = 3,
+	VMCI_DG_IN_STATE            = 4,
+	VMCI_DG_IN_SIZE_STATE       = 5,
+	VMCI_DOORBELL_CPT_STATE     = 6,
+};
+
+/* Host specific struct used for signalling */
+struct vmci_host {
+	wait_queue_head_t waitQueue;
+};
+
+struct vmci_ctx {
+	struct list_head listItem;	/* For global VMCI list. */
+	uint32_t cid;
+	atomic_t refCount;
+	struct list_head datagramQueue;	/* Head of per VM queue. */
+	uint32_t pendingDatagrams;
+	size_t datagramQueueSize;	/* Size of datagram queue in bytes. */
+
+	/*
+	 * Version of the code that created
+	 * this context; e.g., VMX.
+	 */
+	int userVersion;
+	spinlock_t lock;  /* Locks callQueue and handleArrays. */
+
+	/*
+	 * QueuePairs attached to.  The array of
+	 * handles for queue pairs is accessed
+	 * from the code for QP API, and there
+	 * it is protected by the QP lock.  It
+	 * is also accessed from the context
+	 * clean up path, which does not
+	 * require a lock.  VMCILock is not
+	 * used to protect the QP array field.
+	 */
+	struct vmci_handle_arr *queuePairArray;
+
+	/* Doorbells created by context. */
+	struct vmci_handle_arr *doorbellArray;
+
+	/* Doorbells pending for context. */
+	struct vmci_handle_arr *pendingDoorbellArray;
+
+	/* Contexts current context is subscribing to. */
+	struct vmci_handle_arr *notifierArray;
+	struct vmci_host hostContext;
+	uint32_t privFlags;
+	uid_t user;
+	bool validUser;
+	bool *notify;		/* Notify flag pointer - hosted only. */
+	struct page *notifyPage;	/* Page backing the notify UVA. */
+};
+
+/* VMCINotifyAddRemoveInfo: Used to add/remove remote context notifications. */
+struct vmci_ctx_info {
+	uint32_t remoteCID;
+	int result;
+};
+
+/* VMCICptBufInfo: Used to set/get current context's checkpoint state. */
+struct vmci_ctx_chkpt_buf_info {
+	uint64_t cptBuf;
+	uint32_t cptType;
+	uint32_t bufSize;
+	int32_t result;
+	uint32_t _pad;
+};
+
+/*
+ * VMCINotificationReceiveInfo: Used to receive pending notifications
+ * for doorbells and queue pairs.
+ */
+struct vmci_ctx_notify_recv_info {
+	uint64_t dbHandleBufUVA;
+	uint64_t dbHandleBufSize;
+	uint64_t qpHandleBufUVA;
+	uint64_t qpHandleBufSize;
+	int32_t result;
+	uint32_t _pad;
+};
+
+int vmci_ctx_init(void);
+int vmci_ctx_init_ctx(uint32_t cid, uint32_t flags,
+		      uintptr_t eventHnd, int version,
+		      uid_t *user, struct vmci_ctx **context);
+
+bool vmci_ctx_supports_host_qp(struct vmci_ctx *context);
+void vmci_ctx_release_ctx(struct vmci_ctx *context);
+int vmci_ctx_enqueue_dg(uint32_t cid, struct vmci_dg *dg);
+int vmci_ctx_dequeue_dg(struct vmci_ctx *context,
+			size_t *maxSize, struct vmci_dg **dg);
+int vmci_ctx_pending_dgs(uint32_t cid, uint32_t *pending);
+struct vmci_ctx *vmci_ctx_get(uint32_t cid);
+void vmci_ctx_release(struct vmci_ctx *context);
+bool vmci_ctx_exists(uint32_t cid);
+
+uint32_t vmci_ctx_get_id(struct vmci_ctx *context);
+int vmci_ctx_add_notification(uint32_t contextID, uint32_t remoteCID);
+int vmci_ctx_remove_notification(uint32_t contextID, uint32_t remoteCID);
+int vmci_ctx_get_chkpt_state(uint32_t contextID, uint32_t cptType,
+			     uint32_t *numCIDs, char **cptBufPtr);
+int vmci_ctx_set_chkpt_state(uint32_t contextID, uint32_t cptType,
+			     uint32_t numCIDs, char *cptBuf);
+
+int vmci_ctx_qp_create(struct vmci_ctx *context,
+		       struct vmci_handle handle);
+int vmci_ctx_qp_destroy(struct vmci_ctx *context,
+			struct vmci_handle handle);
+bool vmci_ctx_qp_exists(struct vmci_ctx *context,
+			struct vmci_handle handle);
+
+void vmci_ctx_check_signal_notify(struct vmci_ctx *context);
+void vmci_ctx_unset_notify(struct vmci_ctx *context);
+
+int vmci_ctx_dbell_create(uint32_t contextID, struct vmci_handle handle);
+int vmci_ctx_dbell_destroy(uint32_t contextID, struct vmci_handle handle);
+int vmci_ctx_dbell_destroy_all(uint32_t contextID);
+int vmci_ctx_notify_dbell(uint32_t cid, struct vmci_handle handle,
+			  uint32_t srcPrivFlags);
+
+int vmci_ctx_rcv_notifications_get(uint32_t contextID, struct vmci_handle_arr
+				   **dbHandleArray, struct vmci_handle_arr
+				   **qpHandleArray);
+void
+vmci_ctx_rcv_notifications_release(uint32_t contextID, struct vmci_handle_arr
+				   *dbHandleArray, struct vmci_handle_arr
+				   *qpHandleArray, bool success);
+#endif /* _VMCI_CONTEXT_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 01/11] Apply VMCI context code
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, Andrew Stiegmann (stieg), cschamp, gregkh

Context code maintains state for vmci and allows the driver
to communicate with multiple VMs.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_context.c | 1269 ++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_context.h |  161 +++++
 2 files changed, 1430 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_context.h

diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
new file mode 100644
index 0000000..46faf10
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_context.c
@@ -0,0 +1,1269 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/highmem.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_queue_pair.h"
+
+/*
+ * List of current VMCI contexts.
+ */
+static struct {
+	struct list_head head;
+	spinlock_t lock;
+	spinlock_t firingLock;
+} ctx_list;
+
+
+static void ctx_signal_notify(struct vmci_ctx *context)
+{
+	if (context->notify)
+		*context->notify = true;
+}
+
+static void ctx_clear_notify(struct vmci_ctx *context)
+{
+	if (context->notify)
+		*context->notify = false;
+}
+
+/*
+ * If nothing requires the attention of the guest, clears both
+ * the notify flag and the call.
+ */
+static void ctx_clear_notify_call(struct vmci_ctx *context)
+{
+	if (context->pendingDatagrams == 0 &&
+	    vmci_handle_arr_get_size(context->pendingDoorbellArray) == 0)
+		ctx_clear_notify(context);
+}
+
+/*
+ * Check if a context with the specified context ID exists.
+ * Assumes the ctx_list.lock is held.
+ */
+static bool ctx_exists_locked(uint32_t cid)
+{
+	struct vmci_ctx *context;
+
+	list_for_each_entry(context, &ctx_list.head, listItem) {
+		if (context->cid == cid)
+			return true;
+	}
+
+	return false;
+}
+
+/*
+ * Sets the context's notify flag iff datagrams are pending for this
+ * context.  Called from vmci_setup_notify().
+ */
+void vmci_ctx_check_signal_notify(struct vmci_ctx *context)
+{
+	ASSERT(context);
+
+	spin_lock(&ctx_list.lock);
+	if (context->pendingDatagrams)
+		ctx_signal_notify(context);
+	spin_unlock(&ctx_list.lock);
+}
+
+int __init vmci_ctx_init(void)
+{
+	INIT_LIST_HEAD(&ctx_list.head);
+
+	spin_lock_init(&ctx_list.lock);
+	spin_lock_init(&ctx_list.firingLock);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Allocates and initializes a VMCI context.
+ */
+int vmci_ctx_init_ctx(uint32_t cid,
+		      uint32_t privFlags,
+		      uintptr_t eventHnd,
+		      int userVersion,
+		      uid_t *user, struct vmci_ctx **outContext)
+{
+	struct vmci_ctx *context;
+	int result;
+
+	if (privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS) {
+		pr_devel("Invalid flag (flags=0x%x) for VMCI context.",
+			 privFlags);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	if (userVersion == 0)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = kzalloc(sizeof *context, GFP_KERNEL);
+	if (context == NULL) {
+		pr_warn("Failed to allocate memory for VMCI context.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	INIT_LIST_HEAD(&context->listItem);
+	INIT_LIST_HEAD(&context->datagramQueue);
+
+	context->userVersion = userVersion;
+
+	context->queuePairArray = vmci_handle_arr_create(0);
+	if (!context->queuePairArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->doorbellArray = vmci_handle_arr_create(0);
+	if (!context->doorbellArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->pendingDoorbellArray = vmci_handle_arr_create(0);
+	if (!context->pendingDoorbellArray) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	context->notifierArray = vmci_handle_arr_create(0);
+	if (context->notifierArray == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	spin_lock_init(&context->lock);
+
+	atomic_set(&context->refCount, 1);
+
+	/* Initialize host-specific VMCI context. */
+	init_waitqueue_head(&context->hostContext.waitQueue);
+
+	context->privFlags = privFlags;
+
+	/*
+	 * If we collide with an existing context we generate a new
+	 * one and use it instead. The VMX will determine if regeneration
+	 * is okay. Since there aren't 4B - 16 VMs running on a given
+	 * host, the loop below will terminate.
+	 */
+	spin_lock(&ctx_list.lock);
+	ASSERT(cid != VMCI_INVALID_ID);
+	while (ctx_exists_locked(cid)) {
+
+		/*
+		 * If the cid is below our limit and we collide we are
+		 * creating duplicate contexts internally so we want
+		 * to assert fail in that case.
+		 */
+		ASSERT(cid >= VMCI_RESERVED_CID_LIMIT);
+
+		/* We reserve the lowest 16 ids for fixed contexts. */
+		cid = max(cid, VMCI_RESERVED_CID_LIMIT - 1) + 1;
+		if (cid == VMCI_INVALID_ID)
+			cid = VMCI_RESERVED_CID_LIMIT;
+	}
+	ASSERT(!ctx_exists_locked(cid));
+	context->cid = cid;
+	context->validUser = user != NULL;
+	if (context->validUser)
+		context->user = *user;
+	list_add(&context->listItem, &ctx_list.head);
+	spin_unlock(&ctx_list.lock);
+
+	context->notify = NULL;
+	context->notifyPage = NULL;
+
+	*outContext = context;
+	return VMCI_SUCCESS;
+
+error:
+	if (context->notifierArray)
+		vmci_handle_arr_destroy(context->notifierArray);
+	if (context->queuePairArray)
+		vmci_handle_arr_destroy(context->queuePairArray);
+	if (context->doorbellArray)
+		vmci_handle_arr_destroy(context->doorbellArray);
+	if (context->pendingDoorbellArray)
+		vmci_handle_arr_destroy(context->pendingDoorbellArray);
+	kfree(context);
+	return result;
+}
+
+/*
+ * Dequeue VMCI context.
+ */
+void vmci_ctx_release_ctx(struct vmci_ctx *context)
+{
+	spin_lock(&ctx_list.lock);
+	list_del(&context->listItem);
+	spin_unlock(&ctx_list.lock);
+
+	vmci_ctx_release(context);
+}
+
+/*
+ * Fire notification for all contexts interested in given cid.
+ */
+static int ctx_fire_notification(uint32_t contextID,
+				 uint32_t privFlags)
+{
+	uint32_t i, arraySize;
+	struct vmci_ctx *subCtx;
+	struct vmci_handle_arr *subscriberArray;
+	struct vmci_handle contextHandle =
+		vmci_make_handle(contextID, VMCI_EVENT_HANDLER);
+
+	/*
+	 * We create an array to hold the subscribers we find when
+	 * scanning through all contexts.
+	 */
+	subscriberArray = vmci_handle_arr_create(0);
+	if (subscriberArray == NULL)
+		return VMCI_ERROR_NO_MEM;
+
+	/*
+	 * Scan all contexts to find who is interested in being
+	 * notified about given contextID. We have a special
+	 * firingLock that we use to synchronize across all
+	 * notification operations. This avoids us having to take the
+	 * context lock for each HasEntry call and it solves a lock
+	 * ranking issue.
+	 */
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&ctx_list.lock);
+	list_for_each_entry(subCtx, &ctx_list.head, listItem) {
+		/*
+		 * We only deliver notifications of the removal of
+		 * contexts if the two contexts are allowed to
+		 * interact.
+		 */
+		if (vmci_handle_arr_has_entry
+		    (subCtx->notifierArray, contextHandle)
+		    && !vmci_deny_interaction(privFlags, subCtx->privFlags)) {
+			vmci_handle_arr_append_entry(&subscriberArray,
+						     vmci_make_handle
+						     (subCtx->cid,
+						      VMCI_EVENT_HANDLER));
+		}
+	}
+	spin_unlock(&ctx_list.lock);
+	spin_unlock(&ctx_list.firingLock);
+
+	/* Fire event to all subscribers. */
+	arraySize = vmci_handle_arr_get_size(subscriberArray);
+	for (i = 0; i < arraySize; i++) {
+		int result;
+		struct vmci_event_msg *eMsg;
+		struct vmci_event_payld_ctx *evPayload;
+		char buf[sizeof *eMsg + sizeof *evPayload];
+
+		eMsg = (struct vmci_event_msg *)buf;
+
+		/* Clear out any garbage. */
+		memset(eMsg, 0, sizeof *eMsg + sizeof *evPayload);
+		eMsg->hdr.dst = vmci_handle_arr_get_entry(subscriberArray, i);
+		eMsg->hdr.src =
+			vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+		eMsg->hdr.payloadSize =
+			sizeof *eMsg + sizeof *evPayload - sizeof eMsg->hdr;
+		eMsg->eventData.event = VMCI_EVENT_CTX_REMOVED;
+		evPayload = vmci_event_data_payload(&eMsg->eventData);
+		evPayload->contextID = contextID;
+
+		result = vmci_dg_dispatch(VMCI_HYPERVISOR_CONTEXT_ID,
+					  (struct vmci_dg *)
+					  eMsg, false);
+		if (result < VMCI_SUCCESS) {
+			pr_devel("Failed to enqueue event datagram " \
+				 "(type=%d) for context (ID=0x%x).",
+				 eMsg->eventData.event, eMsg->hdr.dst.context);
+			/* We continue to enqueue on next subscriber. */
+		}
+	}
+	vmci_handle_arr_destroy(subscriberArray);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Returns the current number of pending datagrams. The call may
+ * also serve as a synchronization point for the datagram queue,
+ * as no enqueue operations can occur concurrently.
+ */
+int vmci_ctx_pending_dgs(uint32_t cid,
+			 uint32_t *pending)
+{
+	struct vmci_ctx *context;
+
+	context = vmci_ctx_get(cid);
+	if (context == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	spin_lock(&context->lock);
+	if (pending)
+		*pending = context->pendingDatagrams;
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Queues a VMCI datagram for the appropriate target VM context.
+ */
+int vmci_ctx_enqueue_dg(uint32_t cid,
+			struct vmci_dg *dg)
+{
+	struct vmci_dg_queue_entry *dqEntry;
+	struct vmci_ctx *context;
+	struct vmci_handle dgSrc;
+	size_t vmciDgSize;
+
+	ASSERT(dg);
+	vmciDgSize = VMCI_DG_SIZE(dg);
+	ASSERT(vmciDgSize <= VMCI_MAX_DG_SIZE);
+
+	/* Get the target VM's VMCI context. */
+	context = vmci_ctx_get(cid);
+	if (context == NULL) {
+		pr_devel("Invalid context (ID=0x%x).", cid);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	/* Allocate guest call entry and add it to the target VM's queue. */
+	dqEntry = kmalloc(sizeof *dqEntry, GFP_KERNEL);
+	if (dqEntry == NULL) {
+		pr_warn("Failed to allocate memory for datagram.");
+		vmci_ctx_release(context);
+		return VMCI_ERROR_NO_MEM;
+	}
+	dqEntry->dg = dg;
+	dqEntry->dgSize = vmciDgSize;
+	dgSrc = dg->src;
+	INIT_LIST_HEAD(&dqEntry->listItem);
+
+	spin_lock(&context->lock);
+
+	/*
+	 * We put a higher limit on datagrams from the hypervisor.  If
+	 * the pending datagram is not from hypervisor, then we check
+	 * if enqueueing it would exceed the
+	 * VMCI_MAX_DATAGRAM_QUEUE_SIZE limit on the destination.  If
+	 * the pending datagram is from hypervisor, we allow it to be
+	 * queued at the destination side provided we don't reach the
+	 * VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE limit.
+	 */
+	if (context->datagramQueueSize + vmciDgSize >=
+	    VMCI_MAX_DATAGRAM_QUEUE_SIZE &&
+	    (!VMCI_HANDLE_EQUAL(dgSrc,
+				vmci_make_handle
+				(VMCI_HYPERVISOR_CONTEXT_ID,
+				 VMCI_CONTEXT_RESOURCE_ID))
+	     || context->datagramQueueSize + vmciDgSize >=
+	     VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE)) {
+		spin_unlock(&context->lock);
+		vmci_ctx_release(context);
+		kfree(dqEntry);
+		pr_devel("Context (ID=0x%x) receive queue is full.",
+			 cid);
+		return VMCI_ERROR_NO_RESOURCES;
+	}
+
+	list_add(&dqEntry->listItem, &context->datagramQueue);
+	context->pendingDatagrams++;
+	context->datagramQueueSize += vmciDgSize;
+	ctx_signal_notify(context);
+	wake_up(&context->hostContext.waitQueue);
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return vmciDgSize;
+}
+
+/*
+ * Verifies whether a context with the specified context ID exists.
+ */
+bool vmci_ctx_exists(uint32_t cid)
+{
+	bool rv;
+
+	spin_lock(&ctx_list.lock);
+	rv = ctx_exists_locked(cid);
+	spin_unlock(&ctx_list.lock);
+	return rv;
+}
+
+/*
+ * Retrieves VMCI context corresponding to the given cid.
+ */
+struct vmci_ctx *vmci_ctx_get(uint32_t cid)
+{
+	struct vmci_ctx *context = NULL;
+
+	if (cid == VMCI_INVALID_ID)
+		return NULL;
+
+	spin_lock(&ctx_list.lock);
+	list_for_each_entry(context, &ctx_list.head, listItem) {
+		if (context->cid == cid) {
+			/*
+			 * At this point, we are sure that the
+			 * reference count is larger already than
+			 * zero. When starting the destruction of a
+			 * context, we always remove it from the
+			 * context list before decreasing the
+			 * reference count. As we found the context
+			 * here, it hasn't been destroyed yet. This
+			 * means that we are not about to increase the
+			 * reference count of something that is in the
+			 * process of being destroyed.
+			 */
+
+			atomic_inc(&context->refCount);
+			break;
+		}
+	}
+	spin_unlock(&ctx_list.lock);
+
+	return (context && context->cid == cid) ? context : NULL;
+}
+
+/*
+ * Deallocates all parts of a context data structure. This
+ * function doesn't lock the context, because it assumes that
+ * the caller is holding the last reference to the context.
+ */
+static void ctx_free_ctx(struct vmci_ctx *context)
+{
+	struct list_head *curr;
+	struct list_head *next;
+	struct vmci_dg_queue_entry *dqEntry;
+	struct vmci_handle tempHandle;
+
+	/*
+	 * Fire event to all contexts interested in knowing this
+	 * context is dying.
+	 */
+	ctx_fire_notification(context->cid, context->privFlags);
+
+	/*
+	 * Cleanup all queue pair resources attached to context.  If
+	 * the VM dies without cleaning up, this code will make sure
+	 * that no resources are leaked.
+	 */
+	tempHandle = vmci_handle_arr_get_entry(context->queuePairArray, 0);
+	while (!VMCI_HANDLE_EQUAL(tempHandle, VMCI_INVALID_HANDLE)) {
+		if (vmci_qp_broker_detach(tempHandle, context) < VMCI_SUCCESS) {
+			/*
+			 * When vmci_qp_broker_detach() succeeds it
+			 * removes the handle from the array.  If
+			 * detach fails, we must remove the handle
+			 * ourselves.
+			 */
+			vmci_handle_arr_remove_entry(context->queuePairArray,
+						     tempHandle);
+		}
+		tempHandle =
+			vmci_handle_arr_get_entry(context->queuePairArray, 0);
+	}
+
+	/*
+	 * It is fine to destroy this without locking the callQueue, as
+	 * this is the only thread having a reference to the context.
+	 */
+	list_for_each_safe(curr, next, &context->datagramQueue) {
+		dqEntry =
+			list_entry(curr, struct vmci_dg_queue_entry, listItem);
+		list_del(curr);
+		ASSERT(dqEntry && dqEntry->dg);
+		ASSERT(dqEntry->dgSize == VMCI_DG_SIZE(dqEntry->dg));
+		kfree(dqEntry->dg);
+		kfree(dqEntry);
+	}
+
+	vmci_handle_arr_destroy(context->notifierArray);
+	vmci_handle_arr_destroy(context->queuePairArray);
+	vmci_handle_arr_destroy(context->doorbellArray);
+	vmci_handle_arr_destroy(context->pendingDoorbellArray);
+	vmci_ctx_unset_notify(context);
+	kfree(context);
+}
+
+/*
+ * Releases the VMCI context. If this is the last reference to
+ * the context it will be deallocated. A context is created with
+ * a reference count of one, and on destroy, it is removed from
+ * the context list before its reference count is
+ * decremented. Thus, if we reach zero, we are sure that nobody
+ * else is about to increment it (they need the entry in the
+ * context list for that). This function must not be called with a
+ * lock held.
+ */
+void vmci_ctx_release(struct vmci_ctx *context)
+{
+	ASSERT(context);
+	if (atomic_dec_and_test(&context->refCount))
+		ctx_free_ctx(context);
+}
+
+/*
+ * Dequeues the next datagram and returns it to the caller.
+ * The caller passes in a pointer to the maximum size datagram
+ * it can handle, and the datagram is only dequeued if its
+ * size is less than maxSize. If it is larger, maxSize is set to
+ * the size of the datagram to give the caller a chance to
+ * set up a larger buffer for the guestcall.
+ */
+int vmci_ctx_dequeue_dg(struct vmci_ctx *context,
+			size_t *maxSize,
+			struct vmci_dg **dg)
+{
+	struct vmci_dg_queue_entry *dqEntry;
+	struct list_head *listItem;
+	int rv;
+
+	ASSERT(context && dg);
+
+	/* Dequeue the next datagram entry. */
+	spin_lock(&context->lock);
+	if (context->pendingDatagrams == 0) {
+		ctx_clear_notify_call(context);
+		spin_unlock(&context->lock);
+		pr_devel("No datagrams pending.");
+		return VMCI_ERROR_NO_MORE_DATAGRAMS;
+	}
+
+	listItem = context->datagramQueue.next;
+	ASSERT(!list_empty(&context->datagramQueue));
+
+	dqEntry = list_entry(listItem, struct vmci_dg_queue_entry, listItem);
+	ASSERT(dqEntry->dg);
+
+	/* Check size of caller's buffer. */
+	if (*maxSize < dqEntry->dgSize) {
+		*maxSize = dqEntry->dgSize;
+		spin_unlock(&context->lock);
+		pr_devel("Caller's buffer should be at least " \
+			 "(size=%u bytes).", (uint32_t) *maxSize);
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	list_del(listItem);
+	context->pendingDatagrams--;
+	context->datagramQueueSize -= dqEntry->dgSize;
+	if (context->pendingDatagrams == 0) {
+		ctx_clear_notify_call(context);
+		rv = VMCI_SUCCESS;
+	} else {
+		/*
+		 * Return the size of the next datagram.
+		 */
+		struct vmci_dg_queue_entry *nextEntry;
+
+		listItem = context->datagramQueue.next;
+		ASSERT(!list_empty(&context->datagramQueue));
+		nextEntry = list_entry(listItem, struct vmci_dg_queue_entry,
+				       listItem);
+		ASSERT(nextEntry && nextEntry->dg);
+
+		/*
+		 * The following size_t -> int truncation is fine as
+		 * the maximum size of a (routable) datagram is 68KB.
+		 */
+		rv = (int)nextEntry->dgSize;
+	}
+	spin_unlock(&context->lock);
+
+	/* Caller must free datagram. */
+	ASSERT(dqEntry->dgSize == VMCI_DG_SIZE(dqEntry->dg));
+	*dg = dqEntry->dg;
+	dqEntry->dg = NULL;
+	kfree(dqEntry);
+
+	return rv;
+}
+
+/*
+ * Reverts actions set up by vmci_setup_notify().  Unmaps and unlocks the
+ * page mapped/locked by vmci_setup_notify().
+ */
+void vmci_ctx_unset_notify(struct vmci_ctx *context)
+{
+	struct page *notifyPage = context->notifyPage;
+
+	if (!notifyPage)
+		return;
+
+	context->notify = NULL;
+	context->notifyPage = NULL;
+	kunmap(notifyPage);
+	put_page(notifyPage);
+}
+
+uint32_t vmci_ctx_get_id(struct vmci_ctx *context)
+{
+	if (!context)
+		return VMCI_INVALID_ID;
+
+	ASSERT(context->cid != VMCI_INVALID_ID);
+	return context->cid;
+}
+
+/*
+ * Add remoteCID to the list of contexts the current context wants
+ * notifications from/about.
+ */
+int vmci_ctx_add_notification(uint32_t contextID,
+			      uint32_t remoteCID)
+{
+	int result = VMCI_ERROR_ALREADY_EXISTS;
+	struct vmci_handle notifierHandle;
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	if (VMCI_CONTEXT_IS_VM(contextID) && VMCI_CONTEXT_IS_VM(remoteCID)) {
+		pr_devel("Context removed notifications for other VMs not " \
+			 "supported (src=0x%x, remote=0x%x).",
+			 contextID, remoteCID);
+		result = VMCI_ERROR_DST_UNREACHABLE;
+		goto out;
+	}
+
+	if (context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED) {
+		result = VMCI_ERROR_NO_ACCESS;
+		goto out;
+	}
+
+	notifierHandle = vmci_make_handle(remoteCID, VMCI_EVENT_HANDLER);
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&context->lock);
+	if (!vmci_handle_arr_has_entry(context->notifierArray,
+				       notifierHandle)) {
+		vmci_handle_arr_append_entry(&context->notifierArray,
+					     notifierHandle);
+		result = VMCI_SUCCESS;
+	}
+	spin_unlock(&context->lock);
+	spin_unlock(&ctx_list.firingLock);
+
+ out:
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Remove remoteCID from current context's list of contexts it is
+ * interested in getting notifications from/about.
+ */
+int vmci_ctx_remove_notification(uint32_t contextID,
+				 uint32_t remoteCID)
+{
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+	struct vmci_handle tmpHandle;
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&ctx_list.firingLock);
+	spin_lock(&context->lock);
+	tmpHandle = vmci_make_handle(remoteCID, VMCI_EVENT_HANDLER);
+	tmpHandle = vmci_handle_arr_remove_entry(context->notifierArray,
+						 tmpHandle);
+	spin_unlock(&context->lock);
+	spin_unlock(&ctx_list.firingLock);
+	vmci_ctx_release(context);
+
+	if (VMCI_HANDLE_EQUAL(tmpHandle, VMCI_INVALID_HANDLE))
+		return VMCI_ERROR_NOT_FOUND;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Get current context's checkpoint state of given type.
+ */
+int vmci_ctx_get_chkpt_state(uint32_t contextID,
+			     uint32_t cptType,
+			     uint32_t *bufSize,
+			     char **cptBufPtr)
+{
+	int i, result;
+	uint32_t arraySize, cptDataSize;
+	struct vmci_handle_arr *array;
+	struct vmci_ctx *context;
+	char *cptBuf;
+	bool getContextID;
+
+	ASSERT(bufSize && cptBufPtr);
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	if (cptType == VMCI_NOTIFICATION_CPT_STATE) {
+		ASSERT(context->notifierArray);
+		array = context->notifierArray;
+		getContextID = true;
+	} else if (cptType == VMCI_WELLKNOWN_CPT_STATE) {
+		/*
+		 * For compatibility with VMX'en with VM to VM communication, we
+		 * always return zero wellknown handles.
+		 */
+
+		*bufSize = 0;
+		*cptBufPtr = NULL;
+		result = VMCI_SUCCESS;
+		goto release;
+	} else if (cptType == VMCI_DOORBELL_CPT_STATE) {
+		ASSERT(context->doorbellArray);
+		array = context->doorbellArray;
+		getContextID = false;
+	} else {
+		pr_devel("Invalid cpt state (type=%d).", cptType);
+		result = VMCI_ERROR_INVALID_ARGS;
+		goto release;
+	}
+
+	arraySize = vmci_handle_arr_get_size(array);
+	if (arraySize > 0) {
+		if (cptType == VMCI_DOORBELL_CPT_STATE) {
+			cptDataSize =
+				arraySize * sizeof(struct dbell_cpt_state);
+		} else {
+			cptDataSize = arraySize * sizeof(uint32_t);
+		}
+
+		if (*bufSize < cptDataSize) {
+			*bufSize = cptDataSize;
+			result = VMCI_ERROR_MORE_DATA;
+			goto release;
+		}
+
+		cptBuf = kmalloc(cptDataSize, GFP_ATOMIC);
+
+		if (cptBuf == NULL) {
+			result = VMCI_ERROR_NO_MEM;
+			goto release;
+		}
+
+		for (i = 0; i < arraySize; i++) {
+			struct vmci_handle tmpHandle =
+				vmci_handle_arr_get_entry(array, i);
+			if (cptType == VMCI_DOORBELL_CPT_STATE) {
+				((struct dbell_cpt_state *)cptBuf)[i].handle =
+					tmpHandle;
+			} else {
+				((uint32_t *) cptBuf)[i] =
+					getContextID ? tmpHandle.context :
+					tmpHandle.resource;
+			}
+		}
+		*bufSize = cptDataSize;
+		*cptBufPtr = cptBuf;
+	} else {
+		*bufSize = 0;
+		*cptBufPtr = NULL;
+	}
+	result = VMCI_SUCCESS;
+
+release:
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Set current context's checkpoint state of given type.
+ */
+int vmci_ctx_set_chkpt_state(uint32_t contextID,
+			     uint32_t cptType,
+			     uint32_t bufSize,
+			     char *cptBuf)
+{
+	uint32_t i;
+	uint32_t currentID;
+	int result = VMCI_SUCCESS;
+	uint32_t numIDs = bufSize / sizeof(uint32_t);
+	ASSERT(cptBuf);
+
+	if (cptType == VMCI_WELLKNOWN_CPT_STATE && numIDs > 0) {
+		/*
+		 * We would end up here if VMX with VM to VM communication
+		 * attempts to restore a checkpoint with wellknown handles.
+		 */
+		pr_warn("Attempt to restore checkpoint with obsolete " \
+			"wellknown handles.");
+		return VMCI_ERROR_OBSOLETE;
+	}
+
+	if (cptType != VMCI_NOTIFICATION_CPT_STATE) {
+		pr_devel("Invalid cpt state (type=%d).", cptType);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	for (i = 0; i < numIDs && result == VMCI_SUCCESS; i++) {
+		currentID = ((uint32_t *) cptBuf)[i];
+		result = vmci_ctx_add_notification(contextID, currentID);
+		if (result != VMCI_SUCCESS)
+			break;
+	}
+	if (result != VMCI_SUCCESS)
+		pr_devel("Failed to set cpt state (type=%d) " \
+			 "(error=%d).", cptType, result);
+
+	return result;
+}
+
+/*
+ * Retrieves the specified context's pending notifications in the
+ * form of a handle array. The handle arrays returned are the
+ * actual data, not a copy, and should not be modified by the
+ * caller. They must be released using
+ * vmci_ctx_rcv_notifications_release.
+ */
+int vmci_ctx_rcv_notifications_get(uint32_t contextID,
+				   struct vmci_handle_arr **dbHandleArray,
+				   struct vmci_handle_arr **qpHandleArray)
+{
+	struct vmci_ctx *context;
+	int result = VMCI_SUCCESS;
+
+	ASSERT(dbHandleArray && qpHandleArray);
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+
+	*dbHandleArray = context->pendingDoorbellArray;
+	context->pendingDoorbellArray = vmci_handle_arr_create(0);
+	if (!context->pendingDoorbellArray) {
+		context->pendingDoorbellArray = *dbHandleArray;
+		*dbHandleArray = NULL;
+		result = VMCI_ERROR_NO_MEM;
+	}
+	*qpHandleArray = NULL;
+
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Releases handle arrays with pending notifications previously
+ * retrieved using vmci_ctx_rcv_notifications_get. If the
+ * notifications were not successfully handed over to the guest,
+ * success must be false.
+ */
+void vmci_ctx_rcv_notifications_release(uint32_t contextID,
+					struct vmci_handle_arr *dbHandleArray,
+					struct vmci_handle_arr *qpHandleArray,
+					bool success)
+{
+	struct vmci_ctx *context = vmci_ctx_get(contextID);
+
+	if (!context) {
+		/*
+		 * The OS driver part is holding on to the context for the
+		 * duration of the receive notification ioctl, so it should
+		 * still be here.
+		 */
+		ASSERT(false);
+	}
+
+	spin_lock(&context->lock);
+	if (!success) {
+		struct vmci_handle handle;
+
+		/*
+		 * New notifications may have been added while we were not
+		 * holding the context lock, so we transfer any new pending
+		 * doorbell notifications to the old array, and reinstate the
+		 * old array.
+		 */
+
+		handle = vmci_handle_arr_remove_tail(
+			context->pendingDoorbellArray);
+		while (!VMCI_HANDLE_INVALID(handle)) {
+			ASSERT(vmci_handle_arr_has_entry
+			       (context->doorbellArray, handle));
+			if (!vmci_handle_arr_has_entry
+			    (dbHandleArray, handle)) {
+				vmci_handle_arr_append_entry
+					(&dbHandleArray, handle);
+			}
+			handle = vmci_handle_arr_remove_tail(
+				context->pendingDoorbellArray);
+		}
+		vmci_handle_arr_destroy(context->pendingDoorbellArray);
+		context->pendingDoorbellArray = dbHandleArray;
+		dbHandleArray = NULL;
+	} else {
+		ctx_clear_notify_call(context);
+	}
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	if (dbHandleArray)
+		vmci_handle_arr_destroy(dbHandleArray);
+
+	if (qpHandleArray)
+		vmci_handle_arr_destroy(qpHandleArray);
+}
+
+/*
+ * Registers that a new doorbell handle has been allocated by the
+ * context. Only registered doorbell handles can be notified.
+ */
+int vmci_ctx_dbell_create(uint32_t contextID,
+			  struct vmci_handle handle)
+{
+	struct vmci_ctx *context;
+	int result;
+
+	if (contextID == VMCI_INVALID_ID || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	if (!vmci_handle_arr_has_entry(context->doorbellArray, handle)) {
+		vmci_handle_arr_append_entry(&context->doorbellArray, handle);
+		result = VMCI_SUCCESS;
+	} else {
+		result = VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	spin_unlock(&context->lock);
+	vmci_ctx_release(context);
+
+	return result;
+}
+
+/*
+ * Unregisters a doorbell handle that was previously registered
+ * with vmci_ctx_dbell_create.
+ */
+int vmci_ctx_dbell_destroy(uint32_t contextID,
+			   struct vmci_handle handle)
+{
+	struct vmci_ctx *context;
+	struct vmci_handle removedHandle;
+
+	if (contextID == VMCI_INVALID_ID || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	removedHandle =
+		vmci_handle_arr_remove_entry(context->doorbellArray, handle);
+	vmci_handle_arr_remove_entry(context->pendingDoorbellArray, handle);
+	spin_unlock(&context->lock);
+
+	vmci_ctx_release(context);
+
+	return VMCI_HANDLE_INVALID(removedHandle) ?
+		VMCI_ERROR_NOT_FOUND : VMCI_SUCCESS;
+}
+
+/*
+ * Unregisters all doorbell handles that were previously
+ * registered with vmci_ctx_dbell_create.
+ */
+int vmci_ctx_dbell_destroy_all(uint32_t contextID)
+{
+	struct vmci_ctx *context;
+	struct vmci_handle handle;
+
+	if (contextID == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	context = vmci_ctx_get(contextID);
+	if (context == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	spin_lock(&context->lock);
+	do {
+		struct vmci_handle_arr *arr = context->doorbellArray;
+		handle = vmci_handle_arr_remove_tail(arr);
+	} while (!VMCI_HANDLE_INVALID(handle));
+	do {
+		struct vmci_handle_arr *arr = context->pendingDoorbellArray;
+		handle = vmci_handle_arr_remove_tail(arr);
+	} while (!VMCI_HANDLE_INVALID(handle));
+	spin_unlock(&context->lock);
+
+	vmci_ctx_release(context);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Registers a notification of a doorbell handle initiated by the
+ * specified source context. The notification of doorbells is
+ * subject to the same isolation rules as datagram delivery. To
+ * allow host side senders of notifications a finer granularity
+ * of sender rights than those assigned to the sending context
+ * itself, the host context is required to specify a different
+ * set of privilege flags that will override the privileges of
+ * the source context.
+ */
+int vmci_ctx_notify_dbell(uint32_t srcCID,
+			  struct vmci_handle handle,
+			  uint32_t srcPrivFlags)
+{
+	struct vmci_ctx *dstContext;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/* Get the target VM's VMCI context. */
+	dstContext = vmci_ctx_get(handle.context);
+	if (dstContext == NULL) {
+		pr_devel("Invalid context (ID=0x%x).", handle.context);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	if (srcCID != handle.context) {
+		uint32_t dstPrivFlags;
+
+		if (VMCI_CONTEXT_IS_VM(srcCID)
+		    && VMCI_CONTEXT_IS_VM(handle.context)) {
+			pr_devel("Doorbell notification from VM to VM not " \
+				 "supported (src=0x%x, dst=0x%x).", srcCID,
+				 handle.context);
+			result = VMCI_ERROR_DST_UNREACHABLE;
+			goto out;
+		}
+
+		result = vmci_dbell_get_priv_flags(handle, &dstPrivFlags);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("Failed to get privilege flags for " \
+				"destination (handle=0x%x:0x%x).",
+				handle.context, handle.resource);
+			goto out;
+		}
+
+		if (srcCID != VMCI_HOST_CONTEXT_ID ||
+		    srcPrivFlags == VMCI_NO_PRIVILEGE_FLAGS) {
+			srcPrivFlags = VMCIContext_GetPrivFlags(srcCID);
+		}
+
+		if (vmci_deny_interaction(srcPrivFlags, dstPrivFlags)) {
+			result = VMCI_ERROR_NO_ACCESS;
+			goto out;
+		}
+	}
+
+	if (handle.context == VMCI_HOST_CONTEXT_ID) {
+		result = vmci_dbell_host_context_notify(srcCID, handle);
+	} else {
+		spin_lock(&dstContext->lock);
+
+		if (!vmci_handle_arr_has_entry
+		    (dstContext->doorbellArray, handle)) {
+			result = VMCI_ERROR_NOT_FOUND;
+		} else {
+			if (!vmci_handle_arr_has_entry
+			    (dstContext->pendingDoorbellArray, handle)) {
+				vmci_handle_arr_append_entry
+					(&dstContext->pendingDoorbellArray,
+					 handle);
+
+				ctx_signal_notify(dstContext);
+				wake_up(&dstContext->hostContext.waitQueue);
+
+			}
+			result = VMCI_SUCCESS;
+		}
+		spin_unlock(&dstContext->lock);
+	}
+
+out:
+	vmci_ctx_release(dstContext);
+
+	return result;
+}
+
+static int ctx_compare_user(uid_t *user1, uid_t *user2)
+{
+	if (!user1 || !user2)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	return (*user1 == *user2) ? VMCI_SUCCESS : VMCI_ERROR_GENERIC;
+}
+
+bool vmci_ctx_supports_host_qp(struct vmci_ctx *context)
+{
+	return context && context->userVersion >= VMCI_VERSION_HOSTQP;
+}
+
+/*
+ * Registers that a new queue pair handle has been allocated by
+ * the context.
+ */
+int vmci_ctx_qp_create(struct vmci_ctx *context,
+		       struct vmci_handle handle)
+{
+	int result;
+
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (!vmci_handle_arr_has_entry(context->queuePairArray, handle)) {
+		vmci_handle_arr_append_entry(&context->queuePairArray, handle);
+		result = VMCI_SUCCESS;
+	} else {
+		result = VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	return result;
+}
+
+/*
+ * Unregisters a queue pair handle that was previously registered
+ * with vmci_ctx_qp_create.
+ */
+int vmci_ctx_qp_destroy(struct vmci_ctx *context,
+			struct vmci_handle handle)
+{
+	struct vmci_handle hndl;
+
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	hndl = vmci_handle_arr_remove_entry(context->queuePairArray, handle);
+
+	return VMCI_HANDLE_INVALID(hndl) ?
+		VMCI_ERROR_NOT_FOUND : VMCI_SUCCESS;
+}
+
+/*
+ * Determines whether a given queue pair handle is registered
+ * with the given context.
+ */
+bool vmci_ctx_qp_exists(struct vmci_ctx *context,
+			struct vmci_handle handle)
+{
+	if (context == NULL || VMCI_HANDLE_INVALID(handle))
+		return false;
+
+	return vmci_handle_arr_has_entry(context->queuePairArray, handle);
+}
+
+/**
+ * VMCIContext_GetPrivFlags() - Retrieve privilege flags.
+ * @contextID:	The context ID of the VMCI context.
+ *
+ * Retrieves privilege flags of the given VMCI context ID.
+ */
+uint32_t VMCIContext_GetPrivFlags(uint32_t contextID)
+{
+	if (vmci_host_code_active()) {
+		uint32_t flags;
+		struct vmci_ctx *context;
+
+		context = vmci_ctx_get(contextID);
+		if (!context)
+			return VMCI_LEAST_PRIVILEGE_FLAGS;
+
+		flags = context->privFlags;
+		vmci_ctx_release(context);
+		return flags;
+	}
+	return VMCI_NO_PRIVILEGE_FLAGS;
+}
+EXPORT_SYMBOL(VMCIContext_GetPrivFlags);
+
+/**
+ * VMCI_ContextID2HostVmID() - Map CID to HostID
+ * @contextID:	Context ID of VMCI context.
+ * @hostVmID:	Host VM ID data
+ * @hostVmIDLen:	Length of Host VM ID Data.
+ *
+ * Maps a context ID to the host specific (process/world) ID
+ * of the VM/VMX.  This function is not used on Linux systems
+ * and should be ignored.
+ */
+int VMCI_ContextID2HostVmID(uint32_t contextID,
+			    void *hostVmID,
+			    size_t hostVmIDLen)
+{
+	return VMCI_ERROR_UNAVAILABLE;
+}
+EXPORT_SYMBOL(VMCI_ContextID2HostVmID);
+
+/**
+ * VMCI_IsContextOwner() - Determines if the user is the context owner
+ * @contextID:	The context ID of the VMCI context.
+ * @hostUser:	The user as a void pointer.
+ *
+ * Determines whether a given host OS specific representation of
+ * user is the owner of the VM/VMX.
+ */
+int VMCI_IsContextOwner(uint32_t contextID,
+			void *hostUser)
+{
+	if (vmci_host_code_active()) {
+		struct vmci_ctx *context;
+		uid_t *user = hostUser;
+		int retval;
+
+		if (!hostUser)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		context = vmci_ctx_get(contextID);
+		if (!context)
+			return VMCI_ERROR_NOT_FOUND;
+
+		if (context->validUser)
+			retval = ctx_compare_user(user, &context->user);
+		else
+			retval = VMCI_ERROR_UNAVAILABLE;
+
+		vmci_ctx_release(context);
+		return retval;
+	}
+	return VMCI_ERROR_UNAVAILABLE;
+}
+EXPORT_SYMBOL(VMCI_IsContextOwner);
diff --git a/drivers/misc/vmw_vmci/vmci_context.h b/drivers/misc/vmw_vmci/vmci_context.h
new file mode 100644
index 0000000..0b80a5d
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_context.h
@@ -0,0 +1,161 @@
+/*
+ * VMware VMCI driver (vmci_context.h)
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_CONTEXT_H_
+#define _VMCI_CONTEXT_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_datagram.h"
+#include "vmci_common_int.h"
+#include "vmci_handle_array.h"
+
+/* Used to determine what checkpoint state to get and set. */
+enum {
+	VMCI_NOTIFICATION_CPT_STATE = 1,
+	VMCI_WELLKNOWN_CPT_STATE    = 2,
+	VMCI_DG_OUT_STATE           = 3,
+	VMCI_DG_IN_STATE            = 4,
+	VMCI_DG_IN_SIZE_STATE       = 5,
+	VMCI_DOORBELL_CPT_STATE     = 6,
+};
+
+/* Host specific struct used for signalling */
+struct vmci_host {
+	wait_queue_head_t waitQueue;
+};
+
+struct vmci_ctx {
+	struct list_head listItem;	/* For global VMCI list. */
+	uint32_t cid;
+	atomic_t refCount;
+	struct list_head datagramQueue;	/* Head of per VM queue. */
+	uint32_t pendingDatagrams;
+	size_t datagramQueueSize;	/* Size of datagram queue in bytes. */
+
+	/*
+	 * Version of the code that created
+	 * this context; e.g., VMX.
+	 */
+	int userVersion;
+	spinlock_t lock;  /* Locks callQueue and handleArrays. */
+
+	/*
+	 * Queue pairs this context is attached to.  The array of queue
+	 * pair handles is accessed from the QP API code, where it is
+	 * protected by the QP lock.  It is also accessed from the
+	 * context cleanup path, which does not require a lock.
+	 * VMCILock is not used to protect the QP array field.
+	 */
+	struct vmci_handle_arr *queuePairArray;
+
+	/* Doorbells created by context. */
+	struct vmci_handle_arr *doorbellArray;
+
+	/* Doorbells pending for context. */
+	struct vmci_handle_arr *pendingDoorbellArray;
+
+	/* Contexts the current context is subscribing to. */
+	struct vmci_handle_arr *notifierArray;
+	struct vmci_host hostContext;
+	uint32_t privFlags;
+	uid_t user;
+	bool validUser;
+	bool *notify;		/* Notify flag pointer - hosted only. */
+	struct page *notifyPage;	/* Page backing the notify UVA. */
+};
+
+/* VMCINotifyAddRemoveInfo: Used to add/remove remote context notifications. */
+struct vmci_ctx_info {
+	uint32_t remoteCID;
+	int result;
+};
+
+/* VMCICptBufInfo: Used to set/get current context's checkpoint state. */
+struct vmci_ctx_chkpt_buf_info {
+	uint64_t cptBuf;
+	uint32_t cptType;
+	uint32_t bufSize;
+	int32_t result;
+	uint32_t _pad;
+};
+
+/*
+ * VMCINotificationReceiveInfo: Used to receive pending notifications
+ * for doorbells and queue pairs.
+ */
+struct vmci_ctx_notify_recv_info {
+	uint64_t dbHandleBufUVA;
+	uint64_t dbHandleBufSize;
+	uint64_t qpHandleBufUVA;
+	uint64_t qpHandleBufSize;
+	int32_t result;
+	uint32_t _pad;
+};
+
+int vmci_ctx_init(void);
+int vmci_ctx_init_ctx(uint32_t cid, uint32_t flags,
+		      uintptr_t eventHnd, int version,
+		      uid_t *user, struct vmci_ctx **context);
+
+bool vmci_ctx_supports_host_qp(struct vmci_ctx *context);
+void vmci_ctx_release_ctx(struct vmci_ctx *context);
+int vmci_ctx_enqueue_dg(uint32_t cid, struct vmci_dg *dg);
+int vmci_ctx_dequeue_dg(struct vmci_ctx *context,
+			size_t *maxSize, struct vmci_dg **dg);
+int vmci_ctx_pending_dgs(uint32_t cid, uint32_t *pending);
+struct vmci_ctx *vmci_ctx_get(uint32_t cid);
+void vmci_ctx_release(struct vmci_ctx *context);
+bool vmci_ctx_exists(uint32_t cid);
+
+uint32_t vmci_ctx_get_id(struct vmci_ctx *context);
+int vmci_ctx_add_notification(uint32_t contextID, uint32_t remoteCID);
+int vmci_ctx_remove_notification(uint32_t contextID, uint32_t remoteCID);
+int vmci_ctx_get_chkpt_state(uint32_t contextID, uint32_t cptType,
+			     uint32_t *numCIDs, char **cptBufPtr);
+int vmci_ctx_set_chkpt_state(uint32_t contextID, uint32_t cptType,
+			     uint32_t numCIDs, char *cptBuf);
+
+int vmci_ctx_qp_create(struct vmci_ctx *context,
+		       struct vmci_handle handle);
+int vmci_ctx_qp_destroy(struct vmci_ctx *context,
+			struct vmci_handle handle);
+bool vmci_ctx_qp_exists(struct vmci_ctx *context,
+			struct vmci_handle handle);
+
+void vmci_ctx_check_signal_notify(struct vmci_ctx *context);
+void vmci_ctx_unset_notify(struct vmci_ctx *context);
+
+int vmci_ctx_dbell_create(uint32_t contextID, struct vmci_handle handle);
+int vmci_ctx_dbell_destroy(uint32_t contextID, struct vmci_handle handle);
+int vmci_ctx_dbell_destroy_all(uint32_t contextID);
+int vmci_ctx_notify_dbell(uint32_t cid, struct vmci_handle handle,
+			  uint32_t srcPrivFlags);
+
+int vmci_ctx_rcv_notifications_get(uint32_t contextID,
+				   struct vmci_handle_arr **dbHandleArray,
+				   struct vmci_handle_arr **qpHandleArray);
+void vmci_ctx_rcv_notifications_release(uint32_t contextID,
+					struct vmci_handle_arr *dbHandleArray,
+					struct vmci_handle_arr *qpHandleArray,
+					bool success);
+#endif /* _VMCI_CONTEXT_H_ */
-- 
1.7.0.4


* [vmw_vmci 02/11] Apply VMCI datagram code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Implements datagrams to allow data to be sent between host
and guest.
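
For review purposes, a minimal, hypothetical guest-side usage sketch
follows. example_recv_cb(), example_dg_ping() and their parameters are
illustrative assumptions, and the receive callback signature is
inferred from how recvCB is invoked in this patch; only the
VMCIDatagram_*() calls, vmci_make_handle() and the VMCI_* constants
are taken from the code below:

/* Invoked for each datagram arriving on the endpoint created below. */
static int example_recv_cb(void *client_data, struct vmci_dg *dg)
{
	pr_devel("Received datagram with %llu payload bytes.",
		 (unsigned long long) dg->payloadSize);
	return 0;
}

static int example_dg_ping(uint32_t peer_cid, uint32_t peer_rid)
{
	struct vmci_handle handle;
	struct vmci_dg dg;
	int result;

	/* Create a local endpoint; VMCI picks a free resource ID. */
	result = VMCIDatagram_CreateHnd(VMCI_INVALID_ID, 0,
					example_recv_cb, NULL, &handle);
	if (result != VMCI_SUCCESS)
		return result;

	/* Build a zero-payload datagram from our endpoint to the peer. */
	dg.dst = vmci_make_handle(peer_cid, peer_rid);
	dg.src = handle;
	dg.payloadSize = 0;

	/* Returns the number of bytes sent, or a negative error code. */
	result = VMCIDatagram_Send(&dg);

	VMCIDatagram_DestroyHnd(handle);
	return result < VMCI_SUCCESS ? result : VMCI_SUCCESS;
}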

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_datagram.c |  586 +++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_datagram.h |   56 ++++
 2 files changed, 642 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_datagram.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_datagram.h

diff --git a/drivers/misc/vmw_vmci/vmci_datagram.c b/drivers/misc/vmw_vmci/vmci_datagram.c
new file mode 100644
index 0000000..a804f99
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_datagram.c
@@ -0,0 +1,586 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/bug.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_hash_table.h"
+#include "vmci_resource.h"
+#include "vmci_route.h"
+
+/*
+ * struct datagram_entry describes the datagram entity. It is used for datagram
+ * entities created only on the host.
+ */
+struct datagram_entry {
+	struct vmci_resource resource;
+	uint32_t flags;
+	bool runDelayed;
+	VMCIDatagramRecvCB recvCB;
+	void *clientData;
+	wait_queue_head_t destroyEvent;
+	uint32_t privFlags;
+};
+
+struct delayed_datagram_info {
+	bool inDGHostQueue;
+	struct datagram_entry *entry;
+	struct vmci_dg msg;
+};
+
+static atomic_t delayedDGHostQueueSize;
+
+static void dg_free_cb(void *clientData)
+{
+	struct datagram_entry *entry = (struct datagram_entry *)clientData;
+	ASSERT(entry);
+
+	/*
+	 * Entry is freed in VMCIDatagram_DestroyHnd, which waits for
+	 * the signal.
+	 */
+	wake_up(&entry->destroyEvent);
+}
+
+static int dg_release_cb(void *clientData)
+{
+	struct datagram_entry *entry = (struct datagram_entry *)clientData;
+	ASSERT(entry);
+	vmci_resource_release(&entry->resource);
+	return 0;
+}
+
+/*
+ * Create a datagram entry given a handle pointer.
+ */
+static int dg_create_handle(uint32_t resourceID,
+			    uint32_t flags,
+			    uint32_t privFlags,
+			    VMCIDatagramRecvCB recvCB,
+			    void *clientData,
+			    struct vmci_handle *outHandle)
+{
+	int result;
+	uint32_t contextID;
+	struct vmci_handle handle;
+	struct datagram_entry *entry;
+
+	ASSERT(recvCB != NULL);
+	ASSERT(outHandle != NULL);
+	ASSERT(!(privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS));
+
+	if ((flags & VMCI_FLAG_WELLKNOWN_DG_HND) != 0) {
+		return VMCI_ERROR_INVALID_ARGS;
+	} else {
+		if ((flags & VMCI_FLAG_ANYCID_DG_HND) != 0) {
+			contextID = VMCI_INVALID_ID;
+		} else {
+			contextID = VMCI_GetContextID();
+			if (contextID == VMCI_INVALID_ID)
+				return VMCI_ERROR_NO_RESOURCES;
+		}
+
+		if (resourceID == VMCI_INVALID_ID) {
+			resourceID = vmci_resource_get_id(contextID);
+			if (resourceID == VMCI_INVALID_ID)
+				return VMCI_ERROR_NO_HANDLE;
+		}
+
+		handle = vmci_make_handle(contextID, resourceID);
+	}
+
+	entry = kmalloc(sizeof *entry, GFP_KERNEL);
+	if (entry == NULL) {
+		pr_warn("Failed allocating memory for datagram entry.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	entry->runDelayed = (flags & VMCI_FLAG_DG_DELAYED_CB) ? true : false;
+	entry->flags = flags;
+	entry->recvCB = recvCB;
+	entry->clientData = clientData;
+	init_waitqueue_head(&entry->destroyEvent);
+	entry->privFlags = privFlags;
+
+	/* Make datagram resource live. */
+	result = vmci_resource_add(&entry->resource,
+				   VMCI_RESOURCE_TYPE_DATAGRAM,
+				   handle, dg_free_cb, entry);
+	if (result != VMCI_SUCCESS) {
+		pr_warn("Failed to add new resource (handle=0x%x:0x%x).",
+			handle.context, handle.resource);
+		kfree(entry);
+		return result;
+	}
+	*outHandle = handle;
+
+	return VMCI_SUCCESS;
+}
+
+int __init vmci_dg_init(void)
+{
+	atomic_set(&delayedDGHostQueueSize, 0);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Internal utility function with the same purpose as
+ * vmci_dg_get_priv_flags that also takes a contextID.
+ */
+static int vmci_dg_get_priv_flags(uint32_t contextID,
+				  struct vmci_handle handle,
+				  uint32_t *privFlags)
+{
+	ASSERT(privFlags);
+	ASSERT(contextID != VMCI_INVALID_ID);
+
+	if (contextID == VMCI_HOST_CONTEXT_ID) {
+		struct datagram_entry *srcEntry;
+		struct vmci_resource *resource;
+
+		resource =
+			vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DATAGRAM);
+		if (resource == NULL)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		srcEntry = container_of(resource, struct datagram_entry,
+					resource);
+		*privFlags = srcEntry->privFlags;
+		vmci_resource_release(resource);
+	} else if (contextID == VMCI_HYPERVISOR_CONTEXT_ID) {
+		*privFlags = VMCI_MAX_PRIVILEGE_FLAGS;
+	} else {
+		*privFlags = VMCIContext_GetPrivFlags(contextID);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Calls the specified callback in a delayed context.
+ */
+static void dg_delayed_dispatch_cb(void *data)
+{
+	bool inDGHostQueue;
+	struct delayed_datagram_info *dgInfo =
+		(struct delayed_datagram_info *)data;
+
+	ASSERT(data);
+
+	dgInfo->entry->recvCB(dgInfo->entry->clientData, &dgInfo->msg);
+
+	vmci_resource_release(&dgInfo->entry->resource);
+
+	inDGHostQueue = dgInfo->inDGHostQueue;
+	kfree(dgInfo);
+
+	if (inDGHostQueue)
+		atomic_dec(&delayedDGHostQueueSize);
+}
+
+/*
+ * Dispatch datagram as a host, to the host or another VM context. This
+ * function cannot dispatch to hypervisor context handlers. This should
+ * have been handled before we get here by vmci_dg_dispatch().
+ * Returns number of bytes sent on success, error code otherwise.
+ */
+static int dg_dispatch_as_host(uint32_t contextID,
+			       struct vmci_dg *dg)
+{
+	int retval;
+	size_t dgSize;
+	uint32_t srcPrivFlags;
+
+	ASSERT(dg);
+	ASSERT(vmci_host_code_active());
+
+	dgSize = VMCI_DG_SIZE(dg);
+
+	if (contextID == VMCI_HOST_CONTEXT_ID &&
+	    dg->dst.context == VMCI_HYPERVISOR_CONTEXT_ID)
+		return VMCI_ERROR_DST_UNREACHABLE;
+
+	ASSERT(dg->dst.context != VMCI_HYPERVISOR_CONTEXT_ID);
+
+	/* Check that source handle matches sending context. */
+	if (dg->src.context != contextID) {
+		pr_devel("Sender context (ID=0x%x) is not owner of src " \
+			 "datagram entry (handle=0x%x:0x%x).",
+			 contextID, dg->src.context, dg->src.resource);
+		return VMCI_ERROR_NO_ACCESS;
+	}
+
+	/* Get hold of privileges of sending endpoint. */
+	retval = vmci_dg_get_priv_flags(contextID, dg->src, &srcPrivFlags);
+	if (retval != VMCI_SUCCESS) {
+		pr_warn("Couldn't get privileges (handle=0x%x:0x%x).",
+			dg->src.context, dg->src.resource);
+		return retval;
+	}
+
+	/* Determine if we should route to host or guest destination. */
+	if (dg->dst.context == VMCI_HOST_CONTEXT_ID) {
+		/* Route to host datagram entry. */
+		struct datagram_entry *dstEntry;
+		struct vmci_resource *resource;
+
+		if (dg->src.context == VMCI_HYPERVISOR_CONTEXT_ID &&
+		    dg->dst.resource == VMCI_EVENT_HANDLER) {
+			return vmci_event_dispatch(dg);
+		}
+
+		resource = vmci_resource_get(dg->dst,
+					     VMCI_RESOURCE_TYPE_DATAGRAM);
+		if (resource == NULL) {
+			pr_devel("Sending to invalid destination " \
+				 "(handle=0x%x:0x%x).", dg->dst.context,
+				 dg->dst.resource);
+			return VMCI_ERROR_INVALID_RESOURCE;
+		}
+		dstEntry =
+			container_of(resource, struct datagram_entry,
+				     resource);
+		if (vmci_deny_interaction(srcPrivFlags, dstEntry->privFlags)) {
+			vmci_resource_release(resource);
+			return VMCI_ERROR_NO_ACCESS;
+		}
+		ASSERT(dstEntry->recvCB);
+
+		/*
+		 * If a VMCI datagram destined for the host is also sent by the
+		 * host, we always run it delayed. This ensures that no locks
+		 * are held when the datagram callback runs.
+		 */
+		if (dstEntry->runDelayed
+		    || dg->src.context == VMCI_HOST_CONTEXT_ID) {
+			struct delayed_datagram_info *dgInfo;
+
+			if (atomic_add_return(1, &delayedDGHostQueueSize)
+			    == VMCI_MAX_DELAYED_DG_HOST_QUEUE_SIZE) {
+				atomic_dec(&delayedDGHostQueueSize);
+				vmci_resource_release(resource);
+				return VMCI_ERROR_NO_MEM;
+			}
+
+			dgInfo =
+				kmalloc(sizeof *dgInfo +
+					(size_t) dg->payloadSize, GFP_ATOMIC);
+			if (NULL == dgInfo) {
+				atomic_dec(&delayedDGHostQueueSize);
+				vmci_resource_release(resource);
+				return VMCI_ERROR_NO_MEM;
+			}
+
+			dgInfo->inDGHostQueue = true;
+			dgInfo->entry = dstEntry;
+			memcpy(&dgInfo->msg, dg, dgSize);
+			retval =
+			  vmci_drv_schedule_delayed_work(dg_delayed_dispatch_cb,
+							 dgInfo);
+			if (retval < VMCI_SUCCESS) {
+				pr_warn("Failed to schedule delayed " \
+					"work for datagram (result=%d).",
+					retval);
+				kfree(dgInfo);
+				vmci_resource_release(resource);
+				atomic_dec(&delayedDGHostQueueSize);
+				return retval;
+			}
+		} else {
+			retval = dstEntry->recvCB(dstEntry->clientData, dg);
+			vmci_resource_release(resource);
+			if (retval < VMCI_SUCCESS)
+				return retval;
+		}
+	} else {
+		/* Route to destination VM context. */
+		struct vmci_dg *newDG;
+
+		if (contextID != dg->dst.context) {
+			if (vmci_deny_interaction(srcPrivFlags,
+						  VMCIContext_GetPrivFlags
+						  (dg->dst.context))) {
+				return VMCI_ERROR_NO_ACCESS;
+			} else if (VMCI_CONTEXT_IS_VM(contextID)) {
+				/*
+				 * If the sending context is a VM, it
+				 * cannot reach another VM.
+				 */
+
+				pr_devel("Datagram communication between VMs " \
+					 "not supported (src=0x%x, dst=0x%x).",
+					 contextID, dg->dst.context);
+				return VMCI_ERROR_DST_UNREACHABLE;
+			}
+		}
+
+		/* We make a copy to enqueue. */
+		newDG = kmalloc(dgSize, GFP_KERNEL);
+		if (newDG == NULL)
+			return VMCI_ERROR_NO_MEM;
+
+		memcpy(newDG, dg, dgSize);
+		retval = vmci_ctx_enqueue_dg(dg->dst.context, newDG);
+		if (retval < VMCI_SUCCESS) {
+			kfree(newDG);
+			return retval;
+		}
+	}
+
+	/*
+	 * We currently truncate the size to signed 32 bits. This doesn't
+ * matter for this handler as it only supports 4Kb messages.
+	 */
+	return (int)dgSize;
+}
+
+/*
+ * Dispatch datagram as a guest, down through the VMX and potentially to
+ * the host.
+ * Returns number of bytes sent on success, error code otherwise.
+ */
+static int dg_dispatch_as_guest(struct vmci_dg *dg)
+{
+	int retval;
+	struct vmci_resource *resource;
+
+	resource = vmci_resource_get(dg->src, VMCI_RESOURCE_TYPE_DATAGRAM);
+	if (NULL == resource)
+		return VMCI_ERROR_NO_HANDLE;
+
+	retval = vmci_send_dg(dg);
+	vmci_resource_release(resource);
+	return retval;
+}
+
+/*
+ * Dispatch datagram.  This will determine the routing for the datagram
+ * and dispatch it accordingly.
+ * Returns number of bytes sent on success, error code otherwise.
+ */
+int vmci_dg_dispatch(uint32_t contextID,
+		     struct vmci_dg *dg, bool fromGuest)
+{
+	int retval;
+	enum vmci_route route;
+
+	ASSERT(dg);
+	BUILD_BUG_ON(sizeof(struct vmci_dg) != 24);
+
+	if (VMCI_DG_SIZE(dg) > VMCI_MAX_DG_SIZE) {
+		pr_devel("Payload (size=%llu bytes) too big to " \
+			 "send.", (unsigned long long) dg->payloadSize);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	retval = vmci_route(&dg->src, &dg->dst, fromGuest, &route);
+	if (retval < VMCI_SUCCESS) {
+		pr_devel("Failed to route datagram (src=0x%x, dst=0x%x, " \
+			 "err=%d).", dg->src.context, dg->dst.context,
+			 retval);
+		return retval;
+	}
+
+	if (VMCI_ROUTE_AS_HOST == route) {
+		if (VMCI_INVALID_ID == contextID)
+			contextID = VMCI_HOST_CONTEXT_ID;
+		return dg_dispatch_as_host(contextID, dg);
+	}
+
+	if (VMCI_ROUTE_AS_GUEST == route)
+		return dg_dispatch_as_guest(dg);
+
+	pr_warn("Unknown route (%d) for datagram.", route);
+	return VMCI_ERROR_DST_UNREACHABLE;
+}
+
+/*
+ * Invoke the handler for the given datagram.  This is intended to be
+ * called only when acting as a guest and receiving a datagram from the
+ * virtual device.
+ */
+int vmci_dg_invoke_guest_handler(struct vmci_dg *dg)
+{
+	int retval;
+	struct vmci_resource *resource;
+	struct datagram_entry *dstEntry;
+
+	ASSERT(dg);
+
+	resource = vmci_resource_get(dg->dst, VMCI_RESOURCE_TYPE_DATAGRAM);
+	if (NULL == resource) {
+		pr_devel("destination (handle=0x%x:0x%x) doesn't exist.",
+			 dg->dst.context, dg->dst.resource);
+		return VMCI_ERROR_NO_HANDLE;
+	}
+
+	dstEntry =
+		container_of(resource, struct datagram_entry, resource);
+	if (dstEntry->runDelayed) {
+		struct delayed_datagram_info *dgInfo;
+
+		dgInfo =
+			kmalloc(sizeof *dgInfo + (size_t) dg->payloadSize,
+				GFP_ATOMIC);
+		if (NULL == dgInfo) {
+			vmci_resource_release(resource);
+			retval = VMCI_ERROR_NO_MEM;
+			goto exit;
+		}
+
+		dgInfo->inDGHostQueue = false;
+		dgInfo->entry = dstEntry;
+		memcpy(&dgInfo->msg, dg, VMCI_DG_SIZE(dg));
+
+		retval =
+			vmci_drv_schedule_delayed_work(dg_delayed_dispatch_cb,
+						       dgInfo);
+		if (retval < VMCI_SUCCESS) {
+			pr_warn("Failed to schedule delayed work for " \
+				"datagram (result=%d).", retval);
+			kfree(dgInfo);
+			vmci_resource_release(resource);
+			dgInfo = NULL;
+			goto exit;
+		}
+	} else {
+		dstEntry->recvCB(dstEntry->clientData, dg);
+		vmci_resource_release(resource);
+		retval = VMCI_SUCCESS;
+	}
+
+exit:
+	return retval;
+}
+
+/**
+ * VMCIDatagram_CreateHndPriv() - Create host context datagram endpoint
+ * @resourceID:	The resource ID.
+ * @flags:	Datagram Flags.
+ * @privFlags:	Privilege Flags.
+ * @recvCB:	Callback when receiving datagrams.
+ * @clientData:	Opaque pointer passed to @recvCB with each datagram.
+ * @outHandle:	vmci_handle that is populated as a result of this function.
+ *
+ * Creates a host context datagram endpoint and returns a handle to it.
+ */
+int VMCIDatagram_CreateHndPriv(uint32_t resourceID,
+			       uint32_t flags,
+			       uint32_t privFlags,
+			       VMCIDatagramRecvCB recvCB,
+			       void *clientData,
+			       struct vmci_handle *outHandle)
+{
+	if (outHandle == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (recvCB == NULL) {
+		pr_devel("Client callback needed when creating " \
+			 "datagram.");
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	if (privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	return dg_create_handle(resourceID, flags, privFlags, recvCB,
+				clientData, outHandle);
+}
+EXPORT_SYMBOL(VMCIDatagram_CreateHndPriv);
+
+/**
+ * VMCIDatagram_CreateHnd() - Create host context datagram endpoint
+ * @resourceID:	Resource ID.
+ * @flags:	Datagram Flags.
+ * @recvCB:	Callback when receiving datagrams.
+ * @clientData:	Opaque pointer passed to @recvCB with each datagram.
+ * @outHandle:	vmci_handle that is populated as a result of this function.
+ *
+ * Creates a host context datagram endpoint and returns a handle to
+ * it.  Same as VMCIDatagram_CreateHndPriv without the privilege
+ * flags argument.
+ */
+int VMCIDatagram_CreateHnd(uint32_t resourceID,
+			   uint32_t flags,
+			   VMCIDatagramRecvCB recvCB,
+			   void *clientData,
+			   struct vmci_handle *outHandle)
+{
+	return VMCIDatagram_CreateHndPriv(resourceID, flags,
+					  VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS,
+					  recvCB, clientData, outHandle);
+}
+EXPORT_SYMBOL(VMCIDatagram_CreateHnd);
+
+/**
+ * VMCIDatagram_DestroyHnd() - Destroys datagram handle
+ * @handle:	vmci_handle to be destroyed and reaped.
+ *
+ * Use this function to destroy any datagram handles created by
+ * VMCIDatagram_CreateHnd{,Priv} functions.
+ */
+int VMCIDatagram_DestroyHnd(struct vmci_handle handle)
+{
+	struct datagram_entry *entry;
+	struct vmci_resource *resource;
+
+	resource = vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DATAGRAM);
+	if (resource == NULL) {
+		pr_devel("Failed to destroy datagram (handle=0x%x:0x%x)" \
+			 ".", handle.context, handle.resource);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	entry = container_of(resource, struct datagram_entry, resource);
+	vmci_resource_remove(handle, VMCI_RESOURCE_TYPE_DATAGRAM);
+
+	/*
+	 * We now wait on the destroyEvent and release the reference we got
+	 * above.
+	 */
+	vmci_drv_wait_on_event_intr(&entry->destroyEvent, dg_release_cb,
+				    entry);
+	kfree(entry);
+
+	return VMCI_SUCCESS;
+}
+EXPORT_SYMBOL(VMCIDatagram_DestroyHnd);
+
+/**
+ * VMCIDatagram_Send() - Send a datagram
+ * @msg:	The datagram to send.
+ *
+ * Sends the provided datagram on its merry way.
+ */
+int VMCIDatagram_Send(struct vmci_dg *msg)
+{
+	if (msg == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	return vmci_dg_dispatch(VMCI_INVALID_ID, msg, false);
+}
+EXPORT_SYMBOL(VMCIDatagram_Send);
diff --git a/drivers/misc/vmw_vmci/vmci_datagram.h b/drivers/misc/vmw_vmci/vmci_datagram.h
new file mode 100644
index 0000000..e5e54c2
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_datagram.h
@@ -0,0 +1,56 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_DATAGRAM_H_
+#define _VMCI_DATAGRAM_H_
+
+#include "vmci_context.h"
+
+#define VMCI_MAX_DELAYED_DG_HOST_QUEUE_SIZE 256
+
+/*
+ * The struct vmci_dg_queue_entry is a queue header for the in-kernel VMCI
+ * datagram queues. It is allocated in non-paged memory, as the
+ * content is accessed while holding a spinlock. The pending datagram
+ * itself may be allocated from paged memory. We shadow the size of
+ * the datagram in the non-paged queue entry as this size is used
+ * while holding the same spinlock as above.
+ */
+struct vmci_dg_queue_entry {
+	struct list_head listItem;	/* For queuing. */
+	size_t dgSize;		/* Size of datagram. */
+	struct vmci_dg *dg;	/* Pending datagram. */
+};
+
+/* VMCIDatagramSendRecvInfo */
+struct vmci_dg_snd_rcv_info {
+	uint64_t addr;
+	uint32_t len;
+	int32_t result;
+};
+
+/* Init functions. */
+int vmci_dg_init(void);
+
+/* Datagram API for non-public use. */
+int vmci_dg_dispatch(uint32_t contextID, struct vmci_dg *dg,
+		     bool fromGuest);
+int vmci_dg_invoke_guest_handler(struct vmci_dg *dg);
+
+#endif /* _VMCI_DATAGRAM_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread
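
For readers new to the interface, here is a minimal usage sketch of the
exported datagram calls above, written against the declarations in this
patch.  It is illustrative only: the endpoint is a hypothetical host-side
client, and the callback body, peer IDs and payload size are assumptions,
not part of the patch.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/vmw_vmci_api.h>
#include <linux/vmw_vmci_defs.h>

/* Invoked for every datagram addressed to the handle created below. */
static int example_recv_cb(void *clientData, struct vmci_dg *msg)
{
	pr_info("received %llu payload bytes from 0x%x:0x%x\n",
		(unsigned long long)msg->payloadSize,
		msg->src.context, msg->src.resource);
	return 0;
}

/* Create an endpoint, send one datagram to peer_cid:peer_rid, clean up. */
static int example_datagram_roundtrip(uint32_t peer_cid, uint32_t peer_rid)
{
	struct vmci_handle handle;
	struct vmci_dg *dg;
	size_t payload_size = 4;
	int result;

	result = VMCIDatagram_CreateHnd(VMCI_INVALID_ID, 0,
					example_recv_cb, NULL, &handle);
	if (result != VMCI_SUCCESS)
		return result;

	dg = kzalloc(VMCI_DG_HEADERSIZE + payload_size, GFP_KERNEL);
	if (dg == NULL) {
		VMCIDatagram_DestroyHnd(handle);
		return VMCI_ERROR_NO_MEM;
	}

	dg->dst = vmci_make_handle(peer_cid, peer_rid);
	dg->src = handle;
	dg->payloadSize = payload_size;

	/* On success the return value is the number of bytes sent. */
	result = VMCIDatagram_Send(dg);

	kfree(dg);
	VMCIDatagram_DestroyHnd(handle);

	return result < VMCI_SUCCESS ? result : VMCI_SUCCESS;
}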

* [vmw_vmci 03/11] Apply VMCI doorbell code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Doorbell code allows for notifications between host and guest.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_doorbell.c |  751 +++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_doorbell.h |   57 +++
 2 files changed, 808 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.h

diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c
new file mode 100644
index 0000000..389ba4c
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_doorbell.c
@@ -0,0 +1,751 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_resource.h"
+#include "vmci_route.h"
+
+#define VMCI_DOORBELL_INDEX_TABLE_SIZE (1 << 6)
+#define VMCI_DOORBELL_HASH(_idx)				\
+	vmci_hash_calc((_idx), VMCI_DOORBELL_INDEX_TABLE_SIZE)
+
+/*
+ * DoorbellEntry describes a doorbell notification handle allocated by the
+ * host.
+ */
+struct dbell_entry {
+	struct vmci_resource resource;
+	uint32_t idx;
+	struct list_head idxListItem;
+	uint32_t privFlags;
+	bool runDelayed;
+	VMCICallback notifyCB;
+	void *clientData;
+	wait_queue_head_t destroyEvent;
+	atomic_t active;	/* Only used by guest personality */
+};
+
+/* The VMCI index table keeps track of currently registered doorbells. */
+static struct dbell_index_table {
+	spinlock_t lock;
+	struct list_head entries[VMCI_DOORBELL_INDEX_TABLE_SIZE];
+} vmciDoorbellIT;
+
+/*
+ * The maxNotifyIdx is one larger than the currently known bitmap index in
+ * use, and is used to determine how much of the bitmap needs to be scanned.
+ */
+static uint32_t maxNotifyIdx;
+
+/*
+ * The notifyIdxCount is used for determining whether there are free entries
+ * within the bitmap (if notifyIdxCount + 1 < maxNotifyIdx).
+ */
+static uint32_t notifyIdxCount;
+
+/*
+ * The lastNotifyIdxReserved is used to track the last index handed out - in
+ * the case where multiple handles share a notification index, we hand out
+ * indexes round robin based on lastNotifyIdxReserved.
+ */
+static uint32_t lastNotifyIdxReserved;
+
+/* This is a one-entry cache used by the index allocation. */
+static uint32_t lastNotifyIdxReleased = PAGE_SIZE;
+
+/*
+ * General init code.
+ */
+int __init vmci_dbell_init(void)
+{
+	uint32_t bucket;
+
+	for (bucket = 0; bucket < ARRAY_SIZE(vmciDoorbellIT.entries); ++bucket)
+		INIT_LIST_HEAD(&vmciDoorbellIT.entries[bucket]);
+
+	spin_lock_init(&vmciDoorbellIT.lock);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Callback to free the doorbell entry structure when the resource is no
+ * longer used.  The entry is freed in VMCIDoorbell_Destroy(), which is
+ * waiting on the signal that gets fired here.
+ */
+static void dbell_free_cb(void *clientData)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)clientData;
+	ASSERT(entry);
+	wake_up(&entry->destroyEvent);
+}
+
+static int dbell_release_cb(void *clientData)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)clientData;
+	ASSERT(entry);
+	vmci_resource_release(&entry->resource);
+	return 0;
+}
+
+/*
+ * Utility function that retrieves the privilege flags associated
+ * with a given doorbell handle. For guest endpoints, the
+ * privileges are determined by the context ID, but for host
+ * endpoints privileges are associated with the complete
+ * handle. Hypervisor endpoints are not yet supported.
+ */
+int vmci_dbell_get_priv_flags(struct vmci_handle handle,
+			      uint32_t *privFlags)
+{
+	if (privFlags == NULL || handle.context == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (handle.context == VMCI_HOST_CONTEXT_ID) {
+		struct dbell_entry *entry;
+		struct vmci_resource *resource;
+
+		resource =
+			vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+		if (resource == NULL)
+			return VMCI_ERROR_NOT_FOUND;
+
+		entry =
+			container_of(resource, struct dbell_entry,
+				     resource);
+		*privFlags = entry->privFlags;
+		vmci_resource_release(resource);
+	} else if (handle.context == VMCI_HYPERVISOR_CONTEXT_ID) {
+		/*
+		 * Hypervisor endpoints for notifications are not
+		 * supported (yet).
+		 */
+		return VMCI_ERROR_INVALID_ARGS;
+	} else {
+		*privFlags = VMCIContext_GetPrivFlags(handle.context);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Find doorbell entry by bitmap index.
+ */
+static struct dbell_entry *dbell_index_table_find(uint32_t idx)
+{
+	uint32_t bucket = VMCI_DOORBELL_HASH(idx);
+	struct dbell_entry *cur;
+
+	ASSERT(vmci_guest_code_active());
+
+	list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket], idxListItem) {
+		if (idx == cur->idx)
+			return cur;
+	}
+
+	return NULL;
+}
+
+/*
+ * Add the given entry to the index table.  This will hold() the entry's
+ * resource so that the entry is not deleted before it is removed from the
+ * table.
+ */
+static void dbell_index_table_add(struct dbell_entry *entry)
+{
+	uint32_t bucket;
+	uint32_t newNotifyIdx;
+
+	ASSERT(entry);
+	ASSERT(vmci_guest_code_active());
+
+	vmci_resource_hold(&entry->resource);
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	/*
+	 * Below we try to allocate an index in the notification
+	 * bitmap with "not too much" sharing between resources. If we
+	 * use less than the full bitmap, we either add to the end if
+	 * there are no unused flags within the currently used area,
+	 * or we search for unused ones. If we use the full bitmap, we
+	 * allocate the index round robin.
+	 */
+	if (maxNotifyIdx < PAGE_SIZE || notifyIdxCount < PAGE_SIZE) {
+		if (lastNotifyIdxReleased < maxNotifyIdx &&
+		    !dbell_index_table_find(lastNotifyIdxReleased)) {
+			newNotifyIdx = lastNotifyIdxReleased;
+			lastNotifyIdxReleased = PAGE_SIZE;
+		} else {
+			bool reused = false;
+			newNotifyIdx = lastNotifyIdxReserved;
+			if (notifyIdxCount + 1 < maxNotifyIdx) {
+				do {
+					if (!dbell_index_table_find
+					    (newNotifyIdx)) {
+						reused = true;
+						break;
+					}
+					newNotifyIdx = (newNotifyIdx + 1) %
+						maxNotifyIdx;
+				} while (newNotifyIdx != lastNotifyIdxReleased);
+			}
+			if (!reused) {
+				newNotifyIdx = maxNotifyIdx;
+				maxNotifyIdx++;
+			}
+		}
+	} else {
+		newNotifyIdx = (lastNotifyIdxReserved + 1) % PAGE_SIZE;
+	}
+
+	lastNotifyIdxReserved = newNotifyIdx;
+	notifyIdxCount++;
+
+	entry->idx = newNotifyIdx;
+	bucket = VMCI_DOORBELL_HASH(entry->idx);
+	list_add(&entry->idxListItem, &vmciDoorbellIT.entries[bucket]);
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Remove the given entry from the index table.  This will release() the
+ * entry's resource.
+ */
+static void dbell_index_table_remove(struct dbell_entry *entry)
+{
+	ASSERT(entry);
+	ASSERT(vmci_guest_code_active());
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	list_del(&entry->idxListItem);
+
+	notifyIdxCount--;
+	if (entry->idx == maxNotifyIdx - 1) {
+		/*
+		 * If we delete an entry with the maximum known
+		 * notification index, we take the opportunity to
+		 * prune the current max. As there might be other
+		 * unused indices immediately below, we lower the
+		 * maximum until we hit an index in use.
+		 */
+		while (maxNotifyIdx > 0 &&
+		       !dbell_index_table_find(maxNotifyIdx - 1)) {
+			maxNotifyIdx--;
+		}
+	}
+
+	lastNotifyIdxReleased = entry->idx;
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+
+	vmci_resource_release(&entry->resource);
+}
+
+/*
+ * Creates a link between the given doorbell handle and the given
+ * index in the bitmap in the device backend. A notification state
+ * is created in hypervisor.
+ */
+static int dbell_link(struct vmci_handle handle,
+		      uint32_t notifyIdx)
+{
+	struct vmci_doorbell_link_msg linkMsg;
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+	ASSERT(vmci_guest_code_active());
+
+	linkMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					   VMCI_DOORBELL_LINK);
+	linkMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	linkMsg.hdr.payloadSize = sizeof linkMsg - VMCI_DG_HEADERSIZE;
+	linkMsg.handle = handle;
+	linkMsg.notifyIdx = notifyIdx;
+
+	return vmci_send_dg((struct vmci_dg *)&linkMsg);
+}
+
+/*
+ * Unlinks the given doorbell handle from an index in the bitmap in
+ * the device backend. The notification state is destroyed in the
+ * hypervisor.
+ */
+static int dbell_unlink(struct vmci_handle handle)
+{
+	struct vmci_doorbell_unlink_msg unlinkMsg;
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+	ASSERT(vmci_guest_code_active());
+
+	unlinkMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_DOORBELL_UNLINK);
+	unlinkMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	unlinkMsg.hdr.payloadSize = sizeof unlinkMsg - VMCI_DG_HEADERSIZE;
+	unlinkMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&unlinkMsg);
+}
+
+/*
+ * Notify another guest or the host.  We send a datagram down to the
+ * host via the hypervisor with the notification info.
+ */
+static int dbell_notify_as_guest(struct vmci_handle handle,
+				 uint32_t privFlags)
+{
+	struct vmci_doorbell_ntfy_msg notifyMsg;
+
+	ASSERT(vmci_guest_code_active());
+
+	notifyMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_DOORBELL_NOTIFY);
+	notifyMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	notifyMsg.hdr.payloadSize = sizeof notifyMsg - VMCI_DG_HEADERSIZE;
+	notifyMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&notifyMsg);
+}
+
+/*
+ * Calls the specified callback in a delayed context.
+ */
+static void dbell_delayed_dispatch_cb(void *data)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)data;
+
+	ASSERT(data);
+
+	entry->notifyCB(entry->clientData);
+	vmci_resource_release(&entry->resource);
+}
+
+/*
+ * Dispatches a doorbell notification to the host context.
+ */
+int vmci_dbell_host_context_notify(uint32_t srcCID,
+				   struct vmci_handle handle)
+{
+	struct dbell_entry *entry;
+	struct vmci_resource *resource;
+	int result;
+
+	ASSERT(vmci_host_code_active());
+
+	if (VMCI_HANDLE_INVALID(handle)) {
+		pr_devel("Notifying an invalid doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	resource = vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+	if (resource == NULL) {
+		pr_devel("Notifying an unknown doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+	entry = container_of(resource, struct dbell_entry, resource);
+
+	if (entry->runDelayed) {
+		result = vmci_drv_schedule_delayed_work(
+			dbell_delayed_dispatch_cb,
+			entry);
+
+		if (result < VMCI_SUCCESS) {
+			/*
+			 * If we failed to schedule the delayed work,
+			 * we need to release the resource
+			 * immediately. Otherwise, the resource will
+			 * be released once the delayed callback has
+			 * been completed.
+			 */
+			pr_devel("Failed to schedule delayed doorbell " \
+				 "notification (result=%d).", result);
+			vmci_resource_release(resource);
+		}
+	} else {
+		entry->notifyCB(entry->clientData);
+		vmci_resource_release(resource);
+		result = VMCI_SUCCESS;
+	}
+	return result;
+}
+
+/*
+ * When a guest leaves hibernation, the device driver state is out of sync
+ * with the device state, since the driver state has doorbells registered
+ * that aren't known to the device.  This function takes care of
+ * reregistering any doorbells. In case an error occurs during
+ * reregistration (this is highly unlikely since 1) it succeeded the first
+ * time 2) the device driver is the only source of doorbell registrations),
+ * we simply log the error.  The doorbell can still be destroyed using
+ * VMCIDoorbell_Destroy.
+ */
+void vmci_dbell_hibernate(bool enterHibernate)
+{
+	uint32_t bucket;
+	struct dbell_entry *cur;
+
+	if (!vmci_guest_code_active() || enterHibernate)
+		return;
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	for (bucket = 0; bucket < ARRAY_SIZE(vmciDoorbellIT.entries);
+	     bucket++) {
+		list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket],
+				    idxListItem) {
+			int result;
+			struct vmci_handle h;
+
+			h = vmci_resource_handle(&cur->resource);
+			result = dbell_link(h, cur->idx);
+			if (result != VMCI_SUCCESS
+			    && result != VMCI_ERROR_DUPLICATE_ENTRY) {
+				pr_warn("Failed to reregister doorbell " \
+					"(handle=0x%x:0x%x) to index " \
+					"(error=%d).", h.context, h.resource,
+					result);
+			}
+		}
+	}
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Register the notification bitmap with the host.
+ */
+bool vmci_dbell_register_notification_bitmap(uint32_t bitmapPPN)
+{
+	int result;
+	struct vmci_notify_bm_set_msg bitmapSetMsg;
+
+	/*
+	 * Do not ASSERT() on the guest device here.  This function
+	 * can get called during device initialization, so the
+	 * ASSERT() will fail even though the device is (almost) up.
+	 */
+	bitmapSetMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+						VMCI_SET_NOTIFY_BITMAP);
+	bitmapSetMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	bitmapSetMsg.hdr.payloadSize = sizeof bitmapSetMsg - VMCI_DG_HEADERSIZE;
+	bitmapSetMsg.bitmapPPN = bitmapPPN;
+
+	result = vmci_send_dg((struct vmci_dg *)&bitmapSetMsg);
+	if (result != VMCI_SUCCESS) {
+		pr_devel("Failed to register (PPN=%u) as " \
+			 "notification bitmap (error=%d).",
+			 bitmapPPN, result);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Executes or schedules the handlers for a given notify index.
+ */
+static void dbell_fire_entries(uint32_t notifyIdx)
+{
+	uint32_t bucket = VMCI_DOORBELL_HASH(notifyIdx);
+	struct dbell_entry *cur;
+
+	ASSERT(vmci_guest_code_active());
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket], idxListItem) {
+		if (cur->idx == notifyIdx && atomic_read(&cur->active) == 1) {
+			ASSERT(cur->notifyCB);
+			if (cur->runDelayed) {
+				int err;
+
+				vmci_resource_hold(&cur->resource);
+				err =
+					vmci_drv_schedule_delayed_work
+					(dbell_delayed_dispatch_cb, cur);
+				if (err != VMCI_SUCCESS) {
+					vmci_resource_release(&cur->resource);
+					goto out;
+				}
+			} else {
+				cur->notifyCB(cur->clientData);
+			}
+		}
+	}
+
+out:
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Scans the notification bitmap, collects pending notifications,
+ * resets the bitmap and invokes appropriate callbacks.
+ */
+void vmci_dbell_scan_notification_entries(uint8_t *bitmap)
+{
+	uint32_t idx;
+
+	ASSERT(bitmap);
+	ASSERT(vmci_guest_code_active());
+
+	for (idx = 0; idx < maxNotifyIdx; idx++) {
+		if (bitmap[idx] & 0x1) {
+			bitmap[idx] &= ~1;
+			dbell_fire_entries(idx);
+		}
+	}
+}
+
+/**
+ * VMCIDoorbell_Create() - Creates a doorbell
+ * @handle:	A handle used to track the resource.  Can be invalid.
+ * @flags:	Flag that determines context of callback.
+ * @privFlags:	Privileges flags.
+ * @notifyCB:	The callback to be invoked when the doorbell fires.
+ * @clientData:	A parameter to be passed to the callback.
+ *
+ * Creates a doorbell with the given callback. If the handle is
+ * VMCI_INVALID_HANDLE, a free handle will be assigned, if
+ * possible. The callback can be run immediately (potentially with
+ * locks held - the default) or delayed (in a kernel thread) by
+ * specifying the flag VMCI_FLAG_DELAYED_CB. If delayed execution
+ * is selected, a given callback may not be run if the kernel is
+ * unable to allocate memory for the delayed execution (highly
+ * unlikely).
+ */
+int VMCIDoorbell_Create(struct vmci_handle *handle,
+			uint32_t flags,
+			uint32_t privFlags,
+			VMCICallback notifyCB,
+			void *clientData)
+{
+	struct dbell_entry *entry;
+	struct vmci_handle newHandle;
+	int result;
+
+	if (!handle || !notifyCB || flags & ~VMCI_FLAG_DELAYED_CB ||
+	    privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	entry = kmalloc(sizeof *entry, GFP_KERNEL);
+	if (entry == NULL) {
+		pr_warn("Failed allocating memory for doorbell entry.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	if (VMCI_HANDLE_INVALID(*handle)) {
+		uint32_t contextID = VMCI_GetContextID();
+		uint32_t resourceID = vmci_resource_get_id(contextID);
+		if (resourceID == VMCI_INVALID_ID) {
+			result = VMCI_ERROR_NO_HANDLE;
+			goto freeMem;
+		}
+		newHandle = vmci_make_handle(contextID, resourceID);
+	} else {
+		bool validContext = false;
+
+		/*
+		 * Validate the handle.  We must do both of the checks below
+		 * because we can be acting as both a host and a guest at the
+		 * same time. We always allow the host context ID, since the
+		 * host functionality is in practice always there with the
+		 * unified driver.
+		 */
+		if (VMCI_HOST_CONTEXT_ID == handle->context ||
+		    (vmci_guest_code_active() &&
+		     VMCI_GetContextID() == handle->context))
+			validContext = true;
+
+		if (!validContext || VMCI_INVALID_ID == handle->resource) {
+			pr_devel("Invalid argument (handle=0x%x:0x%x).",
+				 handle->context, handle->resource);
+			result = VMCI_ERROR_INVALID_ARGS;
+			goto freeMem;
+		}
+
+		newHandle = *handle;
+	}
+
+	entry->idx = 0;
+	INIT_LIST_HEAD(&entry->idxListItem);
+	entry->privFlags = privFlags;
+	entry->runDelayed = (flags & VMCI_FLAG_DELAYED_CB) ? true : false;
+	entry->notifyCB = notifyCB;
+	entry->clientData = clientData;
+	atomic_set(&entry->active, 0);
+	init_waitqueue_head(&entry->destroyEvent);
+
+	result =
+		vmci_resource_add(&entry->resource, VMCI_RESOURCE_TYPE_DOORBELL,
+				  newHandle, dbell_free_cb, entry);
+	if (result != VMCI_SUCCESS) {
+		pr_warn("Failed to add new resource (handle=0x%x:0x%x).",
+			newHandle.context, newHandle.resource);
+		if (result == VMCI_ERROR_DUPLICATE_ENTRY)
+			result = VMCI_ERROR_ALREADY_EXISTS;
+
+		goto freeMem;
+	}
+
+	if (vmci_guest_code_active()) {
+		dbell_index_table_add(entry);
+		result = dbell_link(newHandle, entry->idx);
+		if (VMCI_SUCCESS != result)
+			goto destroyResource;
+
+		atomic_set(&entry->active, 1);
+	}
+
+	if (VMCI_HANDLE_INVALID(*handle))
+		*handle = newHandle;
+
+	return result;
+
+destroyResource:
+	dbell_index_table_remove(entry);
+	vmci_resource_remove(newHandle, VMCI_RESOURCE_TYPE_DOORBELL);
+freeMem:
+	kfree(entry);
+	return result;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Create);
+
+/**
+ * VMCIDoorbell_Destroy() - Destroy a doorbell.
+ * @handle:	The handle tracking the resource.
+ *
+ * Destroys a doorbell previously created with VMCIDoorbell_Create. This
+ * operation may block waiting for a callback to finish.
+ */
+int VMCIDoorbell_Destroy(struct vmci_handle handle)
+{
+	struct dbell_entry *entry;
+	struct vmci_resource *resource;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	resource = vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+	if (resource == NULL) {
+		pr_devel("Failed to destroy doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	entry = container_of(resource, struct dbell_entry, resource);
+
+	if (vmci_guest_code_active()) {
+		int result;
+
+		dbell_index_table_remove(entry);
+
+		result = dbell_unlink(handle);
+		if (VMCI_SUCCESS != result) {
+
+			/*
+			 * The only reason this should fail would be
+			 * an inconsistency between guest and
+			 * hypervisor state, where the guest believes
+			 * it has an active registration whereas the
+			 * hypervisor doesn't. One case where this may
+			 * happen is if a doorbell is unregistered
+			 * following a hibernation at a time where the
+			 * doorbell state hasn't been restored on the
+			 * hypervisor side yet. Since the handle has
+			 * now been removed in the guest, we just
+			 * print a warning and return success.
+			 */
+			pr_devel("Unlink of doorbell (handle=0x%x:0x%x) " \
+				 "unknown by hypervisor (error=%d).",
+				 handle.context, handle.resource,
+				 result);
+		}
+	}
+
+	/*
+	 * Now remove the resource from the table.  It might still be in use
+	 * after this, in a callback or still on the delayed work queue.
+	 */
+	vmci_resource_remove(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+
+	/*
+	 * We now wait on the destroyEvent and release the reference we got
+	 * above.
+	 */
+	vmci_drv_wait_on_event_intr(&entry->destroyEvent,
+				    dbell_release_cb, entry);
+
+	/*
+	 * We know that we are now the only reference to the above entry so
+	 * can safely free it.
+	 */
+	kfree(entry);
+
+	return VMCI_SUCCESS;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Destroy);
+
+/**
+ * VMCIDoorbell_Notify() - Ring the doorbell (and hide in the bushes).
+ * @dst:	The handle identifying the doorbell resource.
+ * @privFlags:	Privilege flags.
+ *
+ * Generates a notification on the doorbell identified by the
+ * handle. For host side generation of notifications, the caller
+ * can specify what the privilege of the calling side is.
+ */
+int VMCIDoorbell_Notify(struct vmci_handle dst,
+			uint32_t privFlags)
+{
+	int retval;
+	enum vmci_route route;
+	struct vmci_handle src;
+
+	if (VMCI_HANDLE_INVALID(dst)
+	    || (privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	src = VMCI_INVALID_HANDLE;
+	retval = vmci_route(&src, &dst, false, &route);
+	if (retval < VMCI_SUCCESS)
+		return retval;
+
+	if (VMCI_ROUTE_AS_HOST == route)
+		return vmci_ctx_notify_dbell(VMCI_HOST_CONTEXT_ID,
+					     dst, privFlags);
+
+	if (VMCI_ROUTE_AS_GUEST == route)
+		return dbell_notify_as_guest(dst, privFlags);
+
+	pr_warn("Unknown route (%d) for doorbell.", route);
+	return VMCI_ERROR_DST_UNREACHABLE;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Notify);
diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.h b/drivers/misc/vmw_vmci/vmci_doorbell.h
new file mode 100644
index 0000000..f56a44b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_doorbell.h
@@ -0,0 +1,57 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef VMCI_DOORBELL_H
+#define VMCI_DOORBELL_H
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_driver.h"
+
+/*
+ * VMCINotifyResourceInfo: Used to create and destroy doorbells, and
+ * generate a notification for a doorbell or queue pair.
+ */
+struct vmci_dbell_notify_resource_info {
+	struct vmci_handle handle;
+	uint16_t resource;
+	uint16_t action;
+	int32_t result;
+};
+
+/*
+ * Structure used for checkpointing the doorbell mappings. It is
+ * written to the checkpoint as is, so changing this structure will
+ * break checkpoint compatibility.
+ */
+struct dbell_cpt_state {
+	struct vmci_handle handle;
+	uint64_t bitmapIdx;
+};
+
+int vmci_dbell_init(void);
+void vmci_dbell_hibernate(bool enterHibernation);
+
+int vmci_dbell_host_context_notify(uint32_t srcCID, struct vmci_handle handle);
+int vmci_dbell_get_priv_flags(struct vmci_handle handle, uint32_t *privFlags);
+
+bool vmci_dbell_register_notification_bitmap(uint32_t bitmapPPN);
+void vmci_dbell_scan_notification_entries(uint8_t *bitmap);
+
+#endif /* VMCI_DOORBELL_H */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread
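
A corresponding sketch for the doorbell interface may help as well.  Again,
this is illustrative only: the callback body and the choice of delayed
execution are assumptions, and the callback signature (void return, single
clientData argument) is inferred from how notifyCB is invoked in the patch.

#include <linux/kernel.h>
#include <linux/vmw_vmci_api.h>
#include <linux/vmw_vmci_defs.h>

/* Runs from a kernel thread because VMCI_FLAG_DELAYED_CB is set below. */
static void example_doorbell_cb(void *clientData)
{
	pr_info("doorbell fired, clientData=%p\n", clientData);
}

static int example_doorbell_roundtrip(void)
{
	struct vmci_handle handle = VMCI_INVALID_HANDLE;
	int result;

	/* An invalid handle asks the driver to pick a free resource ID. */
	result = VMCIDoorbell_Create(&handle, VMCI_FLAG_DELAYED_CB,
				     VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS,
				     example_doorbell_cb, NULL);
	if (result != VMCI_SUCCESS)
		return result;

	/* Ring the doorbell we just created, then tear it down. */
	result = VMCIDoorbell_Notify(handle,
				     VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS);
	VMCIDoorbell_Destroy(handle);

	return result;
}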

* [vmw_vmci 03/11] Apply VMCI doorbell code
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, Andrew Stiegmann (stieg), cschamp, gregkh

Doorbell code allows for notifications between host and guest.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_doorbell.c |  751 +++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_doorbell.h |   57 +++
 2 files changed, 808 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_doorbell.h

diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.c b/drivers/misc/vmw_vmci/vmci_doorbell.c
new file mode 100644
index 0000000..389ba4c
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_doorbell.c
@@ -0,0 +1,751 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_resource.h"
+#include "vmci_route.h"
+
+#define VMCI_DOORBELL_INDEX_TABLE_SIZE (1 << 6)
+#define VMCI_DOORBELL_HASH(_idx)				\
+	vmci_hash_calc((_idx), VMCI_DOORBELL_INDEX_TABLE_SIZE)
+
+/*
+ * DoorbellEntry describes a doorbell notification handle allocated by the
+ * host.
+ */
+struct dbell_entry {
+	struct vmci_resource resource;
+	uint32_t idx;
+	struct list_head idxListItem;
+	uint32_t privFlags;
+	bool runDelayed;
+	VMCICallback notifyCB;
+	void *clientData;
+	wait_queue_head_t destroyEvent;
+	atomic_t active;	/* Only used by guest personality */
+};
+
+/* The VMCI index table keeps track of currently registered doorbells. */
+static struct dbell_index_table {
+	spinlock_t lock;
+	struct list_head entries[VMCI_DOORBELL_INDEX_TABLE_SIZE];
+} vmciDoorbellIT;
+
+/*
+ * The maxNotifyIdx is one larger than the currently known bitmap index in
+ * use, and is used to determine how much of the bitmap needs to be scanned.
+ */
+static uint32_t maxNotifyIdx;
+
+/*
+ * The notifyIdxCount is used for determining whether there are free entries
+ * within the bitmap (if notifyIdxCount + 1 < maxNotifyIdx).
+ */
+static uint32_t notifyIdxCount;
+
+/*
+ * The lastNotifyIdxReserved is used to track the last index handed out - in
+ * the case where multiple handles share a notification index, we hand out
+ * indexes round robin based on lastNotifyIdxReserved.
+ */
+static uint32_t lastNotifyIdxReserved;
+
+/* This is a one-entry cache used by the index allocation. */
+static uint32_t lastNotifyIdxReleased = PAGE_SIZE;
+
+/*
+ * General init code.
+ */
+int __init vmci_dbell_init(void)
+{
+	uint32_t bucket;
+
+	for (bucket = 0; bucket < ARRAY_SIZE(vmciDoorbellIT.entries); ++bucket)
+		INIT_LIST_HEAD(&vmciDoorbellIT.entries[bucket]);
+
+	spin_lock_init(&vmciDoorbellIT.lock);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Callback to free the doorbell entry structure when the resource is no
+ * longer used.  The entry is freed in VMCIDoorbell_Destroy(), which is
+ * waiting on the signal that gets fired here.
+ */
+static void dbell_free_cb(void *clientData)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)clientData;
+	ASSERT(entry);
+	wake_up(&entry->destroyEvent);
+}
+
+static int dbell_release_cb(void *clientData)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)clientData;
+	ASSERT(entry);
+	vmci_resource_release(&entry->resource);
+	return 0;
+}
+
+/*
+ * Utility function that retrieves the privilege flags associated
+ * with a given doorbell handle. For guest endpoints, the
+ * privileges are determined by the context ID, but for host
+ * endpoints privileges are associated with the complete
+ * handle. Hypervisor endpoints are not yet supported.
+ */
+int vmci_dbell_get_priv_flags(struct vmci_handle handle,
+			      uint32_t *privFlags)
+{
+	if (privFlags == NULL || handle.context == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (handle.context == VMCI_HOST_CONTEXT_ID) {
+		struct dbell_entry *entry;
+		struct vmci_resource *resource;
+
+		resource =
+			vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+		if (resource == NULL)
+			return VMCI_ERROR_NOT_FOUND;
+
+		entry =
+			container_of(resource, struct dbell_entry,
+				     resource);
+		*privFlags = entry->privFlags;
+		vmci_resource_release(resource);
+	} else if (handle.context == VMCI_HYPERVISOR_CONTEXT_ID) {
+		/*
+		 * Hypervisor endpoints for notifications are not
+		 * supported (yet).
+		 */
+		return VMCI_ERROR_INVALID_ARGS;
+	} else {
+		*privFlags = VMCIContext_GetPrivFlags(handle.context);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Find doorbell entry by bitmap index.
+ */
+static struct dbell_entry *dbell_index_table_find(uint32_t idx)
+{
+	uint32_t bucket = VMCI_DOORBELL_HASH(idx);
+	struct dbell_entry *cur;
+
+	ASSERT(vmci_guest_code_active());
+
+	list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket], idxListItem) {
+		if (idx == cur->idx)
+			return cur;
+	}
+
+	return NULL;
+}
+
+/*
+ * Add the given entry to the index table.  This will hold() the entry's
+ * resource so that the entry is not deleted before it is removed from the
+ * table.
+ */
+static void dbell_index_table_add(struct dbell_entry *entry)
+{
+	uint32_t bucket;
+	uint32_t newNotifyIdx;
+
+	ASSERT(entry);
+	ASSERT(vmci_guest_code_active());
+
+	vmci_resource_hold(&entry->resource);
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	/*
+	 * Below we try to allocate an index in the notification
+	 * bitmap with "not too much" sharing between resources. If we
+	 * use less than the full bitmap, we either add to the end if
+	 * there are no unused flags within the currently used area,
+	 * or we search for unused ones. If we use the full bitmap, we
+	 * allocate the index round robin.
+	 */
+	if (maxNotifyIdx < PAGE_SIZE || notifyIdxCount < PAGE_SIZE) {
+		if (lastNotifyIdxReleased < maxNotifyIdx &&
+		    !dbell_index_table_find(lastNotifyIdxReleased)) {
+			newNotifyIdx = lastNotifyIdxReleased;
+			lastNotifyIdxReleased = PAGE_SIZE;
+		} else {
+			bool reused = false;
+			newNotifyIdx = lastNotifyIdxReserved;
+			if (notifyIdxCount + 1 < maxNotifyIdx) {
+				do {
+					if (!dbell_index_table_find
+					    (newNotifyIdx)) {
+						reused = true;
+						break;
+					}
+					newNotifyIdx = (newNotifyIdx + 1) %
+						maxNotifyIdx;
+				} while (newNotifyIdx != lastNotifyIdxReleased);
+			}
+			if (!reused) {
+				newNotifyIdx = maxNotifyIdx;
+				maxNotifyIdx++;
+			}
+		}
+	} else {
+		newNotifyIdx = (lastNotifyIdxReserved + 1) % PAGE_SIZE;
+	}
+
+	lastNotifyIdxReserved = newNotifyIdx;
+	notifyIdxCount++;
+
+	entry->idx = newNotifyIdx;
+	bucket = VMCI_DOORBELL_HASH(entry->idx);
+	list_add(&entry->idxListItem, &vmciDoorbellIT.entries[bucket]);
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Remove the given entry from the index table.  This will release() the
+ * entry's resource.
+ */
+static void dbell_index_table_remove(struct dbell_entry *entry)
+{
+	ASSERT(entry);
+	ASSERT(vmci_guest_code_active());
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	list_del(&entry->idxListItem);
+
+	notifyIdxCount--;
+	if (entry->idx == maxNotifyIdx - 1) {
+		/*
+		 * If we delete an entry with the maximum known
+		 * notification index, we take the opportunity to
+		 * prune the current max. As there might be other
+		 * unused indices immediately below, we lower the
+		 * maximum until we hit an index in use.
+		 */
+		while (maxNotifyIdx > 0 &&
+		       !dbell_index_table_find(maxNotifyIdx - 1)) {
+			maxNotifyIdx--;
+		}
+	}
+
+	lastNotifyIdxReleased = entry->idx;
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+
+	vmci_resource_release(&entry->resource);
+}
+
+/*
+ * Creates a link between the given doorbell handle and the given
+ * index in the bitmap in the device backend. A notification state
+ * is created in hypervisor.
+ */
+static int dbell_link(struct vmci_handle handle,
+		      uint32_t notifyIdx)
+{
+	struct vmci_doorbell_link_msg linkMsg;
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+	ASSERT(vmci_guest_code_active());
+
+	linkMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					   VMCI_DOORBELL_LINK);
+	linkMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	linkMsg.hdr.payloadSize = sizeof linkMsg - VMCI_DG_HEADERSIZE;
+	linkMsg.handle = handle;
+	linkMsg.notifyIdx = notifyIdx;
+
+	return vmci_send_dg((struct vmci_dg *)&linkMsg);
+}
+
+/*
+ * Unlinks the given doorbell handle from an index in the bitmap in
+ * the device backend. The notification state is destroyed in the
+ * hypervisor.
+ */
+static int dbell_unlink(struct vmci_handle handle)
+{
+	struct vmci_doorbell_unlink_msg unlinkMsg;
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+	ASSERT(vmci_guest_code_active());
+
+	unlinkMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_DOORBELL_UNLINK);
+	unlinkMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	unlinkMsg.hdr.payloadSize = sizeof unlinkMsg - VMCI_DG_HEADERSIZE;
+	unlinkMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&unlinkMsg);
+}
+
+/*
+ * Notify another guest or the host.  We send a datagram down to the
+ * host via the hypervisor with the notification info.
+ */
+static int dbell_notify_as_guest(struct vmci_handle handle,
+				 uint32_t privFlags)
+{
+	struct vmci_doorbell_ntfy_msg notifyMsg;
+
+	ASSERT(vmci_guest_code_active());
+
+	notifyMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_DOORBELL_NOTIFY);
+	notifyMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	notifyMsg.hdr.payloadSize = sizeof notifyMsg - VMCI_DG_HEADERSIZE;
+	notifyMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&notifyMsg);
+}
+
+/*
+ * Calls the specified callback in a delayed context.
+ */
+static void dbell_delayed_dispatch_cb(void *data)
+{
+	struct dbell_entry *entry = (struct dbell_entry *)data;
+
+	ASSERT(data);
+
+	entry->notifyCB(entry->clientData);
+	vmci_resource_release(&entry->resource);
+}
+
+/*
+ * Dispatches a doorbell notification to the host context.
+ */
+int vmci_dbell_host_context_notify(uint32_t srcCID,
+				   struct vmci_handle handle)
+{
+	struct dbell_entry *entry;
+	struct vmci_resource *resource;
+	int result;
+
+	ASSERT(vmci_host_code_active());
+
+	if (VMCI_HANDLE_INVALID(handle)) {
+		pr_devel("Notifying an invalid doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	resource = vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+	if (resource == NULL) {
+		pr_devel("Notifying an unknown doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+	entry = container_of(resource, struct dbell_entry, resource);
+
+	if (entry->runDelayed) {
+		result = vmci_drv_schedule_delayed_work(
+			dbell_delayed_dispatch_cb,
+			entry);
+
+		if (result < VMCI_SUCCESS) {
+			/*
+			 * If we failed to schedule the delayed work,
+			 * we need to release the resource
+			 * immediately. Otherwise, the resource will
+			 * be released once the delayed callback has
+			 * been completed.
+			 */
+			pr_devel("Failed to schedule delayed doorbell " \
+				 "notification (result=%d).", result);
+			vmci_resource_release(resource);
+		}
+	} else {
+		entry->notifyCB(entry->clientData);
+		vmci_resource_release(resource);
+		result = VMCI_SUCCESS;
+	}
+	return result;
+}
+
+/*
+ * When a guest leaves hibernation, the device driver state is out of sync
+ * with the device state, since the driver state has doorbells registered
+ * that aren't known to the device.  This function takes care of
+ * reregistering any doorbells. In case an error occurs during
+ * reregistration (this is highly unlikely since 1) it succeeded the first
+ * time 2) the device driver is the only source of doorbell registrations),
+ * we simply log the error.  The doorbell can still be destroyed using
+ * VMCIDoorbell_Destroy.
+ */
+void vmci_dbell_hibernate(bool enterHibernate)
+{
+	uint32_t bucket;
+	struct dbell_entry *cur;
+
+	if (!vmci_guest_code_active() || enterHibernate)
+		return;
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	for (bucket = 0; bucket < ARRAY_SIZE(vmciDoorbellIT.entries);
+	     bucket++) {
+		list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket],
+				    idxListItem) {
+			int result;
+			struct vmci_handle h;
+
+			h = vmci_resource_handle(&cur->resource);
+			result = dbell_link(h, cur->idx);
+			if (result != VMCI_SUCCESS
+			    && result != VMCI_ERROR_DUPLICATE_ENTRY) {
+				pr_warn("Failed to reregister doorbell " \
+					"(handle=0x%x:0x%x) to index " \
+					"(error=%d).", h.context, h.resource,
+					result);
+			}
+		}
+	}
+
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Register the notification bitmap with the host.
+ */
+bool vmci_dbell_register_notification_bitmap(uint32_t bitmapPPN)
+{
+	int result;
+	struct vmci_notify_bm_set_msg bitmapSetMsg;
+
+	/*
+	 * Do not ASSERT() on the guest device here.  This function
+	 * can get called during device initialization, so the
+	 * ASSERT() will fail even though the device is (almost) up.
+	 */
+	bitmapSetMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+						VMCI_SET_NOTIFY_BITMAP);
+	bitmapSetMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	bitmapSetMsg.hdr.payloadSize = sizeof bitmapSetMsg - VMCI_DG_HEADERSIZE;
+	bitmapSetMsg.bitmapPPN = bitmapPPN;
+
+	result = vmci_send_dg((struct vmci_dg *)&bitmapSetMsg);
+	if (result != VMCI_SUCCESS) {
+		pr_devel("Failed to register (PPN=%u) as " \
+			 "notification bitmap (error=%d).",
+			 bitmapPPN, result);
+		return false;
+	}
+	return true;
+}
+
+/*
+ * Executes or schedules the handlers for a given notify index.
+ */
+static void dbell_fire_entries(uint32_t notifyIdx)
+{
+	uint32_t bucket = VMCI_DOORBELL_HASH(notifyIdx);
+	struct dbell_entry *cur;
+
+	ASSERT(vmci_guest_code_active());
+
+	spin_lock_bh(&vmciDoorbellIT.lock);
+
+	list_for_each_entry(cur, &vmciDoorbellIT.entries[bucket], idxListItem) {
+		if (cur->idx == notifyIdx && atomic_read(&cur->active) == 1) {
+			ASSERT(cur->notifyCB);
+			if (cur->runDelayed) {
+				int err;
+
+				vmci_resource_hold(&cur->resource);
+				err =
+					vmci_drv_schedule_delayed_work
+					(dbell_delayed_dispatch_cb, cur);
+				if (err != VMCI_SUCCESS) {
+					vmci_resource_release(&cur->resource);
+					goto out;
+				}
+			} else {
+				cur->notifyCB(cur->clientData);
+			}
+		}
+	}
+
+out:
+	spin_unlock_bh(&vmciDoorbellIT.lock);
+}
+
+/*
+ * Scans the notification bitmap, collects pending notifications,
+ * resets the bitmap and invokes appropriate callbacks.
+ */
+void vmci_dbell_scan_notification_entries(uint8_t *bitmap)
+{
+	uint32_t idx;
+
+	ASSERT(bitmap);
+	ASSERT(vmci_guest_code_active());
+
+	for (idx = 0; idx < maxNotifyIdx; idx++) {
+		if (bitmap[idx] & 0x1) {
+			bitmap[idx] &= ~1;
+			dbell_fire_entries(idx);
+		}
+	}
+}
+
+/**
+ * VMCIDoorbell_Create() - Creates a doorbell
+ * @handle:	A handle used to track the resource.  Can be invalid.
+ * @flags:	Flag that determines context of callback.
+ * @privFlags:	Privileges flags.
+ * @notifyCB:	The callback to be invoked when the doorbell fires.
+ * @clientData:	A parameter to be passed to the callback.
+ *
+ * Creates a doorbell with the given callback. If the handle is
+ * VMCI_INVALID_HANDLE, a free handle will be assigned, if
+ * possible. The callback can be run immediately (potentially with
+ * locks held - the default) or delayed (in a kernel thread) by
+ * specifying the flag VMCI_FLAG_DELAYED_CB. If delayed execution
+ * is selected, a given callback may not be run if the kernel is
+ * unable to allocate memory for the delayed execution (highly
+ * unlikely).
+ */
+int VMCIDoorbell_Create(struct vmci_handle *handle,
+			uint32_t flags,
+			uint32_t privFlags,
+			VMCICallback notifyCB,
+			void *clientData)
+{
+	struct dbell_entry *entry;
+	struct vmci_handle newHandle;
+	int result;
+
+	if (!handle || !notifyCB || flags & ~VMCI_FLAG_DELAYED_CB ||
+	    privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	entry = kmalloc(sizeof *entry, GFP_KERNEL);
+	if (entry == NULL) {
+		pr_warn("Failed allocating memory for doorbell entry.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	if (VMCI_HANDLE_INVALID(*handle)) {
+		uint32_t contextID = VMCI_GetContextID();
+		uint32_t resourceID = vmci_resource_get_id(contextID);
+		if (resourceID == VMCI_INVALID_ID) {
+			result = VMCI_ERROR_NO_HANDLE;
+			goto freeMem;
+		}
+		newHandle = vmci_make_handle(contextID, resourceID);
+	} else {
+		bool validContext = false;
+
+		/*
+		 * Validate the handle.  We must do both of the checks below
+		 * because we can be acting as both a host and a guest at the
+		 * same time. We always allow the host context ID, since the
+		 * host functionality is in practice always there with the
+		 * unified driver.
+		 */
+		if (VMCI_HOST_CONTEXT_ID == handle->context ||
+		    (vmci_guest_code_active() &&
+		     VMCI_GetContextID() == handle->context))
+			validContext = true;
+
+		if (!validContext || VMCI_INVALID_ID == handle->resource) {
+			pr_devel("Invalid argument (handle=0x%x:0x%x).",
+				 handle->context, handle->resource);
+			result = VMCI_ERROR_INVALID_ARGS;
+			goto freeMem;
+		}
+
+		newHandle = *handle;
+	}
+
+	entry->idx = 0;
+	INIT_LIST_HEAD(&entry->idxListItem);
+	entry->privFlags = privFlags;
+	entry->runDelayed = (flags & VMCI_FLAG_DELAYED_CB) ? true : false;
+	entry->notifyCB = notifyCB;
+	entry->clientData = clientData;
+	atomic_set(&entry->active, 0);
+	init_waitqueue_head(&entry->destroyEvent);
+
+	result =
+		vmci_resource_add(&entry->resource, VMCI_RESOURCE_TYPE_DOORBELL,
+				  newHandle, dbell_free_cb, entry);
+	if (result != VMCI_SUCCESS) {
+		pr_warn("Failed to add new resource (handle=0x%x:0x%x).",
+			newHandle.context, newHandle.resource);
+		if (result == VMCI_ERROR_DUPLICATE_ENTRY)
+			result = VMCI_ERROR_ALREADY_EXISTS;
+
+		goto freeMem;
+	}
+
+	if (vmci_guest_code_active()) {
+		dbell_index_table_add(entry);
+		result = dbell_link(newHandle, entry->idx);
+		if (VMCI_SUCCESS != result)
+			goto destroyResource;
+
+		atomic_set(&entry->active, 1);
+	}
+
+	if (VMCI_HANDLE_INVALID(*handle))
+		*handle = newHandle;
+
+	return result;
+
+destroyResource:
+	dbell_index_table_remove(entry);
+	vmci_resource_remove(newHandle, VMCI_RESOURCE_TYPE_DOORBELL);
+freeMem:
+	kfree(entry);
+	return result;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Create);
+
+/**
+ * VMCIDoorbell_Destroy() - Destroy a doorbell.
+ * @handle:	The handle tracking the resource.
+ *
+ * Destroys a doorbell previously created with VMCIDoorbell_Create. This
+ * operation may block waiting for a callback to finish.
+ */
+int VMCIDoorbell_Destroy(struct vmci_handle handle)
+{
+	struct dbell_entry *entry;
+	struct vmci_resource *resource;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	resource = vmci_resource_get(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+	if (resource == NULL) {
+		pr_devel("Failed to destroy doorbell " \
+			 "(handle=0x%x:0x%x).", handle.context,
+			 handle.resource);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	entry = container_of(resource, struct dbell_entry, resource);
+
+	if (vmci_guest_code_active()) {
+		int result;
+
+		dbell_index_table_remove(entry);
+
+		result = dbell_unlink(handle);
+		if (VMCI_SUCCESS != result) {
+
+			/*
+			 * The only reason this should fail would be
+			 * an inconsistency between guest and
+			 * hypervisor state, where the guest believes
+			 * it has an active registration whereas the
+			 * hypervisor doesn't. One case where this may
+			 * happen is if a doorbell is unregistered
+			 * following a hibernation at a time where the
+			 * doorbell state hasn't been restored on the
+			 * hypervisor side yet. Since the handle has
+			 * now been removed in the guest, we just
+			 * print a warning and return success.
+			 */
+			pr_devel("Unlink of doorbell (handle=0x%x:0x%x) " \
+				 "unknown by hypervisor (error=%d).",
+				 handle.context, handle.resource,
+				 result);
+		}
+	}
+
+	/*
+	 * Now remove the resource from the table.  It might still be in use
+	 * after this, in a callback or still on the delayed work queue.
+	 */
+	vmci_resource_remove(handle, VMCI_RESOURCE_TYPE_DOORBELL);
+
+	/*
+	 * We now wait on the destroyEvent and release the reference we got
+	 * above.
+	 */
+	vmci_drv_wait_on_event_intr(&entry->destroyEvent,
+				    dbell_release_cb, entry);
+
+	/*
+	 * We know that we are now the only reference to the above entry so
+	 * can safely free it.
+	 */
+	kfree(entry);
+
+	return VMCI_SUCCESS;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Destroy);
+
+/**
+ * VMCIDoorbell_Notify() - Ring the doorbell (and hide in the bushes).
+ * @dst:	The handle identifying the doorbell resource.
+ * @privFlags:	Privilege flags.
+ *
+ * Generates a notification on the doorbell identified by the
+ * handle. For host side generation of notifications, the caller
+ * can specify what the privilege of the calling side is.
+ */
+int VMCIDoorbell_Notify(struct vmci_handle dst,
+			uint32_t privFlags)
+{
+	int retval;
+	enum vmci_route route;
+	struct vmci_handle src;
+
+	if (VMCI_HANDLE_INVALID(dst)
+	    || (privFlags & ~VMCI_PRIVILEGE_ALL_FLAGS))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	src = VMCI_INVALID_HANDLE;
+	retval = vmci_route(&src, &dst, false, &route);
+	if (retval < VMCI_SUCCESS)
+		return retval;
+
+	if (VMCI_ROUTE_AS_HOST == route)
+		return vmci_ctx_notify_dbell(VMCI_HOST_CONTEXT_ID,
+					     dst, privFlags);
+
+	if (VMCI_ROUTE_AS_GUEST == route)
+		return dbell_notify_as_guest(dst, privFlags);
+
+	pr_warn("Unknown route (%d) for doorbell.", route);
+	return VMCI_ERROR_DST_UNREACHABLE;
+}
+EXPORT_SYMBOL(VMCIDoorbell_Notify);
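
For reference, a minimal guest-side sketch of how a kernel client could
drive the exported doorbell API (illustrative only, not part of the
patch; my_doorbell_fired and my_client_data are placeholder names
supplied by the client):

	static void my_doorbell_fired(void *client_data)
	{
		/*
		 * Runs when a peer notifies the doorbell.  With
		 * VMCI_FLAG_DELAYED_CB it is invoked from a worker
		 * thread and may block; without the flag it can run
		 * with locks held.
		 */
	}

	static int my_client_init(void *my_client_data)
	{
		struct vmci_handle handle = VMCI_INVALID_HANDLE;
		int rv;

		/* Passing an invalid handle asks the driver to pick a free one. */
		rv = VMCIDoorbell_Create(&handle, VMCI_FLAG_DELAYED_CB,
					 VMCI_NO_PRIVILEGE_FLAGS,
					 my_doorbell_fired, my_client_data);
		if (rv < VMCI_SUCCESS)
			return rv;

		/* A peer that learns 'handle' rings it with: */
		rv = VMCIDoorbell_Notify(handle, VMCI_NO_PRIVILEGE_FLAGS);

		/* And when the client is done with the doorbell: */
		VMCIDoorbell_Destroy(handle);
		return rv;
	}
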
diff --git a/drivers/misc/vmw_vmci/vmci_doorbell.h b/drivers/misc/vmw_vmci/vmci_doorbell.h
new file mode 100644
index 0000000..f56a44b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_doorbell.h
@@ -0,0 +1,57 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef VMCI_DOORBELL_H
+#define VMCI_DOORBELL_H
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_driver.h"
+
+/*
+ * VMCINotifyResourceInfo: Used to create and destroy doorbells, and
+ * generate a notification for a doorbell or queue pair.
+ */
+struct vmci_dbell_notify_resource_info {
+	struct vmci_handle handle;
+	uint16_t resource;
+	uint16_t action;
+	int32_t result;
+};
+
+/*
+ * Structure used for checkpointing the doorbell mappings. It is
+ * written to the checkpoint as is, so changing this structure will
+ * break checkpoint compatibility.
+ */
+struct dbell_cpt_state {
+	struct vmci_handle handle;
+	uint64_t bitmapIdx;
+};
+
+int vmci_dbell_init(void);
+void vmci_dbell_hibernate(bool enterHibernation);
+
+int vmci_dbell_host_context_notify(uint32_t srcCID, struct vmci_handle handle);
+int vmci_dbell_get_priv_flags(struct vmci_handle handle, uint32_t *privFlags);
+
+bool vmci_dbell_register_notification_bitmap(uint32_t bitmapPPN);
+void vmci_dbell_scan_notification_entries(uint8_t *bitmap);
+
+#endif /* VMCI_DOORBELL_H */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 04/11] Apply VMCI driver code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

This code implements both the host and guest personalities of the
VMCI driver.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_driver.c | 2298 +++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_driver.h |   52 +
 2 files changed, 2350 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.h

diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
new file mode 100644
index 0000000..abd9384
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_driver.c
@@ -0,0 +1,2298 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/atomic.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/moduleparam.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/poll.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/version.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_handle_array.h"
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_hash_table.h"
+#include "vmci_queue_pair.h"
+#include "vmci_resource.h"
+
+#define VMCI_UTIL_NUM_RESOURCES 1
+
+enum {
+	VMCI_NOTIFY_RESOURCE_QUEUE_PAIR = 0,
+	VMCI_NOTIFY_RESOURCE_DOOR_BELL = 1,
+};
+
+enum {
+	VMCI_NOTIFY_RESOURCE_ACTION_NOTIFY = 0,
+	VMCI_NOTIFY_RESOURCE_ACTION_CREATE = 1,
+	VMCI_NOTIFY_RESOURCE_ACTION_DESTROY = 2,
+};
+
+static uint32_t ctxUpdateSubID = VMCI_INVALID_ID;
+static struct vmci_ctx *hostContext;
+static atomic_t vmContextID = { VMCI_INVALID_ID };
+
+struct vmci_delayed_work_info {
+	struct work_struct work;
+	VMCIWorkFn *workFn;
+	void *data;
+};
+
+/*
+ * VMCI driver initialization. This block can also be used to
+ * pass initial group membership etc.
+ */
+struct vmci_init_blk {
+	uint32_t cid;
+	uint32_t flags;
+};
+
+/* VMCIQueuePairAllocInfo_VMToVM */
+struct vmci_qp_alloc_info_vmvm {
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t producePageFile;	/* User VA. */
+	uint64_t consumePageFile;	/* User VA. */
+	uint64_t producePageFileSize;	/* Size of the file name array. */
+	uint64_t consumePageFileSize;	/* Size of the file name array. */
+	int32_t result;
+	uint32_t _pad;
+};
+
+/* VMCISetNotifyInfo: Used to pass notify flag's address to the host driver. */
+struct vmci_set_notify_info {
+	uint64_t notifyUVA;
+	int32_t result;
+	uint32_t _pad;
+};
+
+struct vmci_device {
+	struct mutex lock;
+
+	unsigned int ioaddr;
+	unsigned int ioaddr_size;
+	unsigned int irq;
+	unsigned int intr_type;
+	bool exclusive_vectors;
+	struct msix_entry msix_entries[VMCI_MAX_INTRS];
+
+	bool enabled;
+	spinlock_t dev_spinlock;
+	atomic_t datagrams_allowed;
+};
+
+static DEFINE_PCI_DEVICE_TABLE(vmci_ids) = {
+	{PCI_DEVICE(PCI_VENDOR_ID_VMWARE, PCI_DEVICE_ID_VMWARE_VMCI),},
+	{0},
+};
+
+static struct vmci_device vmci_dev;
+
+/* These options are false (0) by default */
+static bool vmci_disable_host;
+static bool vmci_disable_guest;
+static bool vmci_disable_msi;
+static bool vmci_disable_msix;
+
+/*
+ * Allocate a buffer for incoming datagrams globally to avoid repeated
+ * allocation in the interrupt handler's atomic context.
+ */
+static uint8_t *data_buffer;
+static uint32_t data_buffer_size = VMCI_MAX_DG_SIZE;
+
+/*
+ * If the VMCI hardware supports the notification bitmap, we allocate
+ * and register a page with the device.
+ */
+static uint8_t *notification_bitmap;
+
+/*
+ * Per-instance host state
+ */
+struct vmci_linux {
+	struct vmci_ctx *context;
+	int userVersion;
+	enum vmci_obj_type ctType;
+	struct mutex lock;
+};
+
+/*
+ * Static driver state.
+ */
+struct vmci_linux_state {
+	struct miscdevice misc;
+	char buf[1024];
+	atomic_t activeContexts;
+};
+
+/*
+ * Types and variables shared by both host and guest personalities
+ */
+static bool guestDeviceInit;
+static atomic_t guestDeviceActive;
+static bool hostDeviceInit;
+
+static void drv_delayed_work_cb(struct work_struct *work)
+{
+	struct vmci_delayed_work_info *delayedWorkInfo;
+
+	delayedWorkInfo = container_of(work, struct vmci_delayed_work_info,
+				       work);
+	ASSERT(delayedWorkInfo);
+	ASSERT(delayedWorkInfo->workFn);
+
+	delayedWorkInfo->workFn(delayedWorkInfo->data);
+
+	kfree(delayedWorkInfo);
+}
+
+/*
+ * Schedule the specified callback.
+ */
+int vmci_drv_schedule_delayed_work(VMCIWorkFn *workFn,
+				   void *data)
+{
+	struct vmci_delayed_work_info *delayedWorkInfo;
+
+	ASSERT(workFn);
+
+	delayedWorkInfo = kmalloc(sizeof *delayedWorkInfo, GFP_ATOMIC);
+	if (!delayedWorkInfo)
+		return VMCI_ERROR_NO_MEM;
+
+	delayedWorkInfo->workFn = workFn;
+	delayedWorkInfo->data = data;
+
+	INIT_WORK(&delayedWorkInfo->work, drv_delayed_work_cb);
+
+	schedule_work(&delayedWorkInfo->work);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * True if the wait was interrupted by a signal, false otherwise.
+ */
+bool vmci_drv_wait_on_event_intr(wait_queue_head_t *event,
+				 VMCIEventReleaseCB releaseCB,
+				 void *clientData)
+{
+	DECLARE_WAITQUEUE(wait, current);
+
+	if (event == NULL || releaseCB == NULL)
+		return false;
+
+	add_wait_queue(event, &wait);
+	current->state = TASK_INTERRUPTIBLE;
+
+	/*
+	 * Release the lock or other primitive that makes it possible for us to
+	 * put the current thread on the wait queue without missing the signal.
+	 * I.e. on Linux we need to put ourselves on the wait queue and set our
+	 * state to TASK_INTERRUPTIBLE without another thread signalling us.
+	 * The releaseCB is used to synchronize this.
+	 */
+	releaseCB(clientData);
+
+	schedule();
+	current->state = TASK_RUNNING;
+	remove_wait_queue(event, &wait);
+
+	return signal_pending(current);
+}
+
+/*
+ * Cleans up the host specific components of the VMCI module.
+ */
+static void drv_host_cleanup(void)
+{
+	vmci_ctx_release_ctx(hostContext);
+	vmci_qp_broker_exit();
+}
+
+/*
+ * Checks whether the VMCI device is enabled.
+ */
+static bool drv_device_enabled(void)
+{
+	return vmci_guest_code_active()
+		|| vmci_host_code_active();
+}
+
+/*
+ * Gets called with the new context id if updated or resumed.
+ * Context id.
+ */
+static void drv_util_cid_update(uint32_t subID,
+				struct vmci_event_data *eventData,
+				void *clientData)
+{
+	struct vmci_event_payld_ctx *evPayload =
+		vmci_event_data_payload(eventData);
+
+	if (subID != ctxUpdateSubID) {
+		pr_devel("Invalid subscriber (ID=0x%x).", subID);
+		return;
+	}
+
+	if (eventData == NULL || evPayload->contextID == VMCI_INVALID_ID) {
+		pr_devel("Invalid event data.");
+		return;
+	}
+
+	pr_devel("Updating context from (ID=0x%x) to (ID=0x%x) on event " \
+		 "(type=%d).", atomic_read(&vmContextID), evPayload->contextID,
+		 eventData->event);
+
+	atomic_set(&vmContextID, evPayload->contextID);
+}
+
+/*
+ * Subscribe to context id update event.
+ */
+static void __devinit drv_util_init(void)
+{
+	/*
+	 * We subscribe to the VMCI_EVENT_CTX_ID_UPDATE here so we can
+	 * update the internal context id when needed.
+	 */
+	if (VMCIEvent_Subscribe
+	    (VMCI_EVENT_CTX_ID_UPDATE, VMCI_FLAG_EVENT_NONE,
+	     drv_util_cid_update, NULL, &ctxUpdateSubID) < VMCI_SUCCESS) {
+		pr_warn("Failed to subscribe to event (type=%d).",
+			VMCI_EVENT_CTX_ID_UPDATE);
+	}
+}
+
+static void vmci_util_exit(void)
+{
+	if (VMCIEvent_Unsubscribe(ctxUpdateSubID) < VMCI_SUCCESS) {
+		pr_warn("Failed to unsubscribe from event (type=%d) with " \
+			"subscriber (ID=0x%x).", VMCI_EVENT_CTX_ID_UPDATE,
+			ctxUpdateSubID);
+	}
+}
+
+/*
+ * Verify that the host supports the hypercalls we need. If it does not,
+ * try to find fallback hypercalls and use those instead.  Returns
+ * true if required hypercalls (or fallback hypercalls) are
+ * supported by the host, false otherwise.
+ */
+static bool drv_check_host_caps(void)
+{
+	bool result;
+	struct vmci_resource_query_msg *msg;
+	uint32_t msgSize = sizeof(struct vmci_resource_query_hdr) +
+		VMCI_UTIL_NUM_RESOURCES * sizeof(uint32_t);
+	struct vmci_dg *checkMsg = kmalloc(msgSize, GFP_KERNEL);
+
+	if (checkMsg == NULL) {
+		pr_warn("Check host: Insufficient memory.");
+		return false;
+	}
+
+	checkMsg->dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_RESOURCES_QUERY);
+	checkMsg->src = VMCI_ANON_SRC_HANDLE;
+	checkMsg->payloadSize = msgSize - VMCI_DG_HEADERSIZE;
+	msg = (struct vmci_resource_query_msg *)VMCI_DG_PAYLOAD(checkMsg);
+
+	msg->numResources = VMCI_UTIL_NUM_RESOURCES;
+	msg->resources[0] = VMCI_GET_CONTEXT_ID;
+
+	/* Checks that hyper calls are supported */
+	result = (0x1 == vmci_send_dg(checkMsg));
+	kfree(checkMsg);
+
+	pr_info("Host capability check: %s.",
+		result ? "PASSED" : "FAILED");
+
+	/* We need the vector. There are no fallbacks. */
+	return result;
+}
+
+/*
+ * Reads datagrams from the data in port and dispatches them. We
+ * always start reading datagrams into only the first page of the
+ * datagram buffer. If the datagrams don't fit into one page, we
+ * use the maximum datagram buffer size for the remainder of the
+ * invocation. This is a simple heuristic for not penalizing
+ * small datagrams.
+ *
+ * This function assumes that it has exclusive access to the data
+ * in port for the duration of the call.
+ */
+static void drv_read_dgs_from_port(int ioHandle,
+				   unsigned short int dgInPort,
+				   uint8_t *dgInBuffer,
+				   size_t dgInBufferSize)
+{
+	struct vmci_dg *dg;
+	size_t currentDgInBufferSize = PAGE_SIZE;
+	size_t remainingBytes;
+
+	ASSERT(dgInBufferSize >= PAGE_SIZE);
+
+	insb(dgInPort, dgInBuffer, currentDgInBufferSize);
+	dg = (struct vmci_dg *)dgInBuffer;
+	remainingBytes = currentDgInBufferSize;
+
+	while (dg->dst.resource != VMCI_INVALID_ID
+	       || remainingBytes > PAGE_SIZE) {
+		unsigned dgInSize;
+
+		/*
+		 * When the input buffer spans multiple pages, a datagram can
+		 * start on any page boundary in the buffer.
+		 */
+		if (dg->dst.resource == VMCI_INVALID_ID) {
+			ASSERT(remainingBytes > PAGE_SIZE);
+			dg = (struct vmci_dg *)roundup((uintptr_t)
+						       dg + 1, PAGE_SIZE);
+			ASSERT((uint8_t *) dg <
+			       dgInBuffer + currentDgInBufferSize);
+			remainingBytes =
+				(size_t) (dgInBuffer + currentDgInBufferSize -
+					  (uint8_t *) dg);
+			continue;
+		}
+
+		dgInSize = VMCI_DG_SIZE_ALIGNED(dg);
+
+		if (dgInSize <= dgInBufferSize) {
+			int result;
+
+			/*
+			 * If the remaining bytes in the datagram
+			 * buffer don't contain the complete
+			 * datagram, we first make sure we have enough
+			 * room for it and then we read the remainder
+			 * of the datagram and possibly any following
+			 * datagrams.
+			 */
+			if (dgInSize > remainingBytes) {
+				if (remainingBytes != currentDgInBufferSize) {
+
+					/*
+					 * We move the partial
+					 * datagram to the front and
+					 * read the remainder of the
+					 * datagram and possibly any
+					 * following datagrams into
+					 * the following bytes.
+					 */
+					memmove(dgInBuffer, dgInBuffer +
+						currentDgInBufferSize -
+						remainingBytes, remainingBytes);
+					dg = (struct vmci_dg *)
+						dgInBuffer;
+				}
+
+				if (currentDgInBufferSize != dgInBufferSize)
+					currentDgInBufferSize = dgInBufferSize;
+
+				insb(dgInPort, dgInBuffer + remainingBytes,
+				     currentDgInBufferSize - remainingBytes);
+			}
+
+			/*
+			 * We special case event datagrams from the
+			 * hypervisor.
+			 */
+			if (dg->src.context == VMCI_HYPERVISOR_CONTEXT_ID
+			    && dg->dst.resource == VMCI_EVENT_HANDLER) {
+				result = vmci_event_dispatch(dg);
+			} else {
+				result = vmci_dg_invoke_guest_handler(dg);
+			}
+			if (result < VMCI_SUCCESS) {
+				pr_devel("Datagram with resource " \
+					 "(ID=0x%x) failed (err=%d).",
+					 dg->dst.resource, result);
+			}
+
+			/* On to the next datagram. */
+			dg = (struct vmci_dg *)((uint8_t *) dg +
+						dgInSize);
+		} else {
+			size_t bytesToSkip;
+
+			/*
+			 * Datagram doesn't fit in datagram buffer of maximal
+			 * size. We drop it.
+			 */
+			pr_devel("Failed to receive datagram (size=%u bytes).",
+				 dgInSize);
+
+			bytesToSkip = dgInSize - remainingBytes;
+			if (currentDgInBufferSize != dgInBufferSize)
+				currentDgInBufferSize = dgInBufferSize;
+
+			for (;;) {
+				insb(dgInPort, dgInBuffer,
+				     currentDgInBufferSize);
+				if (bytesToSkip <= currentDgInBufferSize)
+					break;
+
+				bytesToSkip -= currentDgInBufferSize;
+			}
+			dg = (struct vmci_dg *)(dgInBuffer + bytesToSkip);
+		}
+
+		remainingBytes =
+			(size_t) (dgInBuffer + currentDgInBufferSize -
+				  (uint8_t *) dg);
+
+		if (remainingBytes < VMCI_DG_HEADERSIZE) {
+			/* Get the next batch of datagrams. */
+
+			insb(dgInPort, dgInBuffer, currentDgInBufferSize);
+			dg = (struct vmci_dg *)dgInBuffer;
+			remainingBytes = currentDgInBufferSize;
+		}
+	}
+}
+
+/*
+ * Initializes VMCI components shared between guest and host
+ * driver. This registers core hypercalls.
+ */
+static int __init drv_shared_init(void)
+{
+	int result;
+
+	result = vmci_resource_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIResource (result=%d).",
+			result);
+		goto errorExit;
+	}
+
+	result = vmci_ctx_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIContext (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_dg_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIDatagram (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_event_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIEvent (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_dbell_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIDoorbell (result=%d).",
+			result);
+		goto eventExit;
+	}
+
+	pr_notice("shared components initialized.");
+	return VMCI_SUCCESS;
+
+eventExit:
+	vmci_event_exit();
+resourceExit:
+	vmci_resource_exit();
+errorExit:
+	return result;
+}
+
+/*
+ * Cleans up VMCI components shared between guest and host
+ * driver.
+ */
+static void drv_shared_cleanup(void)
+{
+	vmci_event_exit();
+	vmci_resource_exit();
+}
+
+static const struct file_operations vmuser_fops;
+static struct vmci_linux_state linuxState = {
+	.misc = {
+		.name = MODULE_NAME,
+		.minor = MISC_DYNAMIC_MINOR,
+		.fops = &vmuser_fops,
+	},
+	.activeContexts = ATOMIC_INIT(0),
+};
+
+/*
+ * Called on open of /dev/vmci.
+ */
+static int drv_driver_open(struct inode *inode,
+			   struct file *filp)
+{
+	struct vmci_linux *vmciLinux;
+
+	vmciLinux = kzalloc(sizeof(struct vmci_linux), GFP_KERNEL);
+	if (vmciLinux == NULL)
+		return -ENOMEM;
+
+	vmciLinux->ctType = VMCIOBJ_NOT_SET;
+	mutex_init(&vmciLinux->lock);
+	filp->private_data = vmciLinux;
+
+	return 0;
+}
+
+/*
+ * Called on close of /dev/vmci, most often when the process
+ * exits.
+ */
+static int drv_driver_close(struct inode *inode,
+			    struct file *filp)
+{
+	struct vmci_linux *vmciLinux;
+
+	vmciLinux = (struct vmci_linux *)filp->private_data;
+	ASSERT(vmciLinux);
+
+	if (vmciLinux->ctType == VMCIOBJ_CONTEXT) {
+		ASSERT(vmciLinux->context);
+
+		vmci_ctx_release_ctx(vmciLinux->context);
+		vmciLinux->context = NULL;
+
+		/*
+		 * The number of active contexts is used to track whether any
+		 * VMX'en are using the host personality. It is incremented when
+		 * a context is created through the IOCTL_VMCI_INIT_CONTEXT
+		 * ioctl.
+		 */
+		atomic_dec(&linuxState.activeContexts);
+	}
+	vmciLinux->ctType = VMCIOBJ_NOT_SET;
+
+	kfree(vmciLinux);
+	filp->private_data = NULL;
+	return 0;
+}
+
+/*
+ * This is used to wake up the VMX when a VMCI call arrives, or
+ * to wake up select() or poll() at the next clock tick.
+ */
+static unsigned int drv_driver_poll(struct file *filp, poll_table *wait)
+{
+	struct vmci_linux *vmciLinux = (struct vmci_linux *)filp->private_data;
+	unsigned int mask = 0;
+
+	if (vmciLinux->ctType == VMCIOBJ_CONTEXT) {
+		ASSERT(vmciLinux->context != NULL);
+
+		/* Check for VMCI calls to this VM context. */
+		if (wait != NULL) {
+			poll_wait(filp,
+				  &vmciLinux->context->hostContext.waitQueue,
+				  wait);
+		}
+
+		spin_lock(&vmciLinux->context->lock);
+		if (vmciLinux->context->pendingDatagrams > 0 ||
+		    vmci_handle_arr_get_size(vmciLinux->context->
+					     pendingDoorbellArray) > 0) {
+			mask = POLLIN;
+		}
+		spin_unlock(&vmciLinux->context->lock);
+	}
+	return mask;
+}
+
+static int __init drv_host_init(void)
+{
+	int error;
+	int result;
+
+
+	result = vmci_ctx_init_ctx(VMCI_HOST_CONTEXT_ID,
+				   VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS,
+				   -1, VMCI_VERSION, NULL, &hostContext);
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIContext (result=%d).",
+			result);
+		return -ENOMEM;
+	}
+
+	result = vmci_qp_broker_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize broker (result=%d).",
+			result);
+		vmci_ctx_release_ctx(hostContext);
+		return -ENOMEM;
+	}
+
+	error = misc_register(&linuxState.misc);
+	if (error) {
+		pr_warn("Module registration error " \
+			"(name=%s, major=%d, minor=%d, err=%d).",
+			linuxState.misc.name, MISC_MAJOR, linuxState.misc.minor,
+			error);
+		drv_host_cleanup();
+		return error;
+	}
+
+	pr_notice("Module registered (name=%s, major=%d, minor=%d).", \
+		  linuxState.misc.name, MISC_MAJOR, linuxState.misc.minor);
+
+	return 0;
+}
+
+/*
+ * Copies the handles of a handle array into a user buffer, and
+ * returns the new length in userBufSize. If the copy to the
+ * user buffer fails, the function still returns VMCI_SUCCESS,
+ * but retval != 0.
+ */
+static int drv_cp_harray_to_user(void __user *userBufUVA,
+				 uint64_t *userBufSize,
+				 struct vmci_handle_arr *handleArray,
+				 int *retval)
+{
+	uint32_t arraySize = 0;
+	struct vmci_handle *handles;
+
+	if (handleArray)
+		arraySize = vmci_handle_arr_get_size(handleArray);
+
+	if (arraySize * sizeof *handles > *userBufSize)
+		return VMCI_ERROR_MORE_DATA;
+
+	*userBufSize = arraySize * sizeof *handles;
+	if (*userBufSize)
+		*retval = copy_to_user(userBufUVA,
+				       vmci_handle_arr_get_handles
+				       (handleArray), *userBufSize);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Helper function for creating queue pair and copying the result
+ * to user memory.
+ */
+static int drv_qp_broker_alloc(struct vmci_handle handle,
+			       uint32_t peer,
+			       uint32_t flags,
+			       uint64_t produceSize,
+			       uint64_t consumeSize,
+			       struct vmci_qp_page_store *pageStore,
+			       struct vmci_ctx *context,
+			       bool vmToVm,
+			       void __user *resultUVA)
+{
+	uint32_t cid;
+	int result;
+	int retval;
+
+	cid = vmci_ctx_get_id(context);
+
+	result =
+		vmci_qp_broker_alloc(handle, peer, flags,
+				     VMCI_NO_PRIVILEGE_FLAGS, produceSize,
+				     consumeSize, pageStore, context);
+	if (result == VMCI_SUCCESS && vmToVm)
+		result = VMCI_SUCCESS_QUEUEPAIR_CREATE;
+
+	retval = copy_to_user(resultUVA, &result, sizeof result);
+	if (retval) {
+		retval = -EFAULT;
+		if (result >= VMCI_SUCCESS) {
+			result = vmci_qp_broker_detach(handle, context);
+			ASSERT(result >= VMCI_SUCCESS);
+		}
+	}
+
+	return retval;
+}
+
+/*
+ * Lock physical page backing a given user VA.
+ */
+static struct page *drv_user_va_lock_page(uintptr_t addr)
+{
+	struct page *page = NULL;
+	int retval;
+
+	down_read(&current->mm->mmap_sem);
+	retval = get_user_pages(current, current->mm, addr,
+				1, 1, 0, &page, NULL);
+	up_read(&current->mm->mmap_sem);
+
+	if (retval != 1)
+		return NULL;
+
+	return page;
+}
+
+/*
+ * Lock physical page backing a given user VA and maps it to kernel
+ * address space.  The range of the mapped memory should be within a
+ * single page otherwise an error is returned.
+ */
+static int drv_map_bool_ptr(uintptr_t notifyUVA,
+			    struct page **p,
+			    bool **notifyPtr)
+{
+	if (!access_ok(VERIFY_WRITE, (void __user *)notifyUVA,
+		       sizeof(**notifyPtr)) ||
+	    (((notifyUVA + sizeof(**notifyPtr) - 1) & ~(PAGE_SIZE - 1)) !=
+	     (notifyUVA & ~(PAGE_SIZE - 1)))) {
+		return -EINVAL;
+	}
+
+	*p = drv_user_va_lock_page(notifyUVA);
+	if (*p == NULL)
+		return -EAGAIN;
+
+	*notifyPtr =
+		(bool *) ((uint8_t *) kmap(*p) + (notifyUVA & (PAGE_SIZE - 1)));
+	return 0;
+}
+
+/*
+ * Sets up a given context for notify to work.  Calls drv_map_bool_ptr()
+ * which maps the notify boolean in user VA in kernel space.
+ */
+static int drv_setup_notify(struct vmci_ctx *context,
+			    uintptr_t notifyUVA)
+{
+	int retval;
+
+	if (context->notify) {
+		pr_warn("Notify mechanism is already set up.");
+		return VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	retval = drv_map_bool_ptr(notifyUVA, &context->notifyPage,
+				  &context->notify);
+	if (retval == 0) {
+		vmci_ctx_check_signal_notify(context);
+		return VMCI_SUCCESS;
+	}
+
+	return VMCI_ERROR_GENERIC;
+}
+
+static long drv_driver_unlocked_ioctl(struct file *filp,
+				      u_int iocmd,
+				      unsigned long ioarg)
+{
+	struct vmci_linux *vmciLinux = (struct vmci_linux *)filp->private_data;
+	int retval = 0;
+
+	switch (iocmd) {
+	case IOCTL_VMCI_VERSION2:{
+		int verFromUser;
+
+		if (copy_from_user
+		    (&verFromUser, (void *)ioarg, sizeof verFromUser)) {
+			retval = -EFAULT;
+			break;
+		}
+
+		vmciLinux->userVersion = verFromUser;
+	}
+		/* Fall through. */
+	case IOCTL_VMCI_VERSION:
+		/*
+		 * The basic logic here is:
+		 *
+		 * If the user sends in a version of 0 tell it our version.
+		 * If the user didn't send in a version, tell it our version.
+		 * If the user sent in an old version, tell it -its- version.
+		 * If the user sent in a newer version, tell it our version.
+		 *
+		 * The rationale behind telling the caller its version is that
+		 * Workstation 6.5 required that VMX and VMCI kernel module were
+		 * version sync'd.  All new VMX users will be programmed to
+		 * handle the VMCI kernel module version.
+		 */
+
+		if (vmciLinux->userVersion > 0 &&
+		    vmciLinux->userVersion < VMCI_VERSION_HOSTQP) {
+			retval = vmciLinux->userVersion;
+		} else {
+			retval = VMCI_VERSION;
+		}
+		break;
+
+	case IOCTL_VMCI_INIT_CONTEXT:{
+		struct vmci_init_blk initBlock;
+		uid_t user;
+
+		retval = copy_from_user(&initBlock, (void *)ioarg,
+					sizeof initBlock);
+		if (retval != 0) {
+			pr_info("Error reading init block.");
+			retval = -EFAULT;
+			break;
+		}
+
+		mutex_lock(&vmciLinux->lock);
+		if (vmciLinux->ctType != VMCIOBJ_NOT_SET) {
+			pr_info("Received VMCI init on initialized handle.");
+			retval = -EINVAL;
+			goto init_release;
+		}
+
+		if (initBlock.flags & ~VMCI_PRIVILEGE_FLAG_RESTRICTED) {
+			pr_info("Unsupported VMCI restriction flag.");
+			retval = -EINVAL;
+			goto init_release;
+		}
+
+		user = current_uid();
+		retval = vmci_ctx_init_ctx(initBlock.cid,
+					   initBlock.flags,
+					   0, vmciLinux->userVersion,
+					   &user, &vmciLinux->context);
+		if (retval < VMCI_SUCCESS) {
+			pr_info("Error initializing context.");
+			retval = (retval == VMCI_ERROR_DUPLICATE_ENTRY) ?
+				-EEXIST : -EINVAL;
+			goto init_release;
+		}
+
+		/*
+		 * Copy cid to userlevel, we do this to allow the VMX
+		 * to enforce its policy on cid generation.
+		 */
+		initBlock.cid = vmci_ctx_get_id(vmciLinux->context);
+		retval = copy_to_user((void *)ioarg, &initBlock,
+				      sizeof initBlock);
+		if (retval != 0) {
+			vmci_ctx_release_ctx(vmciLinux->context);
+			vmciLinux->context = NULL;
+			pr_info("Error writing init block.");
+			retval = -EFAULT;
+			goto init_release;
+		}
+
+		ASSERT(initBlock.cid != VMCI_INVALID_ID);
+		vmciLinux->ctType = VMCIOBJ_CONTEXT;
+		atomic_inc(&linuxState.activeContexts);
+
+init_release:
+		mutex_unlock(&vmciLinux->lock);
+		break;
+	}
+
+	case IOCTL_VMCI_DATAGRAM_SEND:{
+		struct vmci_dg_snd_rcv_info sendInfo;
+		struct vmci_dg *dg = NULL;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_warn("Ioctl only valid for context handle (iocmd=%d).",
+				iocmd);
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&sendInfo, (void *)ioarg,
+					sizeof sendInfo);
+		if (retval) {
+			pr_warn("copy_from_user failed.");
+			retval = -EFAULT;
+			break;
+		}
+
+		if (sendInfo.len > VMCI_MAX_DG_SIZE) {
+			pr_warn("Datagram too big (size=%d).",
+				sendInfo.len);
+			retval = -EINVAL;
+			break;
+		}
+
+		if (sendInfo.len < sizeof *dg) {
+			pr_warn("Datagram too small (size=%d).",
+				sendInfo.len);
+			retval = -EINVAL;
+			break;
+		}
+
+		dg = kmalloc(sendInfo.len, GFP_KERNEL);
+		if (dg == NULL) {
+			pr_info("Cannot allocate memory to dispatch datagram.");
+			retval = -ENOMEM;
+			break;
+		}
+
+		retval = copy_from_user(dg,
+					(char *)(uintptr_t) sendInfo.addr,
+					sendInfo.len);
+		if (retval != 0) {
+			pr_info("Error getting datagram (err=%d).",
+				retval);
+			kfree(dg);
+			retval = -EFAULT;
+			break;
+		}
+
+		pr_devel("Datagram dst (handle=0x%x:0x%x) src " \
+			 "(handle=0x%x:0x%x), payload " \
+			 "(size=%llu bytes).",
+			 dg->dst.context, dg->dst.resource,
+			 dg->src.context, dg->src.resource,
+			 (unsigned long long) dg->payloadSize);
+
+		/* Get source context id. */
+		ASSERT(vmciLinux->context);
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		ASSERT(cid != VMCI_INVALID_ID);
+		sendInfo.result = vmci_dg_dispatch(cid, dg, true);
+		kfree(dg);
+		retval =
+			copy_to_user((void *)ioarg, &sendInfo,
+				     sizeof sendInfo);
+		break;
+	}
+
+	case IOCTL_VMCI_DATAGRAM_RECEIVE:{
+		struct vmci_dg_snd_rcv_info recvInfo;
+		struct vmci_dg *dg = NULL;
+		size_t size;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_warn("Ioctl only valid for context handle (iocmd=%d).",
+				iocmd);
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&recvInfo, (void *)ioarg,
+					sizeof recvInfo);
+		if (retval) {
+			pr_warn("copy_from_user failed.");
+			retval = -EFAULT;
+			break;
+		}
+
+		ASSERT(vmciLinux->ctType == VMCIOBJ_CONTEXT);
+		ASSERT(vmciLinux->context);
+		size = recvInfo.len;
+		recvInfo.result =
+			vmci_ctx_dequeue_dg(vmciLinux->context,
+					    &size, &dg);
+
+		if (recvInfo.result >= VMCI_SUCCESS) {
+			ASSERT(dg);
+			retval = copy_to_user((void *)((uintptr_t)
+						       recvInfo.addr),
+					      dg, VMCI_DG_SIZE(dg));
+			kfree(dg);
+			if (retval != 0)
+				break;
+		}
+		retval = copy_to_user((void *)ioarg, &recvInfo,
+				      sizeof recvInfo);
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_ALLOC:{
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_ALLOC only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOVMVM) {
+			struct vmci_qp_alloc_info_vmvm queuePairAllocInfo;
+			struct vmci_qp_alloc_info_vmvm *info =
+				(struct vmci_qp_alloc_info_vmvm *)ioarg;
+
+			retval = copy_from_user(&queuePairAllocInfo,
+						(void *)ioarg,
+						sizeof queuePairAllocInfo);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+
+			retval = drv_qp_broker_alloc(
+				queuePairAllocInfo.handle,
+				queuePairAllocInfo.peer,
+				queuePairAllocInfo.flags,
+				queuePairAllocInfo.produceSize,
+				queuePairAllocInfo.consumeSize,
+				NULL, vmciLinux->context,
+				true, &info->result);
+		} else {
+			struct vmci_qp_alloc_info
+				queuePairAllocInfo;
+			struct vmci_qp_alloc_info *info =
+				(struct vmci_qp_alloc_info *)ioarg;
+			struct vmci_qp_page_store pageStore;
+
+			retval = copy_from_user(&queuePairAllocInfo,
+						(void *)ioarg,
+						sizeof queuePairAllocInfo);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+
+			pageStore.pages = queuePairAllocInfo.ppnVA;
+			pageStore.len = queuePairAllocInfo.numPPNs;
+
+			retval = drv_qp_broker_alloc(
+				queuePairAllocInfo.handle,
+				queuePairAllocInfo.peer,
+				queuePairAllocInfo.flags,
+				queuePairAllocInfo.produceSize,
+				queuePairAllocInfo.consumeSize,
+				&pageStore, vmciLinux->context,
+				false, &info->result);
+		}
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_SETVA:{
+		struct vmci_qp_set_va_info setVAInfo;
+		struct vmci_qp_set_va_info *info =
+			(struct vmci_qp_set_va_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETVA only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOVMVM) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETVA not supported for this VMX version.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&setVAInfo, (void *)ioarg,
+					sizeof setVAInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if (setVAInfo.va) {
+			/*
+			 * VMX is passing down a new VA for the queue
+			 * pair mapping.
+			 */
+			result = vmci_qp_broker_map(setVAInfo.handle,
+						    vmciLinux->context,
+						    setVAInfo.va);
+		} else {
+			/*
+			 * The queue pair is about to be unmapped by
+			 * the VMX.
+			 */
+			result = vmci_qp_broker_unmap(setVAInfo.handle,
+						      vmciLinux->context, 0);
+		}
+
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE:{
+		struct vmci_qp_page_file_info pageFileInfo;
+		struct vmci_qp_page_file_info *info =
+			(struct vmci_qp_page_file_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->userVersion < VMCI_VERSION_HOSTQP ||
+		    vmciLinux->userVersion >= VMCI_VERSION_NOVMVM) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE not " \
+				"supported for this VMX (version=%d).",
+				vmciLinux->userVersion);
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&pageFileInfo, (void *)ioarg,
+					sizeof *info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		/*
+		 * Communicate success pre-emptively to the caller.
+		 * Note that the basic premise is that it is incumbent
+		 * upon the caller not to look at the info.result
+		 * field until after the ioctl() returns.  And then,
+		 * only if the ioctl() result indicates no error.  We
+		 * send up the SUCCESS status before calling
+		 * SetPageStore() because failing to copy up the
+		 * result code means unwinding the SetPageStore().
+		 *
+		 * It turns out the logic to unwind a SetPageStore()
+		 * opens a can of worms.  For example, if a host had
+		 * created the QueuePair and a guest attaches and
+		 * SetPageStore() is successful but writing success
+		 * fails, then ... the host has to be stopped from
+		 * writing (anymore) data into the QueuePair.  That
+		 * means an additional test in the VMCI_Enqueue() code
+		 * path.  Ugh.
+		 */
+
+		result = VMCI_SUCCESS;
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval == 0) {
+			result = vmci_qp_broker_set_page_store(
+				pageFileInfo.handle,
+				pageFileInfo.produceVA,
+				pageFileInfo.consumeVA,
+				vmciLinux->context);
+			if (result < VMCI_SUCCESS) {
+				retval = copy_to_user(&info->result,
+						      &result,
+						      sizeof result);
+				if (retval != 0) {
+					/*
+					 * Note that in this case the
+					 * SetPageStore() call failed
+					 * but we were unable to
+					 * communicate that to the
+					 * caller (because the
+					 * copy_to_user() call
+					 * failed).  So, if we simply
+					 * return an error (in this
+					 * case -EFAULT) then the
+					 * caller will know that the
+					 * SetPageStore failed even
+					 * though we couldn't put the
+					 * result code in the result
+					 * field and indicate exactly
+					 * why it failed.
+					 *
+					 * That says nothing about the
+					 * issue where we were once
+					 * able to write to the
+					 * caller's info memory and
+					 * now can't.  Something more
+					 * serious is probably going
+					 * on than the fact that
+					 * SetPageStore() didn't work.
+					 */
+					retval = -EFAULT;
+				}
+			}
+
+		} else {
+			/*
+			 * In this case, we can't write a result field of the
+			 * caller's info block.  So, we don't even try to
+			 * SetPageStore().
+			 */
+			retval = -EFAULT;
+		}
+
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_DETACH:{
+		struct vmci_qp_dtch_info detachInfo;
+		struct vmci_qp_dtch_info *info =
+			(struct vmci_qp_dtch_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_DETACH only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&detachInfo, (void *)ioarg,
+					sizeof detachInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		result = vmci_qp_broker_detach(detachInfo.handle,
+					       vmciLinux->context);
+		if (result == VMCI_SUCCESS
+		    && vmciLinux->userVersion < VMCI_VERSION_NOVMVM)
+			result = VMCI_SUCCESS_LAST_DETACH;
+
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_ADD_NOTIFICATION:{
+		struct vmci_ctx_info arInfo;
+		struct vmci_ctx_info *info =
+			(struct vmci_ctx_info *)ioarg;
+		int32_t result;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_ADD_NOTIFICATION only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&arInfo, (void *)ioarg,
+					sizeof arInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		result = vmci_ctx_add_notification(cid, arInfo.remoteCID);
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_REMOVE_NOTIFICATION:{
+		struct vmci_ctx_info arInfo;
+		struct vmci_ctx_info *info =
+			(struct vmci_ctx_info *)ioarg;
+		int32_t result;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_REMOVE_NOTIFICATION only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&arInfo, (void *)ioarg,
+					sizeof arInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		result = vmci_ctx_remove_notification(cid,
+						      arInfo.remoteCID);
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_GET_CPT_STATE:{
+		struct vmci_ctx_chkpt_buf_info getInfo;
+		uint32_t cid;
+		char *cptBuf;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_GET_CPT_STATE only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&getInfo, (void *)ioarg,
+					sizeof getInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		getInfo.result =
+			vmci_ctx_get_chkpt_state(cid,
+						 getInfo.cptType,
+						 &getInfo.bufSize,
+						 &cptBuf);
+		if (getInfo.result == VMCI_SUCCESS && getInfo.bufSize) {
+			retval = copy_to_user((void *)(uintptr_t)
+					      getInfo.cptBuf, cptBuf,
+					      getInfo.bufSize);
+			kfree(cptBuf);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+		}
+		retval = copy_to_user((void *)ioarg, &getInfo,
+				      sizeof getInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_SET_CPT_STATE:{
+		struct vmci_ctx_chkpt_buf_info setInfo;
+		uint32_t cid;
+		char *cptBuf;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_SET_CPT_STATE only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&setInfo, (void *)ioarg,
+					sizeof setInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cptBuf = kmalloc(setInfo.bufSize, GFP_KERNEL);
+		if (cptBuf == NULL) {
+			pr_info("Cannot allocate memory to set cpt state (type=%d).",
+				setInfo.cptType);
+			retval = -ENOMEM;
+			break;
+		}
+		retval = copy_from_user(cptBuf,
+					(void *)(uintptr_t) setInfo.cptBuf,
+					setInfo.bufSize);
+		if (retval) {
+			kfree(cptBuf);
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		setInfo.result =
+			vmci_ctx_set_chkpt_state(cid,
+						 setInfo.cptType,
+						 setInfo.bufSize,
+						 cptBuf);
+		kfree(cptBuf);
+		retval = copy_to_user((void *)ioarg, &setInfo,
+				      sizeof setInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_GET_CONTEXT_ID:{
+		uint32_t cid = VMCI_HOST_CONTEXT_ID;
+
+		retval = copy_to_user((void *)ioarg, &cid, sizeof cid);
+		break;
+	}
+
+	case IOCTL_VMCI_SET_NOTIFY:{
+		struct vmci_set_notify_info notifyInfo;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_SET_NOTIFY only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&notifyInfo, (void *)ioarg,
+					sizeof notifyInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if ((uintptr_t) notifyInfo.notifyUVA !=
+		    (uintptr_t) NULL) {
+			notifyInfo.result =
+				drv_setup_notify(vmciLinux->context,
+						 (uintptr_t)
+						 notifyInfo.notifyUVA);
+		} else {
+			spin_lock(&vmciLinux->context->lock);
+			vmci_ctx_unset_notify(vmciLinux->context);
+			spin_unlock(&vmciLinux->context->lock);
+			notifyInfo.result = VMCI_SUCCESS;
+		}
+
+		retval = copy_to_user((void *)ioarg, &notifyInfo,
+				      sizeof notifyInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_NOTIFY_RESOURCE:{
+		struct vmci_dbell_notify_resource_info info;
+		uint32_t cid;
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOTIFY) {
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE is invalid " \
+				"for current VMX versions.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE is only valid " \
+				"for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&info, (void *)ioarg, sizeof info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		switch (info.action) {
+		case VMCI_NOTIFY_RESOURCE_ACTION_NOTIFY:
+			if (info.resource ==
+			    VMCI_NOTIFY_RESOURCE_DOOR_BELL) {
+				uint32_t flags = VMCI_NO_PRIVILEGE_FLAGS;
+				info.result =
+					vmci_ctx_notify_dbell(cid,
+							      info.handle,
+							      flags);
+			} else {
+				info.result = VMCI_ERROR_UNAVAILABLE;
+			}
+			break;
+		case VMCI_NOTIFY_RESOURCE_ACTION_CREATE:
+			info.result =
+				vmci_ctx_dbell_create(cid,
+						      info.handle);
+			break;
+		case VMCI_NOTIFY_RESOURCE_ACTION_DESTROY:
+			info.result =
+				vmci_ctx_dbell_destroy(cid,
+						       info.handle);
+			break;
+		default:
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE got unknown " \
+				"action (action=%d).", info.action);
+			info.result = VMCI_ERROR_INVALID_ARGS;
+		}
+		retval = copy_to_user((void *)ioarg, &info,
+				      sizeof info);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_NOTIFICATIONS_RECEIVE:{
+		struct vmci_ctx_notify_recv_info info;
+		struct vmci_handle_arr *dbHandleArray;
+		struct vmci_handle_arr *qpHandleArray;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_NOTIFICATIONS_RECEIVE is only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOTIFY) {
+			pr_info("IOCTL_VMCI_NOTIFICATIONS_RECEIVE is not " \
+				"supported for the current vmx version.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval =
+			copy_from_user(&info, (void *)ioarg, sizeof info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if ((info.dbHandleBufSize && !info.dbHandleBufUVA)
+		    || (info.qpHandleBufSize && !info.qpHandleBufUVA)) {
+			retval = -EINVAL;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		info.result =
+			vmci_ctx_rcv_notifications_get(cid,
+						       &dbHandleArray,
+						       &qpHandleArray);
+		if (info.result == VMCI_SUCCESS) {
+			info.result = drv_cp_harray_to_user((void *)
+							    (uintptr_t)
+							    info.
+							    dbHandleBufUVA,
+							    &info.
+							    dbHandleBufSize,
+							    dbHandleArray,
+							    &retval);
+			if (info.result == VMCI_SUCCESS && !retval) {
+				info.result =
+					drv_cp_harray_to_user((void *)
+							      (uintptr_t)
+							      info.
+							      qpHandleBufUVA,
+							      &info.
+							      qpHandleBufSize,
+							      qpHandleArray,
+							      &retval);
+			}
+			if (!retval) {
+				retval = copy_to_user((void *)ioarg,
+						      &info, sizeof info);
+			}
+			vmci_ctx_rcv_notifications_release
+				(cid, dbHandleArray, qpHandleArray,
+				 info.result == VMCI_SUCCESS && !retval);
+		} else {
+			retval = copy_to_user((void *)ioarg, &info,
+					      sizeof info);
+		}
+		break;
+	}
+
+	default:
+		pr_warn("Unknown ioctl (iocmd=%d).", iocmd);
+		retval = -EINVAL;
+	}
+
+	return retval;
+}
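
To illustrate the host personality from user level: a VMX-like process
might talk to /dev/vmci roughly as in the sketch below.  This is
illustrative only; it assumes the IOCTL_VMCI_* numbers and types are
available to userspace from the exported VMCI headers, and it omits
error handling.

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/ioctl.h>

	int main(void)
	{
		int fd = open("/dev/vmci", O_RDWR);
		int my_version = 0;	/* 0 means "tell me your version",
					 * per the handler's comment above */
		int use_version;
		uint32_t cid;

		/* Send our version down; the ioctl return value is the
		 * version the driver wants this process to use. */
		use_version = ioctl(fd, IOCTL_VMCI_VERSION2, &my_version);

		/* Before IOCTL_VMCI_INIT_CONTEXT this reports the host's
		 * context id. */
		ioctl(fd, IOCTL_VMCI_GET_CONTEXT_ID, &cid);

		printf("driver version %d, context id 0x%x\n",
		       use_version, cid);
		return 0;
	}
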
+
+/*
+ * Reads and dispatches incoming datagrams.
+ */
+static void drv_dispatch_dgs(unsigned long data)
+{
+	struct vmci_device *dev = (struct vmci_device *)data;
+
+	if (dev == NULL) {
+		pr_devel("No virtual device present in %s.", __func__);
+		return;
+	}
+
+	if (data_buffer == NULL) {
+		pr_devel("No buffer present in %s.", __func__);
+		return;
+	}
+
+	drv_read_dgs_from_port((int)0,
+			       dev->ioaddr + VMCI_DATA_IN_ADDR,
+			       data_buffer, data_buffer_size);
+}
+DECLARE_TASKLET(vmci_dg_tasklet, drv_dispatch_dgs, (unsigned long)&vmci_dev);
+
+/*
+ * Scans the notification bitmap for raised flags, clears them
+ * and handles the notifications.
+ */
+static void drv_process_bitmap(unsigned long data)
+{
+	struct vmci_device *dev = (struct vmci_device *)data;
+
+	if (dev == NULL) {
+		pr_devel("No virtual device present in %s.", __func__);
+		return;
+	}
+
+	if (notification_bitmap == NULL) {
+		pr_devel("No bitmap present in %s.", __func__);
+		return;
+	}
+
+	vmci_dbell_scan_notification_entries(notification_bitmap);
+}
+DECLARE_TASKLET(vmci_bm_tasklet, drv_process_bitmap, (unsigned long)&vmci_dev);
+
+/*
+ * Enable MSI-X.  Try exclusive vectors first, then shared vectors.
+ */
+static int drv_enable_msix(struct pci_dev *pdev)
+{
+	int i;
+	int result;
+
+	for (i = 0; i < VMCI_MAX_INTRS; ++i) {
+		vmci_dev.msix_entries[i].entry = i;
+		vmci_dev.msix_entries[i].vector = i;
+	}
+
+	result = pci_enable_msix(pdev, vmci_dev.msix_entries, VMCI_MAX_INTRS);
+	if (result == 0)
+		vmci_dev.exclusive_vectors = true;
+	else if (result > 0)
+		result = pci_enable_msix(pdev, vmci_dev.msix_entries, 1);
+
+	return result;
+}
+
+/*
+ * Interrupt handler for legacy or MSI interrupt, or for first MSI-X
+ * interrupt (vector VMCI_INTR_DATAGRAM).
+ */
+static irqreturn_t drv_interrupt(int irq,
+				 void *clientdata)
+{
+	struct vmci_device *dev = clientdata;
+
+	if (dev == NULL) {
+		pr_devel("Irq %d for unknown device in %s.", irq, __func__);
+		return IRQ_NONE;
+	}
+
+	/*
+	 * If we are using MSI-X with exclusive vectors then we simply schedule
+	 * the datagram tasklet, since we know the interrupt was meant for us.
+	 * Otherwise we must read the ICR to determine what to do.
+	 */
+
+	if (dev->intr_type == VMCI_INTR_TYPE_MSIX && dev->exclusive_vectors) {
+		tasklet_schedule(&vmci_dg_tasklet);
+	} else {
+		unsigned int icr;
+
+		ASSERT(dev->intr_type == VMCI_INTR_TYPE_INTX ||
+		       dev->intr_type == VMCI_INTR_TYPE_MSI);
+
+		/* Acknowledge interrupt and determine what needs doing. */
+		icr = inl(dev->ioaddr + VMCI_ICR_ADDR);
+		if (icr == 0 || icr == ~0)
+			return IRQ_NONE;
+
+		if (icr & VMCI_ICR_DATAGRAM) {
+			tasklet_schedule(&vmci_dg_tasklet);
+			icr &= ~VMCI_ICR_DATAGRAM;
+		}
+
+		if (icr & VMCI_ICR_NOTIFICATION) {
+			tasklet_schedule(&vmci_bm_tasklet);
+			icr &= ~VMCI_ICR_NOTIFICATION;
+		}
+
+		if (icr != 0)
+			pr_info("Ignoring unknown interrupt cause (%d).", icr);
+	}
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Interrupt handler for MSI-X interrupt vector VMCI_INTR_NOTIFICATION,
+ * which is for the notification bitmap.  Will only get called if we are
+ * using MSI-X with exclusive vectors.
+ */
+static irqreturn_t drv_interrupt_bm(int irq,
+				    void *clientdata)
+{
+	struct vmci_device *dev = clientdata;
+
+	if (dev == NULL) {
+		pr_devel("Irq %d for unknown device in %s.", irq, __func__);
+		return IRQ_NONE;
+	}
+
+	/* For MSI-X we can just assume it was meant for us. */
+	ASSERT(dev->intr_type == VMCI_INTR_TYPE_MSIX && dev->exclusive_vectors);
+	tasklet_schedule(&vmci_bm_tasklet);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Most of the initialization at module load time is done here.
+ */
+static int __devinit drv_probe_device(struct pci_dev *pdev,
+				      const struct pci_device_id *id)
+{
+	unsigned int ioaddr;
+	unsigned int ioaddr_size;
+	unsigned int capabilities;
+	int result;
+
+	pr_info("Probing for vmci/PCI.");
+
+	result = pci_enable_device(pdev);
+	if (result) {
+		pr_err("Cannot enable VMCI device %s: error %d",
+		       pci_name(pdev), result);
+		return result;
+	}
+	pci_set_master(pdev);	/* To enable QueuePair functionality. */
+	ioaddr = pci_resource_start(pdev, 0);
+	ioaddr_size = pci_resource_len(pdev, 0);
+
+	/*
+	 * Request the I/O region.  We keep the base address and size
+	 * around so that the region can be released again if the rest
+	 * of the probe fails.
+	 */
+	if (!request_region(ioaddr, ioaddr_size, MODULE_NAME)) {
+		pr_info(MODULE_NAME ": Another driver already loaded " \
+			"for device in slot %s.", pci_name(pdev));
+		goto pci_disable;
+	}
+
+	pr_info("Found VMCI PCI device at %#x, irq %u.", ioaddr, pdev->irq);
+
+	/*
+	 * Verify that the VMCI Device supports the capabilities that
+	 * we need. If the device is missing capabilities that we would
+	 * like to use, check for fallback capabilities and use those
+	 * instead (so we can run a new VM on old hosts). Fail the load if
+	 * a required capability is missing and there is no fallback.
+	 *
+	 * Right now, we need datagrams. There are no fallbacks.
+	 */
+	capabilities = inl(ioaddr + VMCI_CAPS_ADDR);
+
+	if ((capabilities & VMCI_CAPS_DATAGRAM) == 0) {
+		pr_err("Device does not support datagrams.");
+		goto release;
+	}
+
+	/*
+	 * If the hardware supports notifications, we will use that as
+	 * well.
+	 */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		capabilities = VMCI_CAPS_DATAGRAM;
+		notification_bitmap = vmalloc(PAGE_SIZE);
+		if (notification_bitmap == NULL) {
+			pr_err("Device unable to allocate notification " \
+			       "bitmap.");
+		} else {
+			memset(notification_bitmap, 0, PAGE_SIZE);
+			capabilities |= VMCI_CAPS_NOTIFICATIONS;
+		}
+	} else {
+		capabilities = VMCI_CAPS_DATAGRAM;
+	}
+	pr_info("Using capabilities 0x%x.", capabilities);
+
+	/* Let the host know which capabilities we intend to use. */
+	outl(capabilities, ioaddr + VMCI_CAPS_ADDR);
+
+	/* Device struct initialization. */
+	mutex_lock(&vmci_dev.lock);
+	if (vmci_dev.enabled) {
+		pr_err("Device already enabled.");
+		goto unlock;
+	}
+
+	vmci_dev.ioaddr = ioaddr;
+	vmci_dev.ioaddr_size = ioaddr_size;
+	atomic_set(&vmci_dev.datagrams_allowed, 1);
+
+	/*
+	 * Register notification bitmap with device if that capability is
+	 * used
+	 */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		unsigned long bitmapPPN;
+		bitmapPPN = page_to_pfn(vmalloc_to_page(notification_bitmap));
+		if (!vmci_dbell_register_notification_bitmap(bitmapPPN)) {
+			pr_err("VMCI device unable to register notification " \
+			       "bitmap with PPN 0x%x.", (uint32_t) bitmapPPN);
+			goto datagram_disallow;
+		}
+	}
+
+	/* Check host capabilities. */
+	if (!drv_check_host_caps())
+		goto remove_bitmap;
+
+	/* Enable device. */
+	vmci_dev.enabled = true;
+	pci_set_drvdata(pdev, &vmci_dev);
+
+	/*
+	 * We do global initialization here because we need datagrams
+	 * during drv_util_init, since it registers for VMCI
+	 * events. If we ever support more than one VMCI device we
+	 * will have to create separate LateInit/EarlyExit functions
+	 * that can be used to do initialization/cleanup that depends
+	 * on the device being accessible.  We need to initialize VMCI
+	 * components before requesting an irq - the VMCI interrupt
+	 * handler uses these components, and it may be invoked once
+	 * request_irq() has registered the handler (as the irq line
+	 * may be shared).
+	 */
+	drv_util_init();
+
+	if (vmci_qp_guest_endpoints_init() < VMCI_SUCCESS)
+		goto util_exit;
+
+	/*
+	 * Enable interrupts.  Try MSI-X first, then MSI, and then fall back on
+	 * legacy interrupts.
+	 */
+	if (!vmci_disable_msix && !drv_enable_msix(pdev)) {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_MSIX;
+		vmci_dev.irq = vmci_dev.msix_entries[0].vector;
+	} else if (!vmci_disable_msi && !pci_enable_msi(pdev)) {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_MSI;
+		vmci_dev.irq = pdev->irq;
+	} else {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_INTX;
+		vmci_dev.irq = pdev->irq;
+	}
+
+	/*
+	 * Request IRQ for legacy or MSI interrupts, or for first
+	 * MSI-X vector.
+	 */
+	result = request_irq(vmci_dev.irq, drv_interrupt, IRQF_SHARED,
+			     MODULE_NAME, &vmci_dev);
+	if (result) {
+		pr_err("Irq %u in use: %d", vmci_dev.irq, result);
+		goto components_exit;
+	}
+
+	/*
+	 * For MSI-X with exclusive vectors we need to request an
+	 * interrupt for each vector so that we get a separate
+	 * interrupt handler routine.  This allows us to distinguish
+	 * between the vectors.
+	 */
+	if (vmci_dev.exclusive_vectors) {
+		ASSERT(vmci_dev.intr_type == VMCI_INTR_TYPE_MSIX);
+		result = request_irq(vmci_dev.msix_entries[1].vector,
+				     drv_interrupt_bm, 0, MODULE_NAME,
+				     &vmci_dev);
+		if (result) {
+			pr_err("Irq %u in use: %d",
+			       vmci_dev.msix_entries[1].vector, result);
+			free_irq(vmci_dev.irq, &vmci_dev);
+			goto components_exit;
+		}
+	}
+
+	pr_info("Registered device.");
+	atomic_inc(&guestDeviceActive);
+	mutex_unlock(&vmci_dev.lock);
+
+	/* Enable specific interrupt bits. */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		outl(VMCI_IMR_DATAGRAM | VMCI_IMR_NOTIFICATION,
+		     vmci_dev.ioaddr + VMCI_IMR_ADDR);
+	} else {
+		outl(VMCI_IMR_DATAGRAM, vmci_dev.ioaddr + VMCI_IMR_ADDR);
+	}
+
+	/* Enable interrupts. */
+	outl(VMCI_CONTROL_INT_ENABLE, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+	return 0;
+
+components_exit:
+	vmci_qp_guest_endpoints_exit();
+util_exit:
+	vmci_util_exit();
+	vmci_dev.enabled = false;
+	if (vmci_dev.intr_type == VMCI_INTR_TYPE_MSIX)
+		pci_disable_msix(pdev);
+	else if (vmci_dev.intr_type == VMCI_INTR_TYPE_MSI)
+		pci_disable_msi(pdev);
+
+remove_bitmap:
+	if (notification_bitmap)
+		outl(VMCI_CONTROL_RESET, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+datagram_disallow:
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+unlock:
+	mutex_unlock(&vmci_dev.lock);
+release:
+	if (notification_bitmap) {
+		vfree(notification_bitmap);
+		notification_bitmap = NULL;
+	}
+	release_region(ioaddr, ioaddr_size);
+pci_disable:
+	pci_disable_device(pdev);
+	return -EBUSY;
+}
+
+static void __devexit drv_remove_device(struct pci_dev *pdev)
+{
+	struct vmci_device *dev = pci_get_drvdata(pdev);
+
+	pr_info("Removing device");
+	atomic_dec(&guestDeviceActive);
+	vmci_qp_guest_endpoints_exit();
+	vmci_util_exit();
+	mutex_lock(&dev->lock);
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+	pr_info("Resetting vmci device");
+	outl(VMCI_CONTROL_RESET, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+	/*
+	 * Free IRQ and then disable MSI/MSI-X as appropriate.  For
+	 * MSI-X, we might have multiple vectors, each with their own
+	 * IRQ, which we must free too.
+	 */
+	free_irq(dev->irq, dev);
+	if (dev->intr_type == VMCI_INTR_TYPE_MSIX) {
+		if (dev->exclusive_vectors)
+			free_irq(dev->msix_entries[1].vector, dev);
+
+		pci_disable_msix(pdev);
+	} else if (dev->intr_type == VMCI_INTR_TYPE_MSI) {
+		pci_disable_msi(pdev);
+	}
+	dev->exclusive_vectors = false;
+	dev->intr_type = VMCI_INTR_TYPE_INTX;
+
+	release_region(dev->ioaddr, dev->ioaddr_size);
+	dev->enabled = false;
+	if (notification_bitmap) {
+		/*
+		 * The device reset above cleared the bitmap state of the
+		 * device, so we can safely free it here.
+		 */
+
+		vfree(notification_bitmap);
+		notification_bitmap = NULL;
+	}
+
+	pr_info("Unregistered device.");
+	mutex_unlock(&dev->lock);
+
+	pci_disable_device(pdev);
+}
+
+static struct pci_driver vmci_driver = {
+	.name = MODULE_NAME,
+	.id_table = vmci_ids,
+	.probe = drv_probe_device,
+	.remove = __devexit_p(drv_remove_device),
+};
+
+/*
+ * Initializes the VMCI PCI device. The initialization might fail
+ * if there is no VMCI PCI device.
+ */
+static int __init dev_guest_init(void)
+{
+	int retval;
+
+	/* Initialize guest device data. */
+	mutex_init(&vmci_dev.lock);
+	vmci_dev.intr_type = VMCI_INTR_TYPE_INTX;
+	vmci_dev.exclusive_vectors = false;
+	spin_lock_init(&vmci_dev.dev_spinlock);
+	vmci_dev.enabled = false;
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+	atomic_set(&guestDeviceActive, 0);
+
+	data_buffer = vmalloc(data_buffer_size);
+	if (!data_buffer)
+		return -ENOMEM;
+
+	/* This should be last to make sure we are done initializing. */
+	retval = pci_register_driver(&vmci_driver);
+	if (retval < 0) {
+		vfree(data_buffer);
+		data_buffer = NULL;
+		return retval;
+	}
+
+	return 0;
+}
+
+static const struct file_operations vmuser_fops = {
+	.owner = THIS_MODULE,
+	.open = drv_driver_open,
+	.release = drv_driver_close,
+	.poll = drv_driver_poll,
+	.unlocked_ioctl = drv_driver_unlocked_ioctl,
+	.compat_ioctl = drv_driver_unlocked_ioctl,
+};
+
+/*
+ * VM to hypervisor call mechanism. We use the standard VMware naming
+ * convention since shared code is calling this function as well.
+ */
+int vmci_send_dg(struct vmci_dg *dg)
+{
+	unsigned long flags;
+	int result;
+
+	/* Check args. */
+	if (dg == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (atomic_read(&vmci_dev.datagrams_allowed) == 0)
+		return VMCI_ERROR_UNAVAILABLE;
+
+	/*
+	 * Need to acquire spinlock on the device because the datagram
+	 * data may be spread over multiple pages and the monitor may
+	 * interleave device user rpc calls from multiple
+	 * VCPUs. Acquiring the spinlock precludes that
+	 * possibility. Interrupts are disabled to avoid handling
+	 * incoming datagrams during the "rep out" and possibly ending
+	 * up back in this function.
+	 */
+	spin_lock_irqsave(&vmci_dev.dev_spinlock, flags);
+
+	__asm__ __volatile__("cld\n\t" \
+			     "rep outsb\n\t"
+			     : /* No output. */
+			     : "d"(vmci_dev.ioaddr + VMCI_DATA_OUT_ADDR),
+			       "c"(VMCI_DG_SIZE(dg)), "S"(dg)
+		);
+
+	result = inl(vmci_dev.ioaddr + VMCI_RESULT_LOW_ADDR);
+	spin_unlock_irqrestore(&vmci_dev.dev_spinlock, flags);
+
+	return result;
+}
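
As an illustration only (not part of the patch): a minimal sketch of the
usual guest-side calling pattern for vmci_send_dg(), modeled on the
drv_check_host_caps() helper elsewhere in this file.  The function
example_send_to_hypervisor() and its resource/payload parameters are
hypothetical; the handles, constants and the VMCI_DG_PAYLOAD() macro are
the ones this driver already uses.

	/* Illustrative sketch only; not part of this patch. */
	static int example_send_to_hypervisor(uint32_t resource,
					      const void *payload,
					      size_t payloadSize)
	{
		size_t msgSize = VMCI_DG_HEADERSIZE + payloadSize;
		struct vmci_dg *dg = kmalloc(msgSize, GFP_KERNEL);
		int result;

		if (!dg)
			return VMCI_ERROR_NO_MEM;

		/* Address the hypervisor context; an anonymous source is fine. */
		dg->dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID, resource);
		dg->src = VMCI_ANON_SRC_HANDLE;
		dg->payloadSize = payloadSize;
		memcpy(VMCI_DG_PAYLOAD(dg), payload, payloadSize);

		/* The value of the device's result register is returned. */
		result = vmci_send_dg(dg);

		kfree(dg);
		return result;
	}
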
+
+bool vmci_guest_code_active(void)
+{
+	return guestDeviceInit && atomic_read(&guestDeviceActive) > 0;
+}
+
+/*
+ * Determines whether the VMCI host personality is
+ * available. Since the core functionality of the host driver is
+ * always present, all guests could possibly use the host
+ * personality. However, to minimize the deviation from the
+ * pre-unified driver state of affairs, we only consider the host
+ * device active if there is no active guest device or if there
+ * are VMX'en with active VMCI contexts using the host device.
+ */
+bool vmci_host_code_active(void)
+{
+	return hostDeviceInit &&
+		(!vmci_guest_code_active() ||
+		 atomic_read(&linuxState.activeContexts) > 0);
+}
+
+static int __init drv_init(void)
+{
+	int retval;
+
+	retval = drv_shared_init();
+	if (retval != VMCI_SUCCESS) {
+		pr_warn("Failed to initialize common " \
+			"components (err=%d).", retval);
+		return -ENOMEM;
+	}
+
+	if (!vmci_disable_guest) {
+		retval = dev_guest_init();
+		if (retval != 0) {
+			pr_warn("Failed to initialize guest " \
+				"personality (err=%d).", retval);
+		} else {
+			const char *state = vmci_guest_code_active() ?
+				"active" : "inactive";
+			guestDeviceInit = true;
+			pr_info("Guest personality initialized and is %s",
+				state);
+		}
+	}
+
+	if (!vmci_disable_host) {
+		retval = drv_host_init();
+		if (retval != 0) {
+			pr_warn("Unable to initialize host " \
+				"personality (err=%d).", retval);
+		} else {
+			hostDeviceInit = true;
+			pr_info("Initialized host personality");
+		}
+	}
+
+	if (!guestDeviceInit && !hostDeviceInit) {
+		drv_shared_cleanup();
+		return -ENODEV;
+	}
+
+	pr_info("Module is initialized");
+	return 0;
+}
+
+static void __exit drv_exit(void)
+{
+	if (guestDeviceInit) {
+		pci_unregister_driver(&vmci_driver);
+		vfree(data_buffer);
+		guestDeviceInit = false;
+	}
+
+	if (hostDeviceInit) {
+		drv_host_cleanup();
+
+		if (misc_deregister(&linuxState.misc))
+			pr_warn("Error unregistering");
+		else
+			pr_info("Module unloaded");
+
+		hostDeviceInit = false;
+	}
+
+	drv_shared_cleanup();
+}
+
+/**
+ * VMCI_DeviceGet() - Checks for VMCI device.
+ * @apiVersion:	The API version to use
+ * @deviceShutdownCB:	Callback used when shutdown happens (Unused)
+ * @userData:	Data to be passed to the callback (Unused)
+ * @deviceRegistration:	A device registration handle. (Unused)
+ *
+ * Verifies that a valid VMCI device is present, and indicates
+ * the caller's intention to use the device until it calls
+ * VMCI_DeviceRelease().
+ */
+bool VMCI_DeviceGet(uint32_t *apiVersion,
+		    VMCI_DeviceShutdownFn *deviceShutdownCB,
+		    void *userData,
+		    void **deviceRegistration)
+{
+	if (*apiVersion > VMCI_KERNEL_API_VERSION) {
+		*apiVersion = VMCI_KERNEL_API_VERSION;
+		return false;
+	}
+
+	return drv_device_enabled();
+}
+EXPORT_SYMBOL(VMCI_DeviceGet);
+
+/**
+ * VMCI_DeviceRelease() - Releases the device (Unused)
+ * @deviceRegistration:	The device registration handle.
+ *
+ * Indicates that the caller is done using the VMCI device.  This
+ * function is a noop on Linux systems.
+ */
+void VMCI_DeviceRelease(void *deviceRegistration)
+{
+}
+EXPORT_SYMBOL(VMCI_DeviceRelease);
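
A minimal sketch (illustration only, not part of the patch) of how a
client kernel module would typically use this pair from its init path;
example_client_init() and the -ENODEV policy are assumptions, only
VMCI_DeviceGet(), VMCI_DeviceRelease() and VMCI_KERNEL_API_VERSION come
from this driver.

	/* Illustrative sketch only; not part of this patch. */
	static int example_client_init(void)
	{
		uint32_t apiVersion = VMCI_KERNEL_API_VERSION;

		/*
		 * Returns false if no usable device is present, or if the
		 * driver only supports an older API; in the latter case
		 * apiVersion is updated to the driver's version.
		 */
		if (!VMCI_DeviceGet(&apiVersion, NULL, NULL, NULL))
			return -ENODEV;

		/* ... use datagrams, doorbells and queue pairs ... */

		/* Typically done from the client's exit path. */
		VMCI_DeviceRelease(NULL);
		return 0;
	}
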
+
+/**
+ * VMCI_GetContextID() - Gets the current context ID.
+ *
+ * Returns the context ID of the current endpoint: the guest context ID
+ * when the guest personality is active, VMCI_HOST_CONTEXT_ID when only
+ * the host personality is active, and VMCI_INVALID_ID otherwise.
+ */
+uint32_t VMCI_GetContextID(void)
+{
+	if (vmci_guest_code_active()) {
+		if (atomic_read(&vmContextID) == VMCI_INVALID_ID) {
+			uint32_t result;
+			struct vmci_dg getCidMsg;
+			getCidMsg.dst =
+				vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+						 VMCI_GET_CONTEXT_ID);
+			getCidMsg.src = VMCI_ANON_SRC_HANDLE;
+			getCidMsg.payloadSize = 0;
+			result = vmci_send_dg(&getCidMsg);
+			atomic_set(&vmContextID, result);
+		}
+		return atomic_read(&vmContextID);
+	} else if (vmci_host_code_active()) {
+		return VMCI_HOST_CONTEXT_ID;
+	}
+	return VMCI_INVALID_ID;
+}
+EXPORT_SYMBOL(VMCI_GetContextID);
+
+/**
+ * VMCI_Version() - Returns the version of the driver.
+ *
+ * Returns the version of the VMCI driver.
+ */
+uint32_t VMCI_Version(void)
+{
+	return VMCI_VERSION;
+}
+EXPORT_SYMBOL(VMCI_Version);
+
+module_init(drv_init);
+module_exit(drv_exit);
+MODULE_DEVICE_TABLE(pci, vmci_ids);
+
+MODULE_AUTHOR("VMware, Inc.");
+MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+MODULE_VERSION(VMCI_DRIVER_VERSION_STRING);
+MODULE_LICENSE("GPL v2");
+
+module_param_named(disable_host, vmci_disable_host, bool, 0);
+MODULE_PARM_DESC(disable_host, "Disable driver host personality - (default=0)");
+
+module_param_named(disable_guest, vmci_disable_guest, bool, 0);
+MODULE_PARM_DESC(disable_guest,
+		 "Disable driver guest personality - (default=0)");
+
+module_param_named(disable_msi, vmci_disable_msi, bool, 0);
+MODULE_PARM_DESC(disable_msi, "Disable MSI use in driver - (default=0)");
+
+module_param_named(disable_msix, vmci_disable_msix, bool, 0);
+MODULE_PARM_DESC(disable_msix, "Disable MSI-X use in driver - (default=0)");
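
As a usage illustration (not part of the patch): loading the module with
disable_msix=1, and optionally disable_msi=1 as well, makes
drv_probe_device() fall back from MSI-X to MSI or to legacy INTx
interrupts, which can be handy when diagnosing interrupt delivery;
disable_guest=1 and disable_host=1 similarly switch off one of the two
personalities at load time.
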
diff --git a/drivers/misc/vmw_vmci/vmci_driver.h b/drivers/misc/vmw_vmci/vmci_driver.h
new file mode 100644
index 0000000..1c306c4
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_driver.h
@@ -0,0 +1,52 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_DRIVER_H_
+#define _VMCI_DRIVER_H_
+
+#include <linux/vmw_vmci_defs.h>
+#include <linux/wait.h>
+
+#include "vmci_context.h"
+#include "vmci_queue_pair.h"
+
+enum vmci_obj_type {
+	VMCIOBJ_VMX_VM = 10,
+	VMCIOBJ_CONTEXT,
+	VMCIOBJ_SOCKET,
+	VMCIOBJ_NOT_SET,
+};
+
+/* For storing VMCI structures in file handles. */
+struct vmci_obj {
+	void *ptr;
+	enum vmci_obj_type type;
+};
+
+typedef void (VMCIWorkFn) (void *data);
+bool vmci_host_code_active(void);
+bool vmci_guest_code_active(void);
+bool vmci_drv_wait_on_event_intr(wait_queue_head_t *event,
+				 VMCIEventReleaseCB releaseCB,
+				 void *clientData);
+int vmci_drv_schedule_delayed_work(VMCIWorkFn *workFn, void *data);
+uint32_t VMCI_GetContextID(void);
+int vmci_send_dg(struct vmci_dg *dg);
+
+#endif /* _VMCI_DRIVER_H_ */
-- 
1.7.0.4



* [vmw_vmci 04/11] Apply VMCI driver code
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, Andrew Stiegmann (stieg), cschamp, gregkh

This code implements both the host and guest personalities of the
VMCI driver.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_driver.c | 2298 +++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_driver.h |   52 +
 2 files changed, 2350 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_driver.h

diff --git a/drivers/misc/vmw_vmci/vmci_driver.c b/drivers/misc/vmw_vmci/vmci_driver.c
new file mode 100644
index 0000000..abd9384
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_driver.c
@@ -0,0 +1,2298 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/atomic.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/moduleparam.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/poll.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/version.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_handle_array.h"
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_doorbell.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_hash_table.h"
+#include "vmci_queue_pair.h"
+#include "vmci_resource.h"
+
+#define VMCI_UTIL_NUM_RESOURCES 1
+
+enum {
+	VMCI_NOTIFY_RESOURCE_QUEUE_PAIR = 0,
+	VMCI_NOTIFY_RESOURCE_DOOR_BELL = 1,
+};
+
+enum {
+	VMCI_NOTIFY_RESOURCE_ACTION_NOTIFY = 0,
+	VMCI_NOTIFY_RESOURCE_ACTION_CREATE = 1,
+	VMCI_NOTIFY_RESOURCE_ACTION_DESTROY = 2,
+};
+
+static uint32_t ctxUpdateSubID = VMCI_INVALID_ID;
+static struct vmci_ctx *hostContext;
+static atomic_t vmContextID = { VMCI_INVALID_ID };
+
+struct vmci_delayed_work_info {
+	struct work_struct work;
+	VMCIWorkFn *workFn;
+	void *data;
+};
+
+/*
+ * VMCI driver initialization. This block can also be used to
+ * pass initial group membership etc.
+ */
+struct vmci_init_blk {
+	uint32_t cid;
+	uint32_t flags;
+};
+
+/* VMCIQueuePairAllocInfo_VMToVM */
+struct vmci_qp_alloc_info_vmvm {
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t producePageFile;	/* User VA. */
+	uint64_t consumePageFile;	/* User VA. */
+	uint64_t producePageFileSize;	/* Size of the file name array. */
+	uint64_t consumePageFileSize;	/* Size of the file name array. */
+	int32_t result;
+	uint32_t _pad;
+};
+
+/* VMCISetNotifyInfo: Used to pass notify flag's address to the host driver. */
+struct vmci_set_notify_info {
+	uint64_t notifyUVA;
+	int32_t result;
+	uint32_t _pad;
+};
+
+struct vmci_device {
+	struct mutex lock;
+
+	unsigned int ioaddr;
+	unsigned int ioaddr_size;
+	unsigned int irq;
+	unsigned int intr_type;
+	bool exclusive_vectors;
+	struct msix_entry msix_entries[VMCI_MAX_INTRS];
+
+	bool enabled;
+	spinlock_t dev_spinlock;
+	atomic_t datagrams_allowed;
+};
+
+static DEFINE_PCI_DEVICE_TABLE(vmci_ids) = {
+	{PCI_DEVICE(PCI_VENDOR_ID_VMWARE, PCI_DEVICE_ID_VMWARE_VMCI),},
+	{0},
+};
+
+static struct vmci_device vmci_dev;
+
+/* These options are false (0) by default */
+static bool vmci_disable_host;
+static bool vmci_disable_guest;
+static bool vmci_disable_msi;
+static bool vmci_disable_msix;
+
+/*
+ * Allocate a buffer for incoming datagrams globally to avoid repeated
+ * allocation in the interrupt handler's atomic context.
+ */
+static uint8_t *data_buffer;
+static uint32_t data_buffer_size = VMCI_MAX_DG_SIZE;
+
+/*
+ * If the VMCI hardware supports the notification bitmap, we allocate
+ * and register a page with the device.
+ */
+static uint8_t *notification_bitmap;
+
+/*
+ * Per-instance host state
+ */
+struct vmci_linux {
+	struct vmci_ctx *context;
+	int userVersion;
+	enum vmci_obj_type ctType;
+	struct mutex lock;
+};
+
+/*
+ * Static driver state.
+ */
+struct vmci_linux_state {
+	struct miscdevice misc;
+	char buf[1024];
+	atomic_t activeContexts;
+};
+
+/*
+ * Types and variables shared by both host and guest personality
+ */
+static bool guestDeviceInit;
+static atomic_t guestDeviceActive;
+static bool hostDeviceInit;
+
+static void drv_delayed_work_cb(struct work_struct *work)
+{
+	struct vmci_delayed_work_info *delayedWorkInfo;
+
+	delayedWorkInfo = container_of(work, struct vmci_delayed_work_info,
+				       work);
+	ASSERT(delayedWorkInfo);
+	ASSERT(delayedWorkInfo->workFn);
+
+	delayedWorkInfo->workFn(delayedWorkInfo->data);
+
+	kfree(delayedWorkInfo);
+}
+
+/*
+ * Schedule the specified callback.
+ */
+int vmci_drv_schedule_delayed_work(VMCIWorkFn *workFn,
+				   void *data)
+{
+	struct vmci_delayed_work_info *delayedWorkInfo;
+
+	ASSERT(workFn);
+
+	delayedWorkInfo = kmalloc(sizeof *delayedWorkInfo, GFP_ATOMIC);
+	if (!delayedWorkInfo)
+		return VMCI_ERROR_NO_MEM;
+
+	delayedWorkInfo->workFn = workFn;
+	delayedWorkInfo->data = data;
+
+	INIT_WORK(&delayedWorkInfo->work, drv_delayed_work_cb);
+
+	schedule_work(&delayedWorkInfo->work);
+
+	return VMCI_SUCCESS;
+}
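
A minimal sketch (illustration only, not part of the patch) of how a
VMCI component defers work with this helper when it cannot block in its
current context; example_deferred() and ctx are hypothetical, only the
VMCIWorkFn typedef and vmci_drv_schedule_delayed_work() come from this
driver.

	/* Illustrative sketch only; runs later from the system workqueue. */
	static void example_deferred(void *data)
	{
		/* ... work that must not run in the caller's atomic context ... */
	}

	/* In a caller that cannot block, e.g. while holding a spinlock: */
	if (vmci_drv_schedule_delayed_work(example_deferred, ctx) < VMCI_SUCCESS)
		pr_warn("Failed to defer work.");
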
+
+/*
+ * Waits on the given event.  Returns true if the wait was interrupted
+ * by a signal, false otherwise.
+ */
+bool vmci_drv_wait_on_event_intr(wait_queue_head_t *event,
+				 VMCIEventReleaseCB releaseCB,
+				 void *clientData)
+{
+	DECLARE_WAITQUEUE(wait, current);
+
+	if (event == NULL || releaseCB == NULL)
+		return false;
+
+	add_wait_queue(event, &wait);
+	current->state = TASK_INTERRUPTIBLE;
+
+	/*
+	 * Release the lock or other primitive that makes it possible for us to
+	 * put the current thread on the wait queue without missing the signal.
+	 * I.e. on Linux we need to put ourselves on the wait queue and set our
+	 * state to TASK_INTERRUPTIBLE without another thread signalling us.
+	 * The releaseCB is used to synchronize this.
+	 */
+	releaseCB(clientData);
+
+	schedule();
+	current->state = TASK_RUNNING;
+	remove_wait_queue(event, &wait);
+
+	return signal_pending(current);
+}
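
A minimal sketch (illustration only, not part of the patch) of the
calling convention described above: the caller holds a lock protecting
the condition it waits on and supplies a release callback so that the
lock is dropped only once the thread is queued.  struct example_endpoint
and its fields are hypothetical, and the release callback is assumed to
simply take the clientData pointer.

	/* Illustrative sketch only; not part of this patch. */
	struct example_endpoint {
		spinlock_t lock;
		wait_queue_head_t waitQueue;
		bool dataReady;
	};

	static void example_release(void *clientData)
	{
		struct example_endpoint *ep = clientData;

		spin_unlock(&ep->lock);
	}

	static int example_wait_for_data(struct example_endpoint *ep)
	{
		spin_lock(&ep->lock);
		while (!ep->dataReady) {
			/*
			 * example_release() drops ep->lock once this thread is
			 * on the wait queue; the lock is not held on return.
			 */
			if (vmci_drv_wait_on_event_intr(&ep->waitQueue,
							example_release, ep))
				return -EINTR;	/* Interrupted by a signal. */

			spin_lock(&ep->lock);
		}
		spin_unlock(&ep->lock);
		return 0;
	}
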
+
+/*
+ * Cleans up the host specific components of the VMCI module.
+ */
+static void drv_host_cleanup(void)
+{
+	vmci_ctx_release_ctx(hostContext);
+	vmci_qp_broker_exit();
+}
+
+/*
+ * Checks whether the VMCI device is enabled.
+ */
+static bool drv_device_enabled(void)
+{
+	return vmci_guest_code_active()
+		|| vmci_host_code_active();
+}
+
+/*
+ * Gets called with the new context ID when the context ID of the VM is
+ * updated or the VM is resumed.
+ */
+static void drv_util_cid_update(uint32_t subID,
+				struct vmci_event_data *eventData,
+				void *clientData)
+{
+	struct vmci_event_payld_ctx *evPayload =
+		vmci_event_data_payload(eventData);
+
+	if (subID != ctxUpdateSubID) {
+		pr_devel("Invalid subscriber (ID=0x%x).", subID);
+		return;
+	}
+
+	if (eventData == NULL || evPayload->contextID == VMCI_INVALID_ID) {
+		pr_devel("Invalid event data.");
+		return;
+	}
+
+	pr_devel("Updating context from (ID=0x%x) to (ID=0x%x) on event " \
+		 "(type=%d).", atomic_read(&vmContextID), evPayload->contextID,
+		 eventData->event);
+
+	atomic_set(&vmContextID, evPayload->contextID);
+}
+
+/*
+ * Subscribe to context id update event.
+ */
+static void __devinit drv_util_init(void)
+{
+	/*
+	 * We subscribe to the VMCI_EVENT_CTX_ID_UPDATE here so we can
+	 * update the internal context id when needed.
+	 */
+	if (VMCIEvent_Subscribe
+	    (VMCI_EVENT_CTX_ID_UPDATE, VMCI_FLAG_EVENT_NONE,
+	     drv_util_cid_update, NULL, &ctxUpdateSubID) < VMCI_SUCCESS) {
+		pr_warn("Failed to subscribe to event (type=%d).",
+			VMCI_EVENT_CTX_ID_UPDATE);
+	}
+}
+
+static void vmci_util_exit(void)
+{
+	if (VMCIEvent_Unsubscribe(ctxUpdateSubID) < VMCI_SUCCESS) {
+		pr_warn("Failed to unsubscribe to event (type=%d) with " \
+			"subscriber (ID=0x%x).", VMCI_EVENT_CTX_ID_UPDATE,
+			ctxUpdateSubID);
+	}
+}
+
+/*
+ * Verify that the host supports the hypercalls we need. If it does not,
+ * try to find fallback hypercalls and use those instead.  Returns
+ * true if required hypercalls (or fallback hypercalls) are
+ * supported by the host, false otherwise.
+ */
+static bool drv_check_host_caps(void)
+{
+	bool result;
+	struct vmci_resource_query_msg *msg;
+	uint32_t msgSize = sizeof(struct vmci_resource_query_hdr) +
+		VMCI_UTIL_NUM_RESOURCES * sizeof(uint32_t);
+	struct vmci_dg *checkMsg = kmalloc(msgSize, GFP_KERNEL);
+
+	if (checkMsg == NULL) {
+		pr_warn("Check host: Insufficient memory.");
+		return false;
+	}
+
+	checkMsg->dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_RESOURCES_QUERY);
+	checkMsg->src = VMCI_ANON_SRC_HANDLE;
+	checkMsg->payloadSize = msgSize - VMCI_DG_HEADERSIZE;
+	msg = (struct vmci_resource_query_msg *)VMCI_DG_PAYLOAD(checkMsg);
+
+	msg->numResources = VMCI_UTIL_NUM_RESOURCES;
+	msg->resources[0] = VMCI_GET_CONTEXT_ID;
+
+	/* Checks that hypercalls are supported. */
+	result = (0x1 == vmci_send_dg(checkMsg));
+	kfree(checkMsg);
+
+	pr_info("Host capability check: %s.",
+		result ? "PASSED" : "FAILED");
+
+	/* We need the vector. There are no fallbacks. */
+	return result;
+}
+
+/*
+ * Reads datagrams from the data in port and dispatches them. We
+ * always start reading datagrams into only the first page of the
+ * datagram buffer. If the datagrams don't fit into one page, we
+ * use the maximum datagram buffer size for the remainder of the
+ * invocation. This is a simple heuristic for not penalizing
+ * small datagrams.
+ *
+ * This function assumes that it has exclusive access to the data
+ * in port for the duration of the call.
+ */
+static void drv_read_dgs_from_port(int ioHandle,
+				   unsigned short int dgInPort,
+				   uint8_t *dgInBuffer,
+				   size_t dgInBufferSize)
+{
+	struct vmci_dg *dg;
+	size_t currentDgInBufferSize = PAGE_SIZE;
+	size_t remainingBytes;
+
+	ASSERT(dgInBufferSize >= PAGE_SIZE);
+
+	insb(dgInPort, dgInBuffer, currentDgInBufferSize);
+	dg = (struct vmci_dg *)dgInBuffer;
+	remainingBytes = currentDgInBufferSize;
+
+	while (dg->dst.resource != VMCI_INVALID_ID
+	       || remainingBytes > PAGE_SIZE) {
+		unsigned dgInSize;
+
+		/*
+		 * When the input buffer spans multiple pages, a datagram can
+		 * start on any page boundary in the buffer.
+		 */
+		if (dg->dst.resource == VMCI_INVALID_ID) {
+			ASSERT(remainingBytes > PAGE_SIZE);
+			dg = (struct vmci_dg *)roundup((uintptr_t)
+						       dg + 1, PAGE_SIZE);
+			ASSERT((uint8_t *) dg <
+			       dgInBuffer + currentDgInBufferSize);
+			remainingBytes =
+				(size_t) (dgInBuffer + currentDgInBufferSize -
+					  (uint8_t *) dg);
+			continue;
+		}
+
+		dgInSize = VMCI_DG_SIZE_ALIGNED(dg);
+
+		if (dgInSize <= dgInBufferSize) {
+			int result;
+
+			/*
+			 * If the remaining bytes in the datagram
+			 * buffer don't contain the complete
+			 * datagram, we first make sure we have enough
+			 * room for it and then we read the remainder
+			 * of the datagram and possibly any following
+			 * datagrams.
+			 */
+			if (dgInSize > remainingBytes) {
+				if (remainingBytes != currentDgInBufferSize) {
+
+					/*
+					 * We move the partial
+					 * datagram to the front and
+					 * read the remainder of the
+					 * datagram and possibly any
+					 * following datagrams into the
+					 * following bytes.
+					 */
+					memmove(dgInBuffer, dgInBuffer +
+						currentDgInBufferSize -
+						remainingBytes, remainingBytes);
+					dg = (struct vmci_dg *)
+						dgInBuffer;
+				}
+
+				if (currentDgInBufferSize != dgInBufferSize)
+					currentDgInBufferSize = dgInBufferSize;
+
+				insb(dgInPort, dgInBuffer + remainingBytes,
+				     currentDgInBufferSize - remainingBytes);
+			}
+
+			/*
+			 * We special case event datagrams from the
+			 * hypervisor.
+			 */
+			if (dg->src.context == VMCI_HYPERVISOR_CONTEXT_ID
+			    && dg->dst.resource == VMCI_EVENT_HANDLER) {
+				result = vmci_event_dispatch(dg);
+			} else {
+				result = vmci_dg_invoke_guest_handler(dg);
+			}
+			if (result < VMCI_SUCCESS) {
+				pr_devel("Datagram with resource " \
+					 "(ID=0x%x) failed (err=%d).",
+					 dg->dst.resource, result);
+			}
+
+			/* On to the next datagram. */
+			dg = (struct vmci_dg *)((uint8_t *) dg +
+						dgInSize);
+		} else {
+			size_t bytesToSkip;
+
+			/*
+			 * Datagram doesn't fit in datagram buffer of maximal
+			 * size. We drop it.
+			 */
+			pr_devel("Failed to receive datagram (size=%u bytes).",
+				 dgInSize);
+
+			bytesToSkip = dgInSize - remainingBytes;
+			if (currentDgInBufferSize != dgInBufferSize)
+				currentDgInBufferSize = dgInBufferSize;
+
+			for (;;) {
+				insb(dgInPort, dgInBuffer,
+				     currentDgInBufferSize);
+				if (bytesToSkip <= currentDgInBufferSize)
+					break;
+
+				bytesToSkip -= currentDgInBufferSize;
+			}
+			dg = (struct vmci_dg *)(dgInBuffer + bytesToSkip);
+		}
+
+		remainingBytes =
+			(size_t) (dgInBuffer + currentDgInBufferSize -
+				  (uint8_t *) dg);
+
+		if (remainingBytes < VMCI_DG_HEADERSIZE) {
+			/* Get the next batch of datagrams. */
+
+			insb(dgInPort, dgInBuffer, currentDgInBufferSize);
+			dg = (struct vmci_dg *)dgInBuffer;
+			remainingBytes = currentDgInBufferSize;
+		}
+	}
+}
+
+/*
+ * Initializes VMCI components shared between guest and host
+ * driver. This registers core hypercalls.
+ */
+static int __init drv_shared_init(void)
+{
+	int result;
+
+	result = vmci_resource_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIResource (result=%d).",
+			result);
+		goto errorExit;
+	}
+
+	result = vmci_ctx_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIContext (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_dg_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIDatagram (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_event_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIEvent (result=%d).",
+			result);
+		goto resourceExit;
+	}
+
+	result = vmci_dbell_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIDoorbell (result=%d).",
+			result);
+		goto eventExit;
+	}
+
+	pr_notice("shared components initialized.");
+	return VMCI_SUCCESS;
+
+eventExit:
+	vmci_event_exit();
+resourceExit:
+	vmci_resource_exit();
+errorExit:
+	return result;
+}
+
+/*
+ * Cleans up VMCI components shared between guest and host
+ * driver.
+ */
+static void drv_shared_cleanup(void)
+{
+	vmci_event_exit();
+	vmci_resource_exit();
+}
+
+static const struct file_operations vmuser_fops;
+static struct vmci_linux_state linuxState = {
+	.misc = {
+		.name = MODULE_NAME,
+		.minor = MISC_DYNAMIC_MINOR,
+		.fops = &vmuser_fops,
+	},
+	.activeContexts = ATOMIC_INIT(0),
+};
+
+/*
+ * Called on open of /dev/vmci.
+ */
+static int drv_driver_open(struct inode *inode,
+			   struct file *filp)
+{
+	struct vmci_linux *vmciLinux;
+
+	vmciLinux = kzalloc(sizeof(struct vmci_linux), GFP_KERNEL);
+	if (vmciLinux == NULL)
+		return -ENOMEM;
+
+	vmciLinux->ctType = VMCIOBJ_NOT_SET;
+	mutex_init(&vmciLinux->lock);
+	filp->private_data = vmciLinux;
+
+	return 0;
+}
+
+/*
+ * Called on close of /dev/vmci, most often when the process
+ * exits.
+ */
+static int drv_driver_close(struct inode *inode,
+			    struct file *filp)
+{
+	struct vmci_linux *vmciLinux;
+
+	vmciLinux = (struct vmci_linux *)filp->private_data;
+	ASSERT(vmciLinux);
+
+	if (vmciLinux->ctType == VMCIOBJ_CONTEXT) {
+		ASSERT(vmciLinux->context);
+
+		vmci_ctx_release_ctx(vmciLinux->context);
+		vmciLinux->context = NULL;
+
+		/*
+		 * The number of active contexts is used to track whether any
+		 * VMX'en are using the host personality. It is incremented when
+		 * a context is created through the IOCTL_VMCI_INIT_CONTEXT
+		 * ioctl.
+		 */
+		atomic_dec(&linuxState.activeContexts);
+	}
+	vmciLinux->ctType = VMCIOBJ_NOT_SET;
+
+	kfree(vmciLinux);
+	filp->private_data = NULL;
+	return 0;
+}
+
+/*
+ * This is used to wake up the VMX when a VMCI call arrives, or
+ * to wake up select() or poll() at the next clock tick.
+ */
+static unsigned int drv_driver_poll(struct file *filp, poll_table *wait)
+{
+	struct vmci_linux *vmciLinux = (struct vmci_linux *)filp->private_data;
+	unsigned int mask = 0;
+
+	if (vmciLinux->ctType == VMCIOBJ_CONTEXT) {
+		ASSERT(vmciLinux->context != NULL);
+
+		/* Check for VMCI calls to this VM context. */
+		if (wait != NULL) {
+			poll_wait(filp,
+				  &vmciLinux->context->hostContext.waitQueue,
+				  wait);
+		}
+
+		spin_lock(&vmciLinux->context->lock);
+		if (vmciLinux->context->pendingDatagrams > 0 ||
+		    vmci_handle_arr_get_size(vmciLinux->context->
+					     pendingDoorbellArray) > 0) {
+			mask = POLLIN;
+		}
+		spin_unlock(&vmciLinux->context->lock);
+	}
+	return mask;
+}
+
+static int __init drv_host_init(void)
+{
+	int error;
+	int result;
+
+
+	result = vmci_ctx_init_ctx(VMCI_HOST_CONTEXT_ID,
+				   VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS,
+				   -1, VMCI_VERSION, NULL, &hostContext);
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize VMCIContext (result=%d).",
+			result);
+		return -ENOMEM;
+	}
+
+	result = vmci_qp_broker_init();
+	if (result < VMCI_SUCCESS) {
+		pr_warn("Failed to initialize broker (result=%d).",
+			result);
+		vmci_ctx_release_ctx(hostContext);
+		return -ENOMEM;
+	}
+
+	error = misc_register(&linuxState.misc);
+	if (error) {
+		pr_warn("Module registration error " \
+			"(name=%s, major=%d, minor=%d, err=%d).",
+			linuxState.misc.name, MISC_MAJOR, linuxState.misc.minor,
+			error);
+		drv_host_cleanup();
+		return error;
+	}
+
+	pr_notice("Module registered (name=%s, major=%d, minor=%d).", \
+		  linuxState.misc.name, MISC_MAJOR, linuxState.misc.minor);
+
+	return 0;
+}
+
+/*
+ * Copies the handles of a handle array into a user buffer, and
+ * returns the new length in userBufferSize. If the copy to the
+ * user buffer fails, the function still returns VMCI_SUCCESS,
+ * but retval != 0.
+ */
+static int drv_cp_harray_to_user(void __user *userBufUVA,
+				 uint64_t *userBufSize,
+				 struct vmci_handle_arr *handleArray,
+				 int *retval)
+{
+	uint32_t arraySize = 0;
+	struct vmci_handle *handles;
+
+	if (handleArray)
+		arraySize = vmci_handle_arr_get_size(handleArray);
+
+	if (arraySize * sizeof *handles > *userBufSize)
+		return VMCI_ERROR_MORE_DATA;
+
+	*userBufSize = arraySize * sizeof *handles;
+	if (*userBufSize)
+		*retval = copy_to_user(userBufUVA,
+				       vmci_handle_arr_get_handles
+				       (handleArray), *userBufSize);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Helper function for creating queue pair and copying the result
+ * to user memory.
+ */
+static int drv_qp_broker_alloc(struct vmci_handle handle,
+			       uint32_t peer,
+			       uint32_t flags,
+			       uint64_t produceSize,
+			       uint64_t consumeSize,
+			       struct vmci_qp_page_store *pageStore,
+			       struct vmci_ctx *context,
+			       bool vmToVm,
+			       void __user *resultUVA)
+{
+	uint32_t cid;
+	int result;
+	int retval;
+
+	cid = vmci_ctx_get_id(context);
+
+	result =
+		vmci_qp_broker_alloc(handle, peer, flags,
+				     VMCI_NO_PRIVILEGE_FLAGS, produceSize,
+				     consumeSize, pageStore, context);
+	if (result == VMCI_SUCCESS && vmToVm)
+		result = VMCI_SUCCESS_QUEUEPAIR_CREATE;
+
+	retval = copy_to_user(resultUVA, &result, sizeof result);
+	if (retval) {
+		retval = -EFAULT;
+		if (result >= VMCI_SUCCESS) {
+			result = vmci_qp_broker_detach(handle, context);
+			ASSERT(result >= VMCI_SUCCESS);
+		}
+	}
+
+	return retval;
+}
+
+/*
+ * Lock physical page backing a given user VA.
+ */
+static struct page *drv_user_va_lock_page(uintptr_t addr)
+{
+	struct page *page = NULL;
+	int retval;
+
+	down_read(&current->mm->mmap_sem);
+	retval = get_user_pages(current, current->mm, addr,
+				1, 1, 0, &page, NULL);
+	up_read(&current->mm->mmap_sem);
+
+	if (retval != 1)
+		return NULL;
+
+	return page;
+}
+
+/*
+ * Locks the physical page backing a given user VA and maps it into
+ * kernel address space.  The range of the mapped memory should be
+ * within a single page, otherwise an error is returned.
+ */
+static int drv_map_bool_ptr(uintptr_t notifyUVA,
+			    struct page **p,
+			    bool **notifyPtr)
+{
+	if (!access_ok(VERIFY_WRITE, (void __user *)notifyUVA,
+		       sizeof(**notifyPtr)) ||
+	    (((notifyUVA + sizeof(**notifyPtr) - 1) & ~(PAGE_SIZE - 1)) !=
+	     (notifyUVA & ~(PAGE_SIZE - 1)))) {
+		return -EINVAL;
+	}
+
+	*p = drv_user_va_lock_page(notifyUVA);
+	if (*p == NULL)
+		return -EAGAIN;
+
+	*notifyPtr =
+		(bool *) ((uint8_t *) kmap(*p) + (notifyUVA & (PAGE_SIZE - 1)));
+	return 0;
+}
+
+/*
+ * Sets up a given context for notify to work.  Calls drv_map_bool_ptr()
+ * which maps the notify boolean in user VA in kernel space.
+ */
+static int drv_setup_notify(struct vmci_ctx *context,
+			    uintptr_t notifyUVA)
+{
+	int retval;
+
+	if (context->notify) {
+		pr_warn("Notify mechanism is already set up.");
+		return VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	retval = drv_map_bool_ptr(notifyUVA, &context->notifyPage,
+				  &context->notify);
+	if (retval == 0) {
+		vmci_ctx_check_signal_notify(context);
+		return VMCI_SUCCESS;
+	}
+
+	return VMCI_ERROR_GENERIC;
+}
+
+static long drv_driver_unlocked_ioctl(struct file *filp,
+				      u_int iocmd,
+				      unsigned long ioarg)
+{
+	struct vmci_linux *vmciLinux = (struct vmci_linux *)filp->private_data;
+	int retval = 0;
+
+	switch (iocmd) {
+	case IOCTL_VMCI_VERSION2:{
+		int verFromUser;
+
+		if (copy_from_user
+		    (&verFromUser, (void *)ioarg, sizeof verFromUser)) {
+			retval = -EFAULT;
+			break;
+		}
+
+		vmciLinux->userVersion = verFromUser;
+	}
+		/* Fall through. */
+	case IOCTL_VMCI_VERSION:
+		/*
+		 * The basic logic here is:
+		 *
+		 * If the user sends in a version of 0 tell it our version.
+		 * If the user didn't send in a version, tell it our version.
+		 * If the user sent in an old version, tell it -its- version.
+		 * If the user sent in a newer version, tell it our version.
+		 *
+		 * The rationale behind telling the caller its version is that
+		 * Workstation 6.5 required the VMX and the VMCI kernel
+		 * module versions to be in sync.  All new VMX users will be
+		 * programmed to
+		 * handle the VMCI kernel module version.
+		 */
+
+		if (vmciLinux->userVersion > 0 &&
+		    vmciLinux->userVersion < VMCI_VERSION_HOSTQP) {
+			retval = vmciLinux->userVersion;
+		} else {
+			retval = VMCI_VERSION;
+		}
+		break;
+
+	case IOCTL_VMCI_INIT_CONTEXT:{
+		struct vmci_init_blk initBlock;
+		uid_t user;
+
+		retval = copy_from_user(&initBlock, (void *)ioarg,
+					sizeof initBlock);
+		if (retval != 0) {
+			pr_info("Error reading init block.");
+			retval = -EFAULT;
+			break;
+		}
+
+		mutex_lock(&vmciLinux->lock);
+		if (vmciLinux->ctType != VMCIOBJ_NOT_SET) {
+			pr_info("Received VMCI init on initialized handle.");
+			retval = -EINVAL;
+			goto init_release;
+		}
+
+		if (initBlock.flags & ~VMCI_PRIVILEGE_FLAG_RESTRICTED) {
+			pr_info("Unsupported VMCI restriction flag.");
+			retval = -EINVAL;
+			goto init_release;
+		}
+
+		user = current_uid();
+		retval = vmci_ctx_init_ctx(initBlock.cid,
+					   initBlock.flags,
+					   0, vmciLinux->userVersion,
+					   &user, &vmciLinux->context);
+		if (retval < VMCI_SUCCESS) {
+			pr_info("Error initializing context.");
+			retval = (retval == VMCI_ERROR_DUPLICATE_ENTRY) ?
+				-EEXIST : -EINVAL;
+			goto init_release;
+		}
+
+		/*
+		 * Copy the cid back to user level; we do this to allow the VMX
+		 * to enforce its policy on cid generation.
+		 */
+		initBlock.cid = vmci_ctx_get_id(vmciLinux->context);
+		retval = copy_to_user((void *)ioarg, &initBlock,
+				      sizeof initBlock);
+		if (retval != 0) {
+			vmci_ctx_release_ctx(vmciLinux->context);
+			vmciLinux->context = NULL;
+			pr_info("Error writing init block.");
+			retval = -EFAULT;
+			goto init_release;
+		}
+
+		ASSERT(initBlock.cid != VMCI_INVALID_ID);
+		vmciLinux->ctType = VMCIOBJ_CONTEXT;
+		atomic_inc(&linuxState.activeContexts);
+
+init_release:
+		mutex_unlock(&vmciLinux->lock);
+		break;
+	}
+
+	case IOCTL_VMCI_DATAGRAM_SEND:{
+		struct vmci_dg_snd_rcv_info sendInfo;
+		struct vmci_dg *dg = NULL;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_warn("Ioctl only valid for context handle (iocmd=%d).",
+				iocmd);
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&sendInfo, (void *)ioarg,
+					sizeof sendInfo);
+		if (retval) {
+			pr_warn("copy_from_user failed.");
+			retval = -EFAULT;
+			break;
+		}
+
+		if (sendInfo.len > VMCI_MAX_DG_SIZE) {
+			pr_warn("Datagram too big (size=%d).",
+				sendInfo.len);
+			retval = -EINVAL;
+			break;
+		}
+
+		if (sendInfo.len < sizeof *dg) {
+			pr_warn("Datagram too small (size=%d).",
+				sendInfo.len);
+			retval = -EINVAL;
+			break;
+		}
+
+		dg = kmalloc(sendInfo.len, GFP_KERNEL);
+		if (dg == NULL) {
+			pr_info("Cannot allocate memory to dispatch datagram.");
+			retval = -ENOMEM;
+			break;
+		}
+
+		retval = copy_from_user(dg,
+					(char *)(uintptr_t) sendInfo.addr,
+					sendInfo.len);
+		if (retval != 0) {
+			pr_info("Error getting datagram (err=%d).",
+				retval);
+			kfree(dg);
+			retval = -EFAULT;
+			break;
+		}
+
+		pr_devel("Datagram dst (handle=0x%x:0x%x) src " \
+			 "(handle=0x%x:0x%x), payload " \
+			 "(size=%llu bytes).",
+			 dg->dst.context, dg->dst.resource,
+			 dg->src.context, dg->src.resource,
+			 (unsigned long long) dg->payloadSize);
+
+		/* Get source context id. */
+		ASSERT(vmciLinux->context);
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		ASSERT(cid != VMCI_INVALID_ID);
+		sendInfo.result = vmci_dg_dispatch(cid, dg, true);
+		kfree(dg);
+		retval =
+			copy_to_user((void *)ioarg, &sendInfo,
+				     sizeof sendInfo);
+		break;
+	}
+
+	case IOCTL_VMCI_DATAGRAM_RECEIVE:{
+		struct vmci_dg_snd_rcv_info recvInfo;
+		struct vmci_dg *dg = NULL;
+		size_t size;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_warn("Ioctl only valid for context handle (iocmd=%d).",
+				iocmd);
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&recvInfo, (void *)ioarg,
+					sizeof recvInfo);
+		if (retval) {
+			pr_warn("copy_from_user failed.");
+			retval = -EFAULT;
+			break;
+		}
+
+		ASSERT(vmciLinux->ctType == VMCIOBJ_CONTEXT);
+		ASSERT(vmciLinux->context);
+		size = recvInfo.len;
+		recvInfo.result =
+			vmci_ctx_dequeue_dg(vmciLinux->context,
+					    &size, &dg);
+
+		if (recvInfo.result >= VMCI_SUCCESS) {
+			ASSERT(dg);
+			retval = copy_to_user((void *)((uintptr_t)
+						       recvInfo.addr),
+					      dg, VMCI_DG_SIZE(dg));
+			kfree(dg);
+			if (retval != 0)
+				break;
+		}
+		retval = copy_to_user((void *)ioarg, &recvInfo,
+				      sizeof recvInfo);
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_ALLOC:{
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_ALLOC only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOVMVM) {
+			struct vmci_qp_alloc_info_vmvm queuePairAllocInfo;
+			struct vmci_qp_alloc_info_vmvm *info =
+				(struct vmci_qp_alloc_info_vmvm *)ioarg;
+
+			retval = copy_from_user(&queuePairAllocInfo,
+						(void *)ioarg,
+						sizeof queuePairAllocInfo);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+
+			retval = drv_qp_broker_alloc(
+				queuePairAllocInfo.handle,
+				queuePairAllocInfo.peer,
+				queuePairAllocInfo.flags,
+				queuePairAllocInfo.produceSize,
+				queuePairAllocInfo.consumeSize,
+				NULL, vmciLinux->context,
+				true, &info->result);
+		} else {
+			struct vmci_qp_alloc_info
+				queuePairAllocInfo;
+			struct vmci_qp_alloc_info *info =
+				(struct vmci_qp_alloc_info *)ioarg;
+			struct vmci_qp_page_store pageStore;
+
+			retval = copy_from_user(&queuePairAllocInfo,
+						(void *)ioarg,
+						sizeof queuePairAllocInfo);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+
+			pageStore.pages = queuePairAllocInfo.ppnVA;
+			pageStore.len = queuePairAllocInfo.numPPNs;
+
+			retval = drv_qp_broker_alloc(
+				queuePairAllocInfo.handle,
+				queuePairAllocInfo.peer,
+				queuePairAllocInfo.flags,
+				queuePairAllocInfo.produceSize,
+				queuePairAllocInfo.consumeSize,
+				&pageStore, vmciLinux->context,
+				false, &info->result);
+		}
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_SETVA:{
+		struct vmci_qp_set_va_info setVAInfo;
+		struct vmci_qp_set_va_info *info =
+			(struct vmci_qp_set_va_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETVA only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOVMVM) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETVA not supported for this VMX version.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&setVAInfo, (void *)ioarg,
+					sizeof setVAInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if (setVAInfo.va) {
+			/*
+			 * VMX is passing down a new VA for the queue
+			 * pair mapping.
+			 */
+			result = vmci_qp_broker_map(setVAInfo.handle,
+						    vmciLinux->context,
+						    setVAInfo.va);
+		} else {
+			/*
+			 * The queue pair is about to be unmapped by
+			 * the VMX.
+			 */
+			result = vmci_qp_broker_unmap(setVAInfo.handle,
+						      vmciLinux->context, 0);
+		}
+
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE:{
+		struct vmci_qp_page_file_info pageFileInfo;
+		struct vmci_qp_page_file_info *info =
+			(struct vmci_qp_page_file_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->userVersion < VMCI_VERSION_HOSTQP ||
+		    vmciLinux->userVersion >= VMCI_VERSION_NOVMVM) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE not " \
+				"supported this VMX (version=%d).",
+				vmciLinux->userVersion);
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_SETPAGEFILE only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&pageFileInfo, (void *)ioarg,
+					sizeof *info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		/*
+		 * Communicate success pre-emptively to the caller.
+		 * Note that the basic premise is that it is incumbent
+		 * upon the caller not to look at the info.result
+		 * field until after the ioctl() returns.  And then,
+		 * only if the ioctl() result indicates no error.  We
+		 * send up the SUCCESS status before calling
+		 * SetPageStore() because failing to copy up the
+		 * result code means unwinding the SetPageStore().
+		 *
+		 * It turns out the logic to unwind a SetPageStore()
+		 * opens a can of worms.  For example, if a host had
+		 * created the QueuePair and a guest attaches and
+		 * SetPageStore() is successful but writing success
+		 * fails, then ... the host has to be stopped from
+		 * writing (anymore) data into the QueuePair.  That
+		 * means an additional test in the VMCI_Enqueue() code
+		 * path.  Ugh.
+		 */
+
+		result = VMCI_SUCCESS;
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval == 0) {
+			result = vmci_qp_broker_set_page_store(
+				pageFileInfo.handle,
+				pageFileInfo.produceVA,
+				pageFileInfo.consumeVA,
+				vmciLinux->context);
+			if (result < VMCI_SUCCESS) {
+				retval = copy_to_user(&info->result,
+						      &result,
+						      sizeof result);
+				if (retval != 0) {
+					/*
+					 * Note that in this case the
+					 * SetPageStore() call failed
+					 * but we were unable to
+					 * communicate that to the
+					 * caller (because the
+					 * copy_to_user() call
+					 * failed).  So, if we simply
+					 * return an error (in this
+					 * case -EFAULT) then the
+					 * caller will know that the
+					 * SetPageStore failed even
+					 * though we couldn't put the
+					 * result code in the result
+					 * field and indicate exactly
+					 * why it failed.
+					 *
+					 * That says nothing about the
+					 * issue where we were once
+					 * able to write to the
+					 * caller's info memory and
+					 * now can't.  Something more
+					 * serious is probably going
+					 * on than the fact that
+					 * SetPageStore() didn't work.
+					 */
+					retval = -EFAULT;
+				}
+			}
+
+		} else {
+			/*
+			 * In this case, we can't write a result field of the
+			 * caller's info block.  So, we don't even try to
+			 * SetPageStore().
+			 */
+			retval = -EFAULT;
+		}
+
+		break;
+	}
+
+	case IOCTL_VMCI_QUEUEPAIR_DETACH:{
+		struct vmci_qp_dtch_info detachInfo;
+		struct vmci_qp_dtch_info *info =
+			(struct vmci_qp_dtch_info *)ioarg;
+		int32_t result;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_QUEUEPAIR_DETACH only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&detachInfo, (void *)ioarg,
+					sizeof detachInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		result = vmci_qp_broker_detach(detachInfo.handle,
+					       vmciLinux->context);
+		if (result == VMCI_SUCCESS
+		    && vmciLinux->userVersion < VMCI_VERSION_NOVMVM)
+			result = VMCI_SUCCESS_LAST_DETACH;
+
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_ADD_NOTIFICATION:{
+		struct vmci_ctx_info arInfo;
+		struct vmci_ctx_info *info =
+			(struct vmci_ctx_info *)ioarg;
+		int32_t result;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_ADD_NOTIFICATION only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&arInfo, (void *)ioarg,
+					sizeof arInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		result = vmci_ctx_add_notification(cid, arInfo.remoteCID);
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_REMOVE_NOTIFICATION:{
+		struct vmci_ctx_info arInfo;
+		struct vmci_ctx_info *info =
+			(struct vmci_ctx_info *)ioarg;
+		int32_t result;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_REMOVE_NOTIFICATION only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&arInfo, (void *)ioarg,
+					sizeof arInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		result = vmci_ctx_remove_notification(cid,
+						      arInfo.remoteCID);
+		retval = copy_to_user(&info->result, &result, sizeof result);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_GET_CPT_STATE:{
+		struct vmci_ctx_chkpt_buf_info getInfo;
+		uint32_t cid;
+		char *cptBuf;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_GET_CPT_STATE only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&getInfo, (void *)ioarg,
+					sizeof getInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		getInfo.result =
+			vmci_ctx_get_chkpt_state(cid,
+						 getInfo.cptType,
+						 &getInfo.bufSize,
+						 &cptBuf);
+		if (getInfo.result == VMCI_SUCCESS && getInfo.bufSize) {
+			retval = copy_to_user((void *)(uintptr_t)
+					      getInfo.cptBuf, cptBuf,
+					      getInfo.bufSize);
+			kfree(cptBuf);
+			if (retval) {
+				retval = -EFAULT;
+				break;
+			}
+		}
+		retval = copy_to_user((void *)ioarg, &getInfo,
+				      sizeof getInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_CTX_SET_CPT_STATE:{
+		struct vmci_ctx_chkpt_buf_info setInfo;
+		uint32_t cid;
+		char *cptBuf;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_CTX_SET_CPT_STATE only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&setInfo, (void *)ioarg,
+					sizeof setInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cptBuf = kmalloc(setInfo.bufSize, GFP_KERNEL);
+		if (cptBuf == NULL) {
+			pr_info("Cannot allocate memory to set cpt state (type=%d).",
+				setInfo.cptType);
+			retval = -ENOMEM;
+			break;
+		}
+		retval = copy_from_user(cptBuf,
+					(void *)(uintptr_t) setInfo.cptBuf,
+					setInfo.bufSize);
+		if (retval) {
+			kfree(cptBuf);
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		setInfo.result =
+			vmci_ctx_set_chkpt_state(cid,
+						 setInfo.cptType,
+						 setInfo.bufSize,
+						 cptBuf);
+		kfree(cptBuf);
+		retval = copy_to_user((void *)ioarg, &setInfo,
+				      sizeof setInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_GET_CONTEXT_ID:{
+		uint32_t cid = VMCI_HOST_CONTEXT_ID;
+
+		retval = copy_to_user((void *)ioarg, &cid, sizeof cid);
+		break;
+	}
+
+	case IOCTL_VMCI_SET_NOTIFY:{
+		struct vmci_set_notify_info notifyInfo;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_SET_NOTIFY only valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&notifyInfo, (void *)ioarg,
+					sizeof notifyInfo);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if ((uintptr_t) notifyInfo.notifyUVA !=
+		    (uintptr_t) NULL) {
+			notifyInfo.result =
+				drv_setup_notify(vmciLinux->context,
+						 (uintptr_t)
+						 notifyInfo.notifyUVA);
+		} else {
+			spin_lock(&vmciLinux->context->lock);
+			vmci_ctx_unset_notify(vmciLinux->context);
+			spin_unlock(&vmciLinux->context->lock);
+			notifyInfo.result = VMCI_SUCCESS;
+		}
+
+		retval = copy_to_user((void *)ioarg, &notifyInfo,
+				      sizeof notifyInfo);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_NOTIFY_RESOURCE:{
+		struct vmci_dbell_notify_resource_info info;
+		uint32_t cid;
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOTIFY) {
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE is invalid " \
+				"for current VMX versions.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE is only valid " \
+				"for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval = copy_from_user(&info, (void *)ioarg, sizeof info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		switch (info.action) {
+		case VMCI_NOTIFY_RESOURCE_ACTION_NOTIFY:
+			if (info.resource ==
+			    VMCI_NOTIFY_RESOURCE_DOOR_BELL) {
+				uint32_t flags = VMCI_NO_PRIVILEGE_FLAGS;
+				info.result =
+					vmci_ctx_notify_dbell(cid,
+							      info.handle,
+							      flags);
+			} else {
+				info.result = VMCI_ERROR_UNAVAILABLE;
+			}
+			break;
+		case VMCI_NOTIFY_RESOURCE_ACTION_CREATE:
+			info.result =
+				vmci_ctx_dbell_create(cid,
+						      info.handle);
+			break;
+		case VMCI_NOTIFY_RESOURCE_ACTION_DESTROY:
+			info.result =
+				vmci_ctx_dbell_destroy(cid,
+						       info.handle);
+			break;
+		default:
+			pr_info("IOCTL_VMCI_NOTIFY_RESOURCE got unknown " \
+				"action (action=%d).", info.action);
+			info.result = VMCI_ERROR_INVALID_ARGS;
+		}
+		retval = copy_to_user((void *)ioarg, &info,
+				      sizeof info);
+		if (retval)
+			retval = -EFAULT;
+
+		break;
+	}
+
+	case IOCTL_VMCI_NOTIFICATIONS_RECEIVE:{
+		struct vmci_ctx_notify_recv_info info;
+		struct vmci_handle_arr *dbHandleArray;
+		struct vmci_handle_arr *qpHandleArray;
+		uint32_t cid;
+
+		if (vmciLinux->ctType != VMCIOBJ_CONTEXT) {
+			pr_info("IOCTL_VMCI_NOTIFICATIONS_RECEIVE is only " \
+				"valid for contexts.");
+			retval = -EINVAL;
+			break;
+		}
+
+		if (vmciLinux->userVersion < VMCI_VERSION_NOTIFY) {
+			pr_info("IOCTL_VMCI_NOTIFICATIONS_RECEIVE is not " \
+				"supported for the current vmx version.");
+			retval = -EINVAL;
+			break;
+		}
+
+		retval =
+			copy_from_user(&info, (void *)ioarg, sizeof info);
+		if (retval) {
+			retval = -EFAULT;
+			break;
+		}
+
+		if ((info.dbHandleBufSize && !info.dbHandleBufUVA)
+		    || (info.qpHandleBufSize && !info.qpHandleBufUVA)) {
+			retval = -EINVAL;
+			break;
+		}
+
+		cid = vmci_ctx_get_id(vmciLinux->context);
+		info.result =
+			vmci_ctx_rcv_notifications_get(cid,
+						       &dbHandleArray,
+						       &qpHandleArray);
+		if (info.result == VMCI_SUCCESS) {
+			info.result = drv_cp_harray_to_user(
+				(void *)(uintptr_t)info.dbHandleBufUVA,
+				&info.dbHandleBufSize, dbHandleArray,
+				&retval);
+			if (info.result == VMCI_SUCCESS && !retval) {
+				info.result = drv_cp_harray_to_user(
+					(void *)(uintptr_t)info.qpHandleBufUVA,
+					&info.qpHandleBufSize, qpHandleArray,
+					&retval);
+			}
+			if (!retval) {
+				retval = copy_to_user((void *)ioarg,
+						      &info, sizeof info);
+			}
+			vmci_ctx_rcv_notifications_release(
+				cid, dbHandleArray, qpHandleArray,
+				info.result == VMCI_SUCCESS && !retval);
+		} else {
+			retval = copy_to_user((void *)ioarg, &info,
+					      sizeof info);
+		}
+		break;
+	}
+
+	default:
+		pr_warn("Unknown ioctl (iocmd=%d).", iocmd);
+		retval = -EINVAL;
+	}
+
+	return retval;
+}
+
+/*
+ * Reads and dispatches incoming datagrams.
+ */
+static void drv_dispatch_dgs(unsigned long data)
+{
+	struct vmci_device *dev = (struct vmci_device *)data;
+
+	if (dev == NULL) {
+		pr_devel("No virtual device present in %s.", __func__);
+		return;
+	}
+
+	if (data_buffer == NULL) {
+		pr_devel("No buffer present in %s.", __func__);
+		return;
+	}
+
+	drv_read_dgs_from_port((int)0,
+			       dev->ioaddr + VMCI_DATA_IN_ADDR,
+			       data_buffer, data_buffer_size);
+}
+DECLARE_TASKLET(vmci_dg_tasklet, drv_dispatch_dgs, (unsigned long)&vmci_dev);
+
+/*
+ * Scans the notification bitmap for raised flags, clears them
+ * and handles the notifications.
+ */
+static void drv_process_bitmap(unsigned long data)
+{
+	struct vmci_device *dev = (struct vmci_device *)data;
+
+	if (dev == NULL) {
+		pr_devel("No virtual device present in %s.", __func__);
+		return;
+	}
+
+	if (notification_bitmap == NULL) {
+		pr_devel("No bitmap present in %s.", __func__);
+		return;
+	}
+
+	vmci_dbell_scan_notification_entries(notification_bitmap);
+}
+DECLARE_TASKLET(vmci_bm_tasklet, drv_process_bitmap, (unsigned long)&vmci_dev);
+
+/*
+ * Enable MSI-X.  Try exclusive vectors first, then shared vectors.
+ */
+static int drv_enable_msix(struct pci_dev *pdev)
+{
+	int i;
+	int result;
+
+	for (i = 0; i < VMCI_MAX_INTRS; ++i) {
+		vmci_dev.msix_entries[i].entry = i;
+		vmci_dev.msix_entries[i].vector = i;
+	}
+
+	result = pci_enable_msix(pdev, vmci_dev.msix_entries, VMCI_MAX_INTRS);
+	if (result == 0)
+		vmci_dev.exclusive_vectors = true;
+	else if (result > 0)
+		result = pci_enable_msix(pdev, vmci_dev.msix_entries, 1);
+
+	return result;
+}
+
+/*
+ * Interrupt handler for legacy or MSI interrupt, or for first MSI-X
+ * interrupt (vector VMCI_INTR_DATAGRAM).
+ */
+static irqreturn_t drv_interrupt(int irq,
+				 void *clientdata)
+{
+	struct vmci_device *dev = clientdata;
+
+	if (dev == NULL) {
+		pr_devel("Irq %d for unknown device in %s.", irq, __func__);
+		return IRQ_NONE;
+	}
+
+	/*
+	 * If we are using MSI-X with exclusive vectors then we simply schedule
+	 * the datagram tasklet, since we know the interrupt was meant for us.
+	 * Otherwise we must read the ICR to determine what to do.
+	 */
+
+	if (dev->intr_type == VMCI_INTR_TYPE_MSIX && dev->exclusive_vectors) {
+		tasklet_schedule(&vmci_dg_tasklet);
+	} else {
+		unsigned int icr;
+
+		ASSERT(dev->intr_type == VMCI_INTR_TYPE_INTX ||
+		       dev->intr_type == VMCI_INTR_TYPE_MSI);
+
+		/* Acknowledge interrupt and determine what needs doing. */
+		icr = inl(dev->ioaddr + VMCI_ICR_ADDR);
+		if (icr == 0 || icr == ~0)
+			return IRQ_NONE;
+
+		if (icr & VMCI_ICR_DATAGRAM) {
+			tasklet_schedule(&vmci_dg_tasklet);
+			icr &= ~VMCI_ICR_DATAGRAM;
+		}
+
+		if (icr & VMCI_ICR_NOTIFICATION) {
+			tasklet_schedule(&vmci_bm_tasklet);
+			icr &= ~VMCI_ICR_NOTIFICATION;
+		}
+
+		if (icr != 0)
+			pr_info("Ignoring unknown interrupt cause (%u).", icr);
+	}
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Interrupt handler for MSI-X interrupt vector VMCI_INTR_NOTIFICATION,
+ * which is for the notification bitmap.  Will only get called if we are
+ * using MSI-X with exclusive vectors.
+ */
+static irqreturn_t drv_interrupt_bm(int irq,
+				    void *clientdata)
+{
+	struct vmci_device *dev = clientdata;
+
+	if (dev == NULL) {
+		pr_devel("Irq %d for unknown device in %s.", irq, __func__);
+		return IRQ_NONE;
+	}
+
+	/* For MSI-X we can just assume it was meant for us. */
+	ASSERT(dev->intr_type == VMCI_INTR_TYPE_MSIX && dev->exclusive_vectors);
+	tasklet_schedule(&vmci_bm_tasklet);
+
+	return IRQ_HANDLED;
+}
+
+/*
+ * Most of the initialization at module load time is done here.
+ */
+static int __devinit drv_probe_device(struct pci_dev *pdev,
+				      const struct pci_device_id *id)
+{
+	unsigned int ioaddr;
+	unsigned int ioaddr_size;
+	unsigned int capabilities;
+	int result;
+
+	pr_info("Probing for vmci/PCI.");
+
+	result = pci_enable_device(pdev);
+	if (result) {
+		pr_err("Cannot enable VMCI device %s: error %d",
+		       pci_name(pdev), result);
+		return result;
+	}
+	pci_set_master(pdev);	/* To enable QueuePair functionality. */
+	ioaddr = pci_resource_start(pdev, 0);
+	ioaddr_size = pci_resource_len(pdev, 0);
+
+	/*
+	 * Request I/O region with adjusted base address and size. The
+	 * adjusted values are needed and used if we release the
+	 * region in case of failure.
+	 */
+	if (!request_region(ioaddr, ioaddr_size, MODULE_NAME)) {
+		pr_info(MODULE_NAME ": Another driver already loaded " \
+			"for device in slot %s.", pci_name(pdev));
+		goto pci_disable;
+	}
+
+	pr_info("Found VMCI PCI device at %#x, irq %u.", ioaddr, pdev->irq);
+
+	/*
+	 * Verify that the VMCI Device supports the capabilities that
+	 * we need. If the device is missing capabilities that we would
+	 * like to use, check for fallback capabilities and use those
+	 * instead (so we can run a new VM on old hosts). Fail the load if
+	 * a required capability is missing and there is no fallback.
+	 *
+	 * Right now, we need datagrams. There are no fallbacks.
+	 */
+	capabilities = inl(ioaddr + VMCI_CAPS_ADDR);
+
+	if ((capabilities & VMCI_CAPS_DATAGRAM) == 0) {
+		pr_err("Device does not support datagrams.");
+		goto release;
+	}
+
+	/*
+	 * If the hardware supports notifications, we will use that as
+	 * well.
+	 */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		capabilities = VMCI_CAPS_DATAGRAM;
+		notification_bitmap = vmalloc(PAGE_SIZE);
+		if (notification_bitmap == NULL) {
+			pr_err("Device unable to allocate notification " \
+			       "bitmap.");
+		} else {
+			memset(notification_bitmap, 0, PAGE_SIZE);
+			capabilities |= VMCI_CAPS_NOTIFICATIONS;
+		}
+	} else {
+		capabilities = VMCI_CAPS_DATAGRAM;
+	}
+	pr_info("Using capabilities 0x%x.", capabilities);
+
+	/* Let the host know which capabilities we intend to use. */
+	outl(capabilities, ioaddr + VMCI_CAPS_ADDR);
+
+	/* Device struct initialization. */
+	mutex_lock(&vmci_dev.lock);
+	if (vmci_dev.enabled) {
+		pr_err("Device already enabled.");
+		goto unlock;
+	}
+
+	vmci_dev.ioaddr = ioaddr;
+	vmci_dev.ioaddr_size = ioaddr_size;
+	atomic_set(&vmci_dev.datagrams_allowed, 1);
+
+	/*
+	 * Register notification bitmap with device if that capability is
+	 * used
+	 */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		unsigned long bitmapPPN;
+		bitmapPPN = page_to_pfn(vmalloc_to_page(notification_bitmap));
+		if (!vmci_dbell_register_notification_bitmap(bitmapPPN)) {
+			pr_err("VMCI device unable to register notification " \
+			       "bitmap with PPN 0x%x.", (uint32_t) bitmapPPN);
+			goto datagram_disallow;
+		}
+	}
+
+	/* Check host capabilities. */
+	if (!drv_check_host_caps())
+		goto remove_bitmap;
+
+	/* Enable device. */
+	vmci_dev.enabled = true;
+	pci_set_drvdata(pdev, &vmci_dev);
+
+	/*
+	 * We do global initialization here because we need datagrams
+	 * during drv_util_init, since it registers for VMCI
+	 * events. If we ever support more than one VMCI device we
+	 * will have to create separate LateInit/EarlyExit functions
+	 * that can be used to do initialization/cleanup that depends
+	 * on the device being accessible.  We need to initialize VMCI
+	 * components before requesting an irq - the VMCI interrupt
+	 * handler uses these components, and it may be invoked once
+	 * request_irq() has registered the handler (as the irq line
+	 * may be shared).
+	 */
+	drv_util_init();
+
+	if (vmci_qp_guest_endpoints_init() < VMCI_SUCCESS)
+		goto util_exit;
+
+	/*
+	 * Enable interrupts.  Try MSI-X first, then MSI, and then fallback on
+	 * legacy interrupts.
+	 */
+	if (!vmci_disable_msix && !drv_enable_msix(pdev)) {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_MSIX;
+		vmci_dev.irq = vmci_dev.msix_entries[0].vector;
+	} else if (!vmci_disable_msi && !pci_enable_msi(pdev)) {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_MSI;
+		vmci_dev.irq = pdev->irq;
+	} else {
+		vmci_dev.intr_type = VMCI_INTR_TYPE_INTX;
+		vmci_dev.irq = pdev->irq;
+	}
+
+	/*
+	 * Request IRQ for legacy or MSI interrupts, or for first
+	 * MSI-X vector.
+	 */
+	result = request_irq(vmci_dev.irq, drv_interrupt, IRQF_SHARED,
+			     MODULE_NAME, &vmci_dev);
+	if (result) {
+		pr_err("Irq %u in use: %d", vmci_dev.irq, result);
+		goto components_exit;
+	}
+
+	/*
+	 * For MSI-X with exclusive vectors we need to request an
+	 * interrupt for each vector so that we get a separate
+	 * interrupt handler routine.  This allows us to distinguish
+	 * between the vectors.
+	 */
+	if (vmci_dev.exclusive_vectors) {
+		ASSERT(vmci_dev.intr_type == VMCI_INTR_TYPE_MSIX);
+		result = request_irq(vmci_dev.msix_entries[1].vector,
+				     drv_interrupt_bm, 0, MODULE_NAME,
+				     &vmci_dev);
+		if (result) {
+			pr_err("Irq %u in use: %d",
+			       vmci_dev.msix_entries[1].vector, result);
+			free_irq(vmci_dev.irq, &vmci_dev);
+			goto components_exit;
+		}
+	}
+
+	pr_info("Registered device.");
+	atomic_inc(&guestDeviceActive);
+	mutex_unlock(&vmci_dev.lock);
+
+	/* Enable specific interrupt bits. */
+	if (capabilities & VMCI_CAPS_NOTIFICATIONS) {
+		outl(VMCI_IMR_DATAGRAM | VMCI_IMR_NOTIFICATION,
+		     vmci_dev.ioaddr + VMCI_IMR_ADDR);
+	} else {
+		outl(VMCI_IMR_DATAGRAM, vmci_dev.ioaddr + VMCI_IMR_ADDR);
+	}
+
+	/* Enable interrupts. */
+	outl(VMCI_CONTROL_INT_ENABLE, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+	return 0;
+
+components_exit:
+	vmci_qp_guest_endpoints_exit();
+util_exit:
+	vmci_util_exit();
+	vmci_dev.enabled = false;
+	if (vmci_dev.intr_type == VMCI_INTR_TYPE_MSIX)
+		pci_disable_msix(pdev);
+	else if (vmci_dev.intr_type == VMCI_INTR_TYPE_MSI)
+		pci_disable_msi(pdev);
+
+remove_bitmap:
+	if (notification_bitmap)
+		outl(VMCI_CONTROL_RESET, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+datagram_disallow:
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+unlock:
+	mutex_unlock(&vmci_dev.lock);
+release:
+	if (notification_bitmap) {
+		vfree(notification_bitmap);
+		notification_bitmap = NULL;
+	}
+	release_region(ioaddr, ioaddr_size);
+pci_disable:
+	pci_disable_device(pdev);
+	return -EBUSY;
+}
+
+static void __devexit drv_remove_device(struct pci_dev *pdev)
+{
+	struct vmci_device *dev = pci_get_drvdata(pdev);
+
+	pr_info("Removing device");
+	atomic_dec(&guestDeviceActive);
+	vmci_qp_guest_endpoints_exit();
+	vmci_util_exit();
+	mutex_lock(&dev->lock);
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+	pr_info("Resetting vmci device");
+	outl(VMCI_CONTROL_RESET, vmci_dev.ioaddr + VMCI_CONTROL_ADDR);
+
+	/*
+	 * Free IRQ and then disable MSI/MSI-X as appropriate.  For
+	 * MSI-X, we might have multiple vectors, each with their own
+	 * IRQ, which we must free too.
+	 */
+	free_irq(dev->irq, dev);
+	if (dev->intr_type == VMCI_INTR_TYPE_MSIX) {
+		if (dev->exclusive_vectors)
+			free_irq(dev->msix_entries[1].vector, dev);
+
+		pci_disable_msix(pdev);
+	} else if (dev->intr_type == VMCI_INTR_TYPE_MSI) {
+		pci_disable_msi(pdev);
+	}
+	dev->exclusive_vectors = false;
+	dev->intr_type = VMCI_INTR_TYPE_INTX;
+
+	release_region(dev->ioaddr, dev->ioaddr_size);
+	dev->enabled = false;
+	if (notification_bitmap) {
+		/*
+		 * The device reset above cleared the bitmap state of the
+		 * device, so we can safely free it here.
+		 */
+
+		vfree(notification_bitmap);
+		notification_bitmap = NULL;
+	}
+
+	pr_info("Unregistered device.");
+	mutex_unlock(&dev->lock);
+
+	pci_disable_device(pdev);
+}
+
+static struct pci_driver vmci_driver = {
+	.name = MODULE_NAME,
+	.id_table = vmci_ids,
+	.probe = drv_probe_device,
+	.remove = __devexit_p(drv_remove_device),
+};
+
+/*
+ * Initializes the VMCI PCI device. The initialization might fail
+ * if there is no VMCI PCI device.
+ */
+static int __init dev_guest_init(void)
+{
+	int retval;
+
+	/* Initialize guest device data. */
+	mutex_init(&vmci_dev.lock);
+	vmci_dev.intr_type = VMCI_INTR_TYPE_INTX;
+	vmci_dev.exclusive_vectors = false;
+	spin_lock_init(&vmci_dev.dev_spinlock);
+	vmci_dev.enabled = false;
+	atomic_set(&vmci_dev.datagrams_allowed, 0);
+	atomic_set(&guestDeviceActive, 0);
+
+	data_buffer = vmalloc(data_buffer_size);
+	if (!data_buffer)
+		return -ENOMEM;
+
+	/* This should be last to make sure we are done initializing. */
+	retval = pci_register_driver(&vmci_driver);
+	if (retval < 0) {
+		vfree(data_buffer);
+		data_buffer = NULL;
+		return retval;
+	}
+
+	return 0;
+}
+
+static const struct file_operations vmuser_fops = {
+	.owner = THIS_MODULE,
+	.open = drv_driver_open,
+	.release = drv_driver_close,
+	.poll = drv_driver_poll,
+	.unlocked_ioctl = drv_driver_unlocked_ioctl,
+	.compat_ioctl = drv_driver_unlocked_ioctl,
+};
+
+/*
+ * VM to hypervisor call mechanism. We use the standard VMware naming
+ * convention since shared code is calling this function as well.
+ */
+int vmci_send_dg(struct vmci_dg *dg)
+{
+	unsigned long flags;
+	int result;
+
+	/* Check args. */
+	if (dg == NULL)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (atomic_read(&vmci_dev.datagrams_allowed) == 0)
+		return VMCI_ERROR_UNAVAILABLE;
+
+	/*
+	 * We need to acquire the device spinlock because the datagram
+	 * data may be spread over multiple pages and the monitor may
+	 * interleave device user rpc calls from multiple VCPUs;
+	 * holding the spinlock precludes that possibility. Interrupts
+	 * are disabled so that an incoming datagram during the
+	 * "rep outsb" cannot land us back in this function.
+	 */
+	spin_lock_irqsave(&vmci_dev.dev_spinlock, flags);
+
+	__asm__ __volatile__("cld\n\t" \
+			     "rep outsb\n\t"
+			     : /* No output. */
+			     : "d"(vmci_dev.ioaddr + VMCI_DATA_OUT_ADDR),
+			       "c"(VMCI_DG_SIZE(dg)), "S"(dg)
+		);
+
+	result = inl(vmci_dev.ioaddr + VMCI_RESULT_LOW_ADDR);
+	spin_unlock_irqrestore(&vmci_dev.dev_spinlock, flags);
+
+	return result;
+}
+
+bool vmci_guest_code_active(void)
+{
+	return guestDeviceInit && atomic_read(&guestDeviceActive) > 0;
+}
+
+/*
+ * Determines whether the VMCI host personality is
+ * available. Since the core functionality of the host driver is
+ * always present, all guests could possibly use the host
+ * personality. However, to minimize the deviation from the
+ * pre-unified driver state of affairs, we only consider the host
+ * device active if there is no active guest device or if there
+ * are VMX'en with active VMCI contexts using the host device.
+ */
+bool vmci_host_code_active(void)
+{
+	return hostDeviceInit &&
+		(!vmci_guest_code_active() ||
+		 atomic_read(&linuxState.activeContexts) > 0);
+}
+
+static int __init drv_init(void)
+{
+	int retval;
+
+	retval = drv_shared_init();
+	if (retval != VMCI_SUCCESS) {
+		pr_warn("Failed to initialize common " \
+			"components (err=%d).", retval);
+		return -ENOMEM;
+	}
+
+	if (!vmci_disable_guest) {
+		retval = dev_guest_init();
+		if (retval != 0) {
+			pr_warn("Failed to initialize guest " \
+				"personality (err=%d).", retval);
+		} else {
+			const char *state = vmci_guest_code_active() ?
+				"active" : "inactive";
+			guestDeviceInit = true;
+			pr_info("Guest personality initialized and is %s",
+				state);
+		}
+	}
+
+	if (!vmci_disable_host) {
+		retval = drv_host_init();
+		if (retval != 0) {
+			pr_warn("Unable to initialize host " \
+				"personality (err=%d).", retval);
+		} else {
+			hostDeviceInit = true;
+			pr_info("Initialized host personality");
+		}
+	}
+
+	if (!guestDeviceInit && !hostDeviceInit) {
+		drv_shared_cleanup();
+		return -ENODEV;
+	}
+
+	pr_info("Module is initialized");
+	return 0;
+}
+
+static void __exit drv_exit(void)
+{
+	if (guestDeviceInit) {
+		pci_unregister_driver(&vmci_driver);
+		vfree(data_buffer);
+		guestDeviceInit = false;
+	}
+
+	if (hostDeviceInit) {
+		drv_host_cleanup();
+
+		if (misc_deregister(&linuxState.misc))
+			pr_warn("Error unregistering");
+		else
+			pr_info("Module unloaded");
+
+		hostDeviceInit = false;
+	}
+
+	drv_shared_cleanup();
+}
+
+/**
+ * VMCI_DeviceGet() - Checks for VMCI device.
+ * @apiVersion:	The API version to use
+ * @deviceShutdownCB:	Callback used when shutdown happens (Unused)
+ * @userData:	Data to be passed to the callback (Unused)
+ * @deviceRegistration:	A device registration handle. (Unused)
+ *
+ * Verifies that a valid VMCI device is present, and indicates
+ * the caller's intention to use the device until it calls
+ * VMCI_DeviceRelease().
+ */
+bool VMCI_DeviceGet(uint32_t *apiVersion,
+		    VMCI_DeviceShutdownFn *deviceShutdownCB,
+		    void *userData,
+		    void **deviceRegistration)
+{
+	if (*apiVersion > VMCI_KERNEL_API_VERSION) {
+		*apiVersion = VMCI_KERNEL_API_VERSION;
+		return false;
+	}
+
+	return drv_device_enabled();
+}
+EXPORT_SYMBOL(VMCI_DeviceGet);
+
+/**
+ * VMCI_DeviceRelease() - Releases the device (Unused)
+ * @deviceRegistration:	The device registration handle.
+ *
+ * Indicates that the caller is done using the VMCI device.  This
+ * function is a noop on Linux systems.
+ */
+void VMCI_DeviceRelease(void *deviceRegistration)
+{
+}
+EXPORT_SYMBOL(VMCI_DeviceRelease);
+
+/**
+ * VMCI_GetContextID() - Gets the current context ID.
+ *
+ * Returns the current context ID.  If the guest personality is active,
+ * this is the context ID assigned to the VM by the hypervisor; otherwise
+ * the host context ID is returned.
+ */
+uint32_t VMCI_GetContextID(void)
+{
+	if (vmci_guest_code_active()) {
+		if (atomic_read(&vmContextID) == VMCI_INVALID_ID) {
+			uint32_t result;
+			struct vmci_dg getCidMsg;
+			getCidMsg.dst =
+				vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+						 VMCI_GET_CONTEXT_ID);
+			getCidMsg.src = VMCI_ANON_SRC_HANDLE;
+			getCidMsg.payloadSize = 0;
+			result = vmci_send_dg(&getCidMsg);
+			atomic_set(&vmContextID, result);
+		}
+		return atomic_read(&vmContextID);
+	} else if (vmci_host_code_active()) {
+		return VMCI_HOST_CONTEXT_ID;
+	}
+	return VMCI_INVALID_ID;
+}
+EXPORT_SYMBOL(VMCI_GetContextID);
+
+/**
+ * VMCI_Version() - Returns the version of the driver.
+ *
+ * Returns the version of the VMCI driver.
+ */
+uint32_t VMCI_Version(void)
+{
+	return VMCI_VERSION;
+}
+EXPORT_SYMBOL(VMCI_Version);
+
+module_init(drv_init);
+module_exit(drv_exit);
+MODULE_DEVICE_TABLE(pci, vmci_ids);
+
+MODULE_AUTHOR("VMware, Inc.");
+MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface.");
+MODULE_VERSION(VMCI_DRIVER_VERSION_STRING);
+MODULE_LICENSE("GPL v2");
+
+module_param_named(disable_host, vmci_disable_host, bool, 0);
+MODULE_PARM_DESC(disable_host, "Disable driver host personality - (default=0)");
+
+module_param_named(disable_guest, vmci_disable_guest, bool, 0);
+MODULE_PARM_DESC(disable_guest,
+		 "Disable driver guest personality - (default=0)");
+
+module_param_named(disable_msi, vmci_disable_msi, bool, 0);
+MODULE_PARM_DESC(disable_msi, "Disable MSI use in driver - (default=0)");
+
+module_param_named(disable_msix, vmci_disable_msix, bool, 0);
+MODULE_PARM_DESC(disable_msix, "Disable MSI-X use in driver - (default=0)");
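+
+/*
+ * Illustrative usage of the parameters above (assuming the module is
+ * built as vmw_vmci; this example is not part of the driver itself):
+ *
+ *   modprobe vmw_vmci disable_guest=1   - host personality only
+ *   modprobe vmw_vmci disable_msix=1    - fall back to MSI/INTx
+ */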
diff --git a/drivers/misc/vmw_vmci/vmci_driver.h b/drivers/misc/vmw_vmci/vmci_driver.h
new file mode 100644
index 0000000..1c306c4
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_driver.h
@@ -0,0 +1,52 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_DRIVER_H_
+#define _VMCI_DRIVER_H_
+
+#include <linux/vmw_vmci_defs.h>
+#include <linux/wait.h>
+
+#include "vmci_context.h"
+#include "vmci_queue_pair.h"
+
+enum vmci_obj_type {
+	VMCIOBJ_VMX_VM = 10,
+	VMCIOBJ_CONTEXT,
+	VMCIOBJ_SOCKET,
+	VMCIOBJ_NOT_SET,
+};
+
+/* For storing VMCI structures in file handles. */
+struct vmci_obj {
+	void *ptr;
+	enum vmci_obj_type type;
+};
+
+typedef void (VMCIWorkFn) (void *data);
+bool vmci_host_code_active(void);
+bool vmci_guest_code_active(void);
+bool vmci_drv_wait_on_event_intr(wait_queue_head_t *event,
+				 VMCIEventReleaseCB releaseCB,
+				 void *clientData);
+int vmci_drv_schedule_delayed_work(VMCIWorkFn *workFn, void *data);
+uint32_t VMCI_GetContextID(void);
+int vmci_send_dg(struct vmci_dg *dg);
+
+#endif /* _VMCI_DRIVER_H_ */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 05/11] Apply VMCI event code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Code that manages event handlers and handles callbacks when
specific events fire.
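
As an illustration of the exported interface, a consumer might use it
roughly as follows (sketch only; the VMCI_EVENT_CTX_ID_UPDATE constant,
VMCI_FLAG_EVENT_NONE and the callback signature are assumed to match the
series' vmw_vmci_defs.h, everything else is hypothetical):

    static void ctx_update_cb(uint32_t subID, struct vmci_event_data *ed,
                              void *clientData)
    {
            /* Runs in BH context unless VMCI_FLAG_EVENT_DELAYED_CB is set. */
            pr_info("Context ID update event received.");
    }

    uint32_t subID;
    int rv;

    rv = VMCIEvent_Subscribe(VMCI_EVENT_CTX_ID_UPDATE, VMCI_FLAG_EVENT_NONE,
                             ctx_update_cb, NULL, &subID);
    if (rv < VMCI_SUCCESS)
            return rv;
    ...
    VMCIEvent_Unsubscribe(subID);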

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_event.c |  451 ++++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_event.h |   29 +++
 2 files changed, 480 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_event.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_event.h

diff --git a/drivers/misc/vmw_vmci/vmci_event.c b/drivers/misc/vmw_vmci/vmci_event.c
new file mode 100644
index 0000000..bc4e976
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_event.c
@@ -0,0 +1,451 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_event.h"
+#include "vmci_driver.h"
+
+#define EVENT_MAGIC 0xEABE0000
+#define VMCI_EVENT_MAX_ATTEMPTS 10
+
+struct vmci_subscription {
+	uint32_t id;
+	int refCount;
+	bool runDelayed;
+	wait_queue_head_t destroyEvent;
+	uint32_t event;
+	VMCI_EventCB callback;
+	void *callbackData;
+	struct list_head subscriberListItem;
+};
+
+static struct list_head subscriberArray[VMCI_EVENT_MAX];
+static spinlock_t subscriberLock;
+
+struct delayed_event_info {
+	struct vmci_subscription *sub;
+	uint8_t eventPayload[sizeof(struct vmci_event_data_max)];
+};
+
+struct event_ref {
+	struct vmci_subscription *sub;
+	struct list_head listItem;
+};
+
+int __init vmci_event_init(void)
+{
+	int i;
+
+	for (i = 0; i < VMCI_EVENT_MAX; i++)
+		INIT_LIST_HEAD(&subscriberArray[i]);
+
+	spin_lock_init(&subscriberLock);
+	return VMCI_SUCCESS;
+}
+
+void vmci_event_exit(void)
+{
+	int e;
+
+	/* We free all memory at exit. */
+	for (e = 0; e < VMCI_EVENT_MAX; e++) {
+		struct vmci_subscription *cur, *p2;
+		list_for_each_entry_safe(cur, p2, &subscriberArray[e],
+					 subscriberListItem) {
+
+			/*
+			 * We should never get here because all events
+			 * should have been unregistered before we try
+			 * to unload the driver module.  Also, delayed
+			 * callbacks could still be firing so this
+			 * cleanup would not be safe.  Still it is
+			 * better to free the memory than not ... so
+			 * we leave this code in just in case....
+			 */
+			pr_warn("Unexpected free events occurring.");
+			kfree(cur);
+		}
+	}
+
+}
+
+/*
+ * Gets a reference to the given VMCISubscription.
+ */
+static void event_get(struct vmci_subscription *entry)
+{
+	ASSERT(entry);
+
+	entry->refCount++;
+}
+
+/*
+ * Releases the given VMCISubscription.
+ * Fires the destroy event if the reference count has gone to zero.
+ */
+static void event_release(struct vmci_subscription *entry)
+{
+	ASSERT(entry);
+	ASSERT(entry->refCount > 0);
+
+	entry->refCount--;
+	if (entry->refCount == 0)
+		wake_up(&entry->destroyEvent);
+}
+
+/*
+ * Callback to release the event entry reference. It is called by the
+ * VMCI_WaitOnEvent function before it blocks.
+ */
+static int event_release_cb(void *clientData)
+{
+	struct vmci_subscription *sub = (struct vmci_subscription *)clientData;
+
+	ASSERT(sub);
+
+	spin_lock_bh(&subscriberLock);
+	event_release(sub);
+	spin_unlock_bh(&subscriberLock);
+
+	return 0;
+}
+
+/*
+ * Find entry. Assumes lock is held.
+ * Increments the VMCISubscription refcount if an entry is found.
+ */
+static struct vmci_subscription *event_find(uint32_t subID)
+{
+	int e;
+
+	for (e = 0; e < VMCI_EVENT_MAX; e++) {
+		struct vmci_subscription *cur;
+		list_for_each_entry(cur, &subscriberArray[e],
+				    subscriberListItem) {
+			if (cur->id == subID) {
+				event_get(cur);
+				return cur;
+			}
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Calls the specified callback in a delayed context.
+ */
+static void event_delayed_dispatch_cb(void *data)
+{
+	struct delayed_event_info *eventInfo;
+	struct vmci_subscription *sub;
+	struct vmci_event_data *ed;
+
+	eventInfo = data;
+
+	ASSERT(eventInfo);
+	ASSERT(eventInfo->sub);
+
+	sub = eventInfo->sub;
+	ed = (struct vmci_event_data *)eventInfo->eventPayload;
+
+	sub->callback(sub->id, ed, sub->callbackData);
+
+	spin_lock_bh(&subscriberLock);
+	event_release(sub);
+	spin_unlock_bh(&subscriberLock);
+
+	kfree(eventInfo);
+}
+
+/*
+ * Actually delivers the events to the subscribers.
+ * The callback function for each subscriber is invoked.
+ */
+static int event_deliver(struct vmci_event_msg *eventMsg)
+{
+	int err = VMCI_SUCCESS;
+	struct vmci_subscription *cur;
+	struct list_head noDelayList;
+	struct vmci_event_data *ed;
+	struct event_ref *eventRef, *p2;
+
+	ASSERT(eventMsg);
+
+	INIT_LIST_HEAD(&noDelayList);
+
+	spin_lock_bh(&subscriberLock);
+	list_for_each_entry(cur, &subscriberArray[eventMsg->eventData.event],
+			    subscriberListItem) {
+		ASSERT(cur && cur->event == eventMsg->eventData.event);
+
+		if (cur->runDelayed) {
+			struct delayed_event_info *eventInfo;
+			eventInfo = kzalloc(sizeof *eventInfo, GFP_ATOMIC);
+			if (!eventInfo) {
+				err = VMCI_ERROR_NO_MEM;
+				goto out;
+			}
+
+			event_get(cur);
+			memcpy(eventInfo->eventPayload,
+			       VMCI_DG_PAYLOAD(eventMsg),
+			       (size_t) eventMsg->hdr.payloadSize);
+			eventInfo->sub = cur;
+			err = vmci_drv_schedule_delayed_work(
+				event_delayed_dispatch_cb,
+				eventInfo);
+			if (err != VMCI_SUCCESS) {
+				event_release(cur);
+				kfree(eventInfo);
+				goto out;
+			}
+
+		} else {
+			/*
+			 * To avoid a possible lock rank violation while
+			 * holding subscriberLock, we construct a local
+			 * list of subscribers and release subscriberLock
+			 * before invoking the callbacks. This is similar
+			 * to delayed callbacks, but here the callbacks
+			 * are invoked right away.
+			 */
+			eventRef = kzalloc(sizeof *eventRef, GFP_ATOMIC);
+			if (!eventRef) {
+				err = VMCI_ERROR_NO_MEM;
+				goto out;
+			}
+
+			event_get(cur);
+			eventRef->sub = cur;
+			INIT_LIST_HEAD(&eventRef->listItem);
+			list_add(&eventRef->listItem, &noDelayList);
+		}
+	}
+
+out:
+	spin_unlock_bh(&subscriberLock);
+
+	list_for_each_entry_safe(eventRef, p2, &noDelayList, listItem) {
+		uint8_t eventPayload[sizeof(struct vmci_event_data_max)]
+			= { 0 };
+
+		/*
+		 * We set event data before each callback to ensure
+		 * isolation.
+		 */
+		memcpy(eventPayload, VMCI_DG_PAYLOAD(eventMsg),
+		       (size_t) eventMsg->hdr.payloadSize);
+		ed = (struct vmci_event_data *)eventPayload;
+		cur = eventRef->sub;
+		cur->callback(cur->id, ed, cur->callbackData);
+
+		spin_lock_bh(&subscriberLock);
+		event_release(cur);
+		spin_unlock_bh(&subscriberLock);
+		kfree(eventRef);
+	}
+
+	return err;
+}
+
+/*
+ * Dispatcher for the VMCI_EVENT_RECEIVE datagrams. Calls all
+ * subscribers for given event.
+ */
+int vmci_event_dispatch(struct vmci_dg *msg)
+{
+	struct vmci_event_msg *eventMsg = (struct vmci_event_msg *)msg;
+
+	ASSERT(msg &&
+	       msg->src.context == VMCI_HYPERVISOR_CONTEXT_ID &&
+	       msg->dst.resource == VMCI_EVENT_HANDLER);
+
+	if (msg->payloadSize < sizeof(uint32_t) ||
+	    msg->payloadSize > sizeof(struct vmci_event_data_max))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (!VMCI_EVENT_VALID(eventMsg->eventData.event))
+		return VMCI_ERROR_EVENT_UNKNOWN;
+
+	event_deliver(eventMsg);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Initialize and add subscription to subscriber list.
+ */
+static int event_register_subscription(struct vmci_subscription *sub,
+				       uint32_t event,
+				       uint32_t flags,
+				       VMCI_EventCB callback,
+				       void *callbackData)
+{
+	static uint32_t subscriptionID;
+	uint32_t attempts = 0;
+	int result;
+	bool success;
+
+	ASSERT(sub);
+
+	if (!VMCI_EVENT_VALID(event) || callback == NULL) {
+		pr_devel("Failed to subscribe to event (type=%d) " \
+			 "(callback=%p) (data=%p).", event,
+			 callback, callbackData);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	sub->runDelayed = !!(flags & VMCI_FLAG_EVENT_DELAYED_CB);
+	sub->refCount = 1;
+	sub->event = event;
+	sub->callback = callback;
+	sub->callbackData = callbackData;
+	INIT_LIST_HEAD(&sub->subscriberListItem);
+
+	spin_lock_bh(&subscriberLock);
+
+	/* Creation of a new event is always allowed. */
+	for (success = false, attempts = 0;
+	     success == false && attempts < VMCI_EVENT_MAX_ATTEMPTS;
+	     attempts++) {
+		struct vmci_subscription *existingSub = NULL;
+
+		/*
+		 * We try to get an id a couple of times before
+		 * claiming we are out of resources.
+		 */
+		sub->id = ++subscriptionID;
+
+		/* Test for duplicate id. */
+		existingSub = event_find(sub->id);
+		if (existingSub == NULL)
+			success = true;
+		else
+			event_release(existingSub);
+	}
+
+	if (success) {
+		init_waitqueue_head(&sub->destroyEvent);
+		list_add(&sub->subscriberListItem, &subscriberArray[event]);
+		result = VMCI_SUCCESS;
+	} else {
+		result = VMCI_ERROR_NO_RESOURCES;
+	}
+
+	spin_unlock_bh(&subscriberLock);
+	return result;
+}
+
+/*
+ * Remove subscription from subscriber list.
+ */
+static struct vmci_subscription *event_unregister_subscription(uint32_t subID)
+{
+	struct vmci_subscription *s;
+
+	spin_lock_bh(&subscriberLock);
+	s = event_find(subID);
+	if (s != NULL) {
+		event_release(s);
+		list_del(&s->subscriberListItem);
+	}
+	spin_unlock_bh(&subscriberLock);
+
+	if (s != NULL)
+		vmci_drv_wait_on_event_intr(&s->destroyEvent,
+					    event_release_cb, s);
+
+	return s;
+}
+
+/**
+ * VMCIEvent_Subscribe() - Subscribe to a given event.
+ * @event:	The event to subscribe to.
+ * @flags:	Event flags.  VMCI_FLAG_EVENT_*
+ * @callback:	The callback to invoke upon the event.
+ * @callbackData:	Data to pass to the callback.
+ * @subscriptionID:	ID used to track subscription.  Used with
+ *		VMCIEvent_Unsubscribe()
+ *
+ * Subscribes to the provided event.  The callback specified can be fired
+ * in different contexts depending on what flag is specified while
+ * registering. If flags contains VMCI_FLAG_EVENT_NONE then the
+ * callback is fired with the subscriber lock held (and BH context
+ * on the guest). If flags contains VMCI_FLAG_EVENT_DELAYED_CB then
+ * the callback is fired with no locks held in thread context.
+ * This is useful because other VMCIEvent functions can be called,
+ * but it also increases the chances that an event will be dropped.
+ */
+int VMCIEvent_Subscribe(uint32_t event,
+			uint32_t flags,
+			VMCI_EventCB callback,
+			void *callbackData,
+			uint32_t *subscriptionID)
+{
+	int retval;
+	struct vmci_subscription *s = NULL;
+
+	if (subscriptionID == NULL) {
+		pr_devel("Invalid subscription (NULL).");
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	s = kmalloc(sizeof *s, GFP_KERNEL);
+	if (s == NULL)
+		return VMCI_ERROR_NO_MEM;
+
+	retval = event_register_subscription(s, event, flags,
+					     callback, callbackData);
+	if (retval < VMCI_SUCCESS) {
+		kfree(s);
+		return retval;
+	}
+
+	*subscriptionID = s->id;
+	return retval;
+}
+EXPORT_SYMBOL(VMCIEvent_Subscribe);
+
+/**
+ * VMCIEvent_Unsubscribe() - Unsubscribe to an event.
+ * @subID:	A subscription ID as provided by VMCIEvent_Subscribe()
+ *
+ * Unsubscribes from the given event. Removes the subscription from the
+ * list and frees it.
+ */
+int VMCIEvent_Unsubscribe(uint32_t subID)
+{
+	struct vmci_subscription *s;
+
+	/*
+	 * Return subscription. At this point we know no one else is accessing
+	 * the subscription so we can free it.
+	 */
+	s = event_unregister_subscription(subID);
+	if (s == NULL)
+		return VMCI_ERROR_NOT_FOUND;
+
+	kfree(s);
+
+	return VMCI_SUCCESS;
+}
+EXPORT_SYMBOL(VMCIEvent_Unsubscribe);
diff --git a/drivers/misc/vmw_vmci/vmci_event.h b/drivers/misc/vmw_vmci/vmci_event.h
new file mode 100644
index 0000000..83574c6
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_event.h
@@ -0,0 +1,29 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef __VMCI_EVENT_H__
+#define __VMCI_EVENT_H__
+
+#include <linux/vmw_vmci_api.h>
+
+int vmci_event_init(void);
+void vmci_event_exit(void);
+int vmci_event_dispatch(struct vmci_dg *msg);
+
+#endif /*__VMCI_EVENT_H__ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 06/11] Apply dynamic array code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

This code adds support for dynamic arrays that will grow if they
need to.
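
For reference, the intended use by the context and queue pair code looks
roughly like this (illustrative sketch; cid and resource stand in for real
values supplied by the caller):

    struct vmci_handle_arr *arr;
    struct vmci_handle h = vmci_make_handle(cid, resource);

    arr = vmci_handle_arr_create(0);    /* 0 selects the default capacity */
    if (!arr)
            return VMCI_ERROR_NO_MEM;

    /* Appending may grow the array, hence the pointer-to-pointer argument. */
    vmci_handle_arr_append_entry(&arr, h);

    if (vmci_handle_arr_has_entry(arr, h))
            vmci_handle_arr_remove_entry(arr, h);

    vmci_handle_arr_destroy(arr);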

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_handle_array.c |  174 +++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_handle_array.h |   50 ++++++++
 2 files changed, 224 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_handle_array.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_handle_array.h

diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c
new file mode 100644
index 0000000..e23e82b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_handle_array.c
@@ -0,0 +1,174 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/slab.h>
+
+#include "vmci_common_int.h"
+#include "vmci_handle_array.h"
+
+static unsigned handle_arr_calc_size(uint32_t c)
+{
+	/*
+	 * Decrement c because vmci_handle_arr already includes
+	 * one vmci_handle entry.
+	 */
+	return sizeof(struct vmci_handle_arr) +
+		--c  * sizeof(struct vmci_handle);
+}
+
+struct vmci_handle_arr *vmci_handle_arr_create(uint32_t capacity)
+{
+	struct vmci_handle_arr *array;
+	uint32_t arr_size;
+
+	if (capacity == 0)
+		capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE;
+
+	arr_size = handle_arr_calc_size(capacity);
+	array = kmalloc(arr_size, GFP_ATOMIC);
+	if (!array)
+		return NULL;
+
+	array->capacity = capacity;
+	array->size = 0;
+
+	return array;
+}
+
+void vmci_handle_arr_destroy(struct vmci_handle_arr *array)
+{
+	kfree(array);
+}
+
+void vmci_handle_arr_append_entry(struct vmci_handle_arr **arrayPtr,
+				  struct vmci_handle handle)
+{
+	struct vmci_handle_arr *array;
+
+	ASSERT(arrayPtr && *arrayPtr);
+	array = *arrayPtr;
+
+	if (unlikely(array->size >= array->capacity)) {
+		/* reallocate. */
+		struct vmci_handle_arr *newArray;
+		const uint32_t arraySize =
+			handle_arr_calc_size(array->capacity *
+					     VMCI_ARR_CAP_MULT);
+
+		newArray = kmalloc(arraySize, GFP_ATOMIC);
+		if (!newArray)
+			return;
+
+		/* Copy only the old array; arraySize covers the larger one. */
+		memcpy(newArray, array,
+		       handle_arr_calc_size(array->capacity));
+		newArray->capacity *= VMCI_ARR_CAP_MULT;
+		kfree(array);
+		*arrayPtr = newArray;
+		array = newArray;
+	}
+
+	array->entries[array->size] = handle;
+	array->size++;
+}
+
+/*
+ * Returns the removed handle, or VMCI_INVALID_HANDLE if not found.
+ */
+struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
+						struct vmci_handle entryHandle)
+{
+	uint32_t i;
+	struct vmci_handle handle = VMCI_INVALID_HANDLE;
+
+	ASSERT(array);
+	for (i = 0; i < array->size; i++) {
+		if (VMCI_HANDLE_EQUAL(array->entries[i], entryHandle)) {
+			handle = array->entries[i];
+			array->size--;
+			array->entries[i] = array->entries[array->size];
+			array->entries[array->size] = VMCI_INVALID_HANDLE;
+			break;
+		}
+	}
+
+	return handle;
+}
+
+/*
+ * Returns the removed handle, or VMCI_INVALID_HANDLE if the array was empty.
+ */
+struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array)
+{
+	struct vmci_handle handle = VMCI_INVALID_HANDLE;
+
+	if (array->size) {
+		array->size--;
+		handle = array->entries[array->size];
+		array->entries[array->size] = VMCI_INVALID_HANDLE;
+	}
+
+	return handle;
+}
+
+/*
+ * Returns the handle at the given index, or VMCI_INVALID_HANDLE if invalid.
+ */
+struct vmci_handle
+vmci_handle_arr_get_entry(const struct vmci_handle_arr *array,
+			  uint32_t index)
+{
+	ASSERT(array);
+
+	if (unlikely(index >= array->size))
+		return VMCI_INVALID_HANDLE;
+
+	return array->entries[index];
+}
+
+uint32_t vmci_handle_arr_get_size(const struct vmci_handle_arr *array)
+{
+	ASSERT(array);
+	return array->size;
+}
+
+bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
+			       struct vmci_handle entryHandle)
+{
+	uint32_t i;
+
+	ASSERT(array);
+	for (i = 0; i < array->size; i++)
+		if (VMCI_HANDLE_EQUAL(array->entries[i], entryHandle))
+			return true;
+
+	return false;
+}
+
+/*
+ * Returns NULL if the array is empty; otherwise, a pointer to the
+ * internal array of VMCI handles.
+ */
+struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array)
+{
+	ASSERT(array);
+
+	if (array->size)
+		return array->entries;
+
+	return NULL;
+}
diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h
new file mode 100644
index 0000000..966a6fd
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_handle_array.h
@@ -0,0 +1,50 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_HANDLE_ARRAY_H_
+#define _VMCI_HANDLE_ARRAY_H_
+
+#include <linux/slab.h>
+#include <linux/vmw_vmci_defs.h>
+
+#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4
+#define VMCI_ARR_CAP_MULT 2	/* Array capacity multiplier */
+
+struct vmci_handle_arr {
+	uint32_t capacity;
+	uint32_t size;
+	struct vmci_handle entries[1];
+};
+
+struct vmci_handle_arr *vmci_handle_arr_create(uint32_t capacity);
+void vmci_handle_arr_destroy(struct vmci_handle_arr *array);
+void vmci_handle_arr_append_entry(struct vmci_handle_arr **arrayPtr,
+				  struct vmci_handle handle);
+struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
+						struct vmci_handle entryHandle);
+struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array);
+struct vmci_handle
+vmci_handle_arr_get_entry(const struct vmci_handle_arr *array,
+			  uint32_t index);
+uint32_t vmci_handle_arr_get_size(const struct vmci_handle_arr *array);
+bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
+			       struct vmci_handle entryHandle);
+struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array);
+
+#endif /* _VMCI_HANDLE_ARRAY_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread
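
A minimal usage sketch of the handle array API added by this patch; the
handle values and the calling context are illustrative, everything else
uses only what vmci_handle_array.h declares:

#include "vmci_handle_array.h"

static void handle_array_example(void)
{
	/* Illustrative handle; real callers receive handles from VMCI resources. */
	struct vmci_handle h = { .context = 1, .resource = 7 };
	struct vmci_handle_arr *arr;

	arr = vmci_handle_arr_create(0);	/* 0 selects the default capacity of 4 */
	if (!arr)
		return;

	/* Append may reallocate the array, hence the pointer-to-pointer. */
	vmci_handle_arr_append_entry(&arr, h);

	if (vmci_handle_arr_has_entry(arr, h))
		vmci_handle_arr_remove_entry(arr, h);

	vmci_handle_arr_destroy(arr);
}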


* [vmw_vmci 07/11] Apply VMCI hash table
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Implements a reference-counted hash table, keyed by VMCI handle, for
tracking VMCI resources. A usage sketch follows the patch.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_hash_table.c |  332 +++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_hash_table.h |   56 +++++
 2 files changed, 388 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_hash_table.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_hash_table.h

diff --git a/drivers/misc/vmw_vmci/vmci_hash_table.c b/drivers/misc/vmw_vmci/vmci_hash_table.c
new file mode 100644
index 0000000..a7423df
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_hash_table.c
@@ -0,0 +1,332 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_context.h"
+#include "vmci_common_int.h"
+#include "vmci_driver.h"
+#include "vmci_hash_table.h"
+
+#define VMCI_HANDLE_TO_CONTEXT_ID(_handle) ((_handle).context)
+#define VMCI_HANDLE_TO_RESOURCE_ID(_handle) ((_handle).resource)
+#define VMCI_HASHTABLE_HASH(_h, _sz)				\
+	vmci_hash_calc(VMCI_HANDLE_TO_RESOURCE_ID(_h), (_sz))
+
+struct vmci_hash_table *vmci_hash_create(int size)
+{
+	struct vmci_hash_table *table;
+
+	table = kmalloc(sizeof *table, GFP_KERNEL);
+	if (table == NULL)
+		return NULL;
+
+	table->entries = kcalloc(size, sizeof *table->entries, GFP_KERNEL);
+	if (table->entries == NULL) {
+		kfree(table);
+		return NULL;
+	}
+
+	table->size = size;
+	spin_lock_init(&table->lock);
+
+	return table;
+}
+
+/*
+ * This function should be called at module exit time.
+ * We rely on the module ref count to ensure that no one is accessing any
+ * hash table entries at this point in time. Hence we should be able to just
+ * remove all entries from the hash table.
+ */
+void vmci_hash_destroy(struct vmci_hash_table *table)
+{
+	ASSERT(table);
+
+	spin_lock_bh(&table->lock);
+	kfree(table->entries);
+	table->entries = NULL;
+	spin_unlock_bh(&table->lock);
+	kfree(table);
+}
+
+void vmci_hash_init_entry(struct vmci_hash_entry *entry,
+			  struct vmci_handle handle)
+{
+	ASSERT(entry);
+	entry->handle = handle;
+	entry->refCount = 0;
+}
+
+/*
+ * Helper for vmci_hash_exists; assumes the caller holds the table lock.
+ * Returns true if the handle is already in the hash table, false otherwise.
+ */
+static bool hash_exists_locked(struct vmci_hash_table *table,
+			       struct vmci_handle handle)
+{
+	struct vmci_hash_entry *entry;
+	int idx;
+
+	ASSERT(table);
+
+	idx = VMCI_HASHTABLE_HASH(handle, table->size);
+
+	for (entry = table->entries[idx]; entry; entry = entry->next) {
+		if (VMCI_HANDLE_TO_RESOURCE_ID(entry->handle) ==
+		    VMCI_HANDLE_TO_RESOURCE_ID(handle) &&
+		    ((VMCI_HANDLE_TO_CONTEXT_ID(entry->handle) ==
+		      VMCI_HANDLE_TO_CONTEXT_ID(handle)) ||
+		     (VMCI_INVALID_ID == VMCI_HANDLE_TO_CONTEXT_ID(handle))
+		     || (VMCI_INVALID_ID ==
+			 VMCI_HANDLE_TO_CONTEXT_ID(entry->handle)))) {
+			return true;
+		}
+	}
+
+	return false;
+}
+
+/*
+ * Assumes caller holds table lock.
+ */
+static int hash_unlink(struct vmci_hash_table *table,
+		       struct vmci_hash_entry *entry)
+{
+	int result;
+	struct vmci_hash_entry *prev, *cur;
+	const int idx = VMCI_HASHTABLE_HASH(entry->handle, table->size);
+
+	prev = NULL;
+	cur = table->entries[idx];
+	while (true) {
+		if (cur == NULL) {
+			result = VMCI_ERROR_NOT_FOUND;
+			break;
+		}
+		if (VMCI_HANDLE_EQUAL(cur->handle, entry->handle)) {
+			ASSERT(cur == entry);
+
+			/* Remove entry and break. */
+			if (prev)
+				prev->next = cur->next;
+			else
+				table->entries[idx] = cur->next;
+
+			cur->next = NULL;
+			result = VMCI_SUCCESS;
+			break;
+		}
+		prev = cur;
+		cur = cur->next;
+	}
+
+	return result;
+}
+
+int vmci_hash_add(struct vmci_hash_table *table,
+		  struct vmci_hash_entry *entry)
+{
+	int idx;
+
+	ASSERT(entry);
+	ASSERT(table);
+
+	spin_lock_bh(&table->lock);
+
+	/* Creation of a new hashtable entry is always allowed. */
+	if (hash_exists_locked(table, entry->handle)) {
+		pr_devel("Entry (handle=0x%x:0x%x) already exists.",
+			 entry->handle.context, entry->handle.resource);
+		spin_unlock_bh(&table->lock);
+		return VMCI_ERROR_DUPLICATE_ENTRY;
+	}
+
+	idx = VMCI_HASHTABLE_HASH(entry->handle, table->size);
+	ASSERT(idx < table->size);
+
+	/* New entry is added to top/front of hash bucket. */
+	entry->refCount++;
+	entry->next = table->entries[idx];
+	table->entries[idx] = entry;
+	spin_unlock_bh(&table->lock);
+
+	return VMCI_SUCCESS;
+}
+
+int vmci_hash_remove(struct vmci_hash_table *table,
+		     struct vmci_hash_entry *entry)
+{
+	int result;
+
+	ASSERT(table);
+	ASSERT(entry);
+
+	spin_lock_bh(&table->lock);
+
+	/* First unlink the entry. */
+	result = hash_unlink(table, entry);
+	if (result == VMCI_SUCCESS) {
+		/* Decrement refcount and check if this is last reference. */
+		entry->refCount--;
+		if (entry->refCount == 0)
+			result = VMCI_SUCCESS_ENTRY_DEAD;
+	}
+
+	spin_unlock_bh(&table->lock);
+
+	return result;
+}
+
+/*
+ * Looks up an entry in the hash table; assumes the caller holds the
+ * table lock.  If the element is found, its reference count is
+ * incremented and a pointer to it is returned; otherwise NULL.
+ */
+static struct vmci_hash_entry *hash_get_locked(struct vmci_hash_table *table,
+					       struct vmci_handle handle)
+{
+	struct vmci_hash_entry *cur = NULL;
+	int idx;
+
+	ASSERT(!VMCI_HANDLE_EQUAL(handle, VMCI_INVALID_HANDLE));
+	ASSERT(table);
+
+	idx = VMCI_HASHTABLE_HASH(handle, table->size);
+
+	for (cur = table->entries[idx]; cur != NULL; cur = cur->next) {
+		if (VMCI_HANDLE_TO_RESOURCE_ID(cur->handle) ==
+		    VMCI_HANDLE_TO_RESOURCE_ID(handle) &&
+		    ((VMCI_HANDLE_TO_CONTEXT_ID(cur->handle) ==
+		      VMCI_HANDLE_TO_CONTEXT_ID(handle)) ||
+		     (VMCI_INVALID_ID ==
+		      VMCI_HANDLE_TO_CONTEXT_ID(cur->handle)))) {
+			cur->refCount++;
+			break;
+		}
+	}
+
+	return cur;
+}
+
+struct vmci_hash_entry *vmci_hash_get(struct vmci_hash_table *table,
+				      struct vmci_handle handle)
+{
+	struct vmci_hash_entry *entry;
+
+	if (VMCI_HANDLE_EQUAL(handle, VMCI_INVALID_HANDLE))
+		return NULL;
+
+	ASSERT(table);
+
+	spin_lock_bh(&table->lock);
+	entry = hash_get_locked(table, handle);
+	spin_unlock_bh(&table->lock);
+
+	return entry;
+}
+
+/*
+ * Hold the given entry.  This will increment the entry's reference count.
+ * This is like a GetEntry() but without having to lookup the entry by
+ * handle.
+ */
+void vmci_hash_hold(struct vmci_hash_table *table,
+		    struct vmci_hash_entry *entry)
+{
+	ASSERT(table);
+	ASSERT(entry);
+
+	spin_lock_bh(&table->lock);
+	entry->refCount++;
+	spin_unlock_bh(&table->lock);
+}
+
+/*
+ * Releases an element previously obtained with hash_get_locked.
+ * If the entry is removed from the hash table, VMCI_SUCCESS_ENTRY_DEAD
+ * is returned. Otherwise, VMCI_SUCCESS is returned.
+ */
+static int hash_release_locked(struct vmci_hash_table *table,
+			       struct vmci_hash_entry *entry)
+{
+	int result = VMCI_SUCCESS;
+
+	ASSERT(table);
+	ASSERT(entry);
+
+	entry->refCount--;
+	/* Check if this is last reference and report if so. */
+	if (entry->refCount == 0) {
+		/*
+		 * Remove entry from hash table if not already
+		 * removed. This could have happened already because
+		 * vmci_hash_remove was called to unlink it. We ignore
+		 * if it is not found. Datagram handles will often
+		 * have RemoveEntry called, whereas SharedMemory
+		 * regions rely on ReleaseEntry to unlink the entry,
+		 * since the creator does not call RemoveEntry when it
+		 * detaches.
+		 */
+		hash_unlink(table, entry);
+		result = VMCI_SUCCESS_ENTRY_DEAD;
+	}
+
+	return result;
+}
+
+int vmci_hash_release(struct vmci_hash_table *table,
+		      struct vmci_hash_entry *entry)
+{
+	int result;
+
+	spin_lock_bh(&table->lock);
+	result = hash_release_locked(table, entry);
+	spin_unlock_bh(&table->lock);
+
+	return result;
+}
+
+bool vmci_hash_exists(struct vmci_hash_table *table,
+		      struct vmci_handle handle)
+{
+	bool exists;
+
+	spin_lock_bh(&table->lock);
+	exists = hash_exists_locked(table, handle);
+	spin_unlock_bh(&table->lock);
+
+	return exists;
+}
+
+/*
+ * Hash function used by the Simple Datagram API. Hashes only a VMCI id
+ * (not the full VMCI handle), based on the djb2 hash function by
+ * Dan Bernstein.
+ */
+int vmci_hash_calc(uint32_t id, unsigned size)
+{
+	unsigned i;
+	int hash = 5381;
+
+	for (i = 0; i < sizeof id; i++)
+		hash = ((hash << 5) + hash) + (uint8_t) (id >> (i * 8));
+
+	return hash & (size - 1);
+}
diff --git a/drivers/misc/vmw_vmci/vmci_hash_table.h b/drivers/misc/vmw_vmci/vmci_hash_table.h
new file mode 100644
index 0000000..8e5c83b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_hash_table.h
@@ -0,0 +1,56 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_HASH_TABLE_H_
+#define _VMCI_HASH_TABLE_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+struct vmci_hash_entry {
+	struct vmci_handle handle;
+	int refCount;
+	struct vmci_hash_entry *next;
+};
+
+struct vmci_hash_table {
+	struct vmci_hash_entry **entries;
+	int size;		/* Number of buckets in above array. */
+	spinlock_t lock;
+};
+
+struct vmci_hash_table *vmci_hash_create(int size);
+void vmci_hash_destroy(struct vmci_hash_table *table);
+void vmci_hash_init_entry(struct vmci_hash_entry *entry,
+			  struct vmci_handle handle);
+int vmci_hash_add(struct vmci_hash_table *table,
+		  struct vmci_hash_entry *entry);
+int vmci_hash_remove(struct vmci_hash_table *table,
+		     struct vmci_hash_entry *entry);
+struct vmci_hash_entry *vmci_hash_get(struct vmci_hash_table
+				      *table,
+				      struct vmci_handle handle);
+void vmci_hash_hold(struct vmci_hash_table *table,
+		    struct vmci_hash_entry *entry);
+int vmci_hash_release(struct vmci_hash_table *table,
+		      struct vmci_hash_entry *entry);
+bool vmci_hash_exists(struct vmci_hash_table *table,
+		      struct vmci_handle handle);
+int vmci_hash_calc(uint32_t id, unsigned size);
+
+#endif /* _VMCI_HASH_TABLE_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread
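
A minimal usage sketch of the hash table API added by this patch.
Embedding the entry in a client structure, the bucket count, and the
allocation flags are assumptions for illustration; note that
vmci_hash_calc masks with (size - 1), so the bucket count should be a
power of two:

#include <linux/slab.h>
#include "vmci_hash_table.h"

/* Hypothetical client object that embeds a hash entry. */
struct example_resource {
	struct vmci_hash_entry entry;
	int payload;
};

static void hash_table_example(struct vmci_handle handle)
{
	struct vmci_hash_table *table;
	struct example_resource *res;
	struct vmci_hash_entry *e;

	table = vmci_hash_create(128);	/* power-of-two bucket count */
	if (!table)
		return;

	res = kzalloc(sizeof(*res), GFP_KERNEL);
	if (!res)
		goto out;

	vmci_hash_init_entry(&res->entry, handle);	/* refCount starts at 0 */
	vmci_hash_add(table, &res->entry);		/* links entry, takes a reference */

	e = vmci_hash_get(table, handle);		/* extra reference on success */
	if (e)
		vmci_hash_release(table, e);		/* drop the lookup reference */

	/* Unlink and drop the reference taken by vmci_hash_add. */
	if (vmci_hash_remove(table, &res->entry) == VMCI_SUCCESS_ENTRY_DEAD)
		kfree(res);

out:
	vmci_hash_destroy(table);
}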


* [vmw_vmci 08/11] Apply VMCI queue pairs
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

VMCI queue pairs provide bi-directional, ordered communication between
the host and guests; a simplified sketch of the copy path used by the
enqueue and dequeue routines follows the diffstat below.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 3548 +++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_queue_pair.h |  182 ++
 2 files changed, 3730 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.h
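
Reviewer's sketch of the page-walk pattern that the enqueue/dequeue copy
helpers below use (see __qp_memcpy_to_queue). The function and parameter
names here are illustrative and the sketch omits the iovec and pre-mapped
(vmapped) cases; only the per-page kmap/copy/kunmap structure mirrors the
patch:

#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Copy 'size' bytes from 'src' into a queue backed by 'pages',
 * starting at byte offset 'offset'.  Assumes, like the real code,
 * that offset + size does not wrap around the end of the queue.
 */
static void example_copy_to_queue_pages(struct page **pages, u64 offset,
					const void *src, size_t size)
{
	size_t copied = 0;

	while (copied < size) {
		u64 page_index = (offset + copied) / PAGE_SIZE;
		size_t page_offset = (offset + copied) & (PAGE_SIZE - 1);
		size_t chunk = min_t(size_t, size - copied,
				     PAGE_SIZE - page_offset);
		void *va = kmap(pages[page_index]);

		memcpy((u8 *)va + page_offset, (const u8 *)src + copied, chunk);
		kunmap(pages[page_index]);
		copied += chunk;
	}
}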

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
new file mode 100644
index 0000000..11d111b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -0,0 +1,3548 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/device-mapper.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/semaphore.h>
+#include <linux/socket.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_handle_array.h"
+#include "vmci_hash_table.h"
+#include "vmci_queue_pair.h"
+#include "vmci_resource.h"
+#include "vmci_route.h"
+
+/*
+ * In the following, we will distinguish between two kinds of VMX processes -
+ * the ones with versions lower than VMCI_VERSION_NOVMVM that use specialized
+ * VMCI page files in the VMX and supporting VM to VM communication and the
+ * newer ones that use the guest memory directly. We will in the following
+ * refer to the older VMX versions as old-style VMX'en, and the newer ones as
+ * new-style VMX'en.
+ *
+ * The state transition diagram is as follows (the VMCIQPB_ prefix has been
+ * removed for readability) - see below for more details on the transitions:
+ *
+ *            --------------  NEW  -------------
+ *            |                                |
+ *           \_/                              \_/
+ *     CREATED_NO_MEM <-----------------> CREATED_MEM
+ *            |    |                           |
+ *            |    o-----------------------o   |
+ *            |                            |   |
+ *           \_/                          \_/ \_/
+ *     ATTACHED_NO_MEM <----------------> ATTACHED_MEM
+ *            |                            |   |
+ *            |     o----------------------o   |
+ *            |     |                          |
+ *           \_/   \_/                        \_/
+ *     SHUTDOWN_NO_MEM <----------------> SHUTDOWN_MEM
+ *            |                                |
+ *            |                                |
+ *            -------------> gone <-------------
+ *
+ * In more detail. When a VMCI queue pair is first created, it will be in the
+ * VMCIQPB_NEW state. It will then move into one of the following states:
+ *
+ * - VMCIQPB_CREATED_NO_MEM: this state indicates that either:
+ *
+ *     - the create was performed by a host endpoint, in which case there is
+ *       no backing memory yet.
+ *
+ *     - the create was initiated by an old-style VMX, that uses
+ *       vmci_qp_broker_set_page_store to specify the UVAs of the queue pair at
+ *       a later point in time. This state can be distinguished from the one
+ *       above by the context ID of the creator. A host side is not allowed to
+ *       attach until the page store has been set.
+ *
+ * - VMCIQPB_CREATED_MEM: this state is the result when the queue pair
+ *     is created by a VMX using the queue pair device backend that
+ *     sets the UVAs of the queue pair immediately and stores the
+ *     information for later attachers. At this point, it is ready for
+ *     the host side to attach to it.
+ *
+ * Once the queue pair is in one of the created states (with the exception of
+ * the case mentioned for older VMX'en above), it is possible to attach to the
+ * queue pair. Again we have two new states possible:
+ *
+ * - VMCIQPB_ATTACHED_MEM: this state can be reached through the following
+ *   paths:
+ *
+ *     - from VMCIQPB_CREATED_NO_MEM when a new-style VMX allocates a queue
+ *       pair, and attaches to a queue pair previously created by the host side.
+ *
+ *     - from VMCIQPB_CREATED_MEM when the host side attaches to a queue pair
+ *       already created by a guest.
+ *
+ *     - from VMCIQPB_ATTACHED_NO_MEM, when an old-style VMX calls
+ *       vmci_qp_broker_set_page_store (see below).
+ *
+ * - VMCIQPB_ATTACHED_NO_MEM: If the queue pair already was in the
+ *     VMCIQPB_CREATED_NO_MEM due to a host side create, an old-style VMX will
+ *     bring the queue pair into this state. Once vmci_qp_broker_set_page_store
+ *     is called to register the user memory, the VMCIQPB_ATTACH_MEM state
+ *     will be entered.
+ *
+ * From the attached queue pair, the queue pair can enter the shutdown states
+ * when either side of the queue pair detaches. If the guest side detaches
+ * first, the queue pair will enter the VMCIQPB_SHUTDOWN_NO_MEM state, where
+ * the content of the queue pair will no longer be available. If the host
+ * side detaches first, the queue pair will either enter the
+ * VMCIQPB_SHUTDOWN_MEM, if the guest memory is currently mapped, or
+ * VMCIQPB_SHUTDOWN_NO_MEM, if the guest memory is not mapped
+ * (e.g., the host detaches while a guest is stunned).
+ *
+ * New-style VMX'en will also unmap guest memory, if the guest is
+ * quiesced, e.g., during a snapshot operation. In that case, the guest
+ * memory will no longer be available, and the queue pair will transition from
+ * *_MEM state to a *_NO_MEM state. The VMX may later map the memory once more,
+ * in which case the queue pair will transition from the *_NO_MEM state at that
+ * point back to the *_MEM state. Note that the *_NO_MEM state may have changed,
+ * since the peer may have either attached or detached in the meantime. The
+ * values are laid out such that ++ on a state will move from a *_NO_MEM to a
+ * *_MEM state, and vice versa.
+ */
+
+/*
+ * VMCIMemcpy{To,From}QueueFunc() prototypes.  Functions of these
+ * types are passed around to enqueue and dequeue routines.  Note that
+ * often the functions passed are simply wrappers around memcpy
+ * itself.
+ *
+ * Note: In order for the memcpy typedefs to be compatible with the VMKernel,
+ * there's an unused last parameter for the hosted side.  In
+ * ESX, that parameter holds a buffer type.
+ */
+typedef int VMCIMemcpyToQueueFunc(struct vmci_queue *queue,
+				  uint64_t queueOffset, const void *src,
+				  size_t srcOffset, size_t size);
+typedef int VMCIMemcpyFromQueueFunc(void *dest, size_t destOffset,
+				    const struct vmci_queue *queue,
+				    uint64_t queueOffset, size_t size);
+
+/* The Kernel specific component of the struct vmci_queue structure. */
+struct vmci_queue_kern_if {
+	struct page **page;
+	struct page **headerPage;
+	void *va;
+	struct semaphore __mutex;
+	struct semaphore *mutex;
+	bool host;
+	size_t numPages;
+	bool mapped;
+};
+
+/*
+ * This structure is opaque to the clients.
+ */
+struct vmci_qp {
+	struct vmci_handle handle;
+	struct vmci_queue *produceQ;
+	struct vmci_queue *consumeQ;
+	uint64_t produceQSize;
+	uint64_t consumeQSize;
+	uint32_t peer;
+	uint32_t flags;
+	uint32_t privFlags;
+	bool guestEndpoint;
+	uint32_t blocked;
+	wait_queue_head_t event;
+};
+
+enum qp_broker_state {
+	VMCIQPB_NEW,
+	VMCIQPB_CREATED_NO_MEM,
+	VMCIQPB_CREATED_MEM,
+	VMCIQPB_ATTACHED_NO_MEM,
+	VMCIQPB_ATTACHED_MEM,
+	VMCIQPB_SHUTDOWN_NO_MEM,
+	VMCIQPB_SHUTDOWN_MEM,
+	VMCIQPB_GONE
+};
+
+#define QPBROKERSTATE_HAS_MEM(_qpb) (_qpb->state == VMCIQPB_CREATED_MEM || \
+				     _qpb->state == VMCIQPB_ATTACHED_MEM || \
+				     _qpb->state == VMCIQPB_SHUTDOWN_MEM)
+
+/*
+ * In the queue pair broker, we always use the guest's point of view for
+ * the produce and consume queue values and references, e.g., the
+ * produce queue size stored is the guest's produce queue size. The
+ * host endpoint will need to swap these around. The only exception is
+ * the local queue pairs on the host, in which case the host endpoint
+ * that creates the queue pair will have the right orientation, and
+ * the attaching host endpoint will need to swap.
+ */
+struct qp_entry {
+	struct list_head listItem;
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint32_t refCount;
+};
+
+struct qp_broker_entry {
+	struct qp_entry qp;
+	uint32_t createId;
+	uint32_t attachId;
+	enum qp_broker_state state;
+	bool requireTrustedAttach;
+	bool createdByTrusted;
+	bool vmciPageFiles; /* Created by VMX using VMCI page files */
+	struct vmci_queue *produceQ;
+	struct vmci_queue *consumeQ;
+	struct vmci_queue_header savedProduceQ;
+	struct vmci_queue_header savedConsumeQ;
+	VMCIEventReleaseCB wakeupCB;
+	void *clientData;
+	void *localMem;	 /* Kernel memory for local queue pair */
+};
+
+struct qp_guest_endpoint {
+	struct qp_entry qp;
+	uint64_t numPPNs;
+	void *produceQ;
+	void *consumeQ;
+	struct PPNSet ppnSet;
+};
+
+struct qp_list {
+	struct list_head head;
+	struct semaphore mutex;
+};
+
+static struct qp_list qpBrokerList;
+static struct qp_list qpGuestEndpoints;
+
+#define INVALID_VMCI_GUEST_MEM_ID  0
+#define QPE_NUM_PAGES(_QPE) ((uint32_t)					\
+			     (dm_div_up(_QPE.produceSize, PAGE_SIZE) +	\
+			      dm_div_up(_QPE.consumeSize, PAGE_SIZE) + 2))
+
+/*
+ * Frees kernel VA space for a given queue and its queue header, and
+ * frees physical data pages.
+ */
+static void qp_free_queue(void *q,
+			  uint64_t size)
+{
+	struct vmci_queue *queue = q;
+
+	if (queue) {
+		uint64_t i = dm_div_up(size, PAGE_SIZE);
+
+		if (queue->kernelIf->mapped) {
+			ASSERT(queue->kernelIf->va);
+			vunmap(queue->kernelIf->va);
+			queue->kernelIf->va = NULL;
+		}
+
+		while (i)
+			__free_page(queue->kernelIf->page[--i]);
+
+		vfree(queue->qHeader);
+	}
+}
+
+
+/*
+ * Allocates kernel VA space of specified size, plus space for the
+ * queue structure/kernel interface and the queue header.  Allocates
+ * physical pages for the queue data pages.
+ *
+ * PAGE m:      struct vmci_queue_header (struct vmci_queue->qHeader)
+ * PAGE m+1:    struct vmci_queue
+ * PAGE m+1+q:  struct vmci_queue_kern_if (struct vmci_queue->kernelIf)
+ * PAGE n-size: Data pages (struct vmci_queue->kernelIf->page[])
+ */
+static void *qp_alloc_queue(uint64_t size,
+			    uint32_t flags)
+{
+	uint64_t i;
+	struct vmci_queue *queue;
+	struct vmci_queue_header *qHeader;
+	const uint64_t numDataPages = dm_div_up(size, PAGE_SIZE);
+	const uint queueSize =
+		PAGE_SIZE +
+		sizeof(*queue) + sizeof(*(queue->kernelIf)) +
+		numDataPages * sizeof(*(queue->kernelIf->page));
+
+	ASSERT(size <= VMCI_MAX_GUEST_QP_MEMORY);
+	ASSERT(!QP_PINNED(flags) || size <= VMCI_MAX_PINNED_QP_MEMORY);
+
+	qHeader = vmalloc(queueSize);
+	if (!qHeader)
+		return NULL;
+
+	queue = (struct vmci_queue *)((uint8_t *) qHeader + PAGE_SIZE);
+	queue->qHeader = qHeader;
+	queue->savedHeader = NULL;
+	queue->kernelIf = (struct vmci_queue_kern_if *)((uint8_t *) queue +
+							sizeof(*queue));
+	queue->kernelIf->headerPage = NULL;	/* Unused in guest. */
+	queue->kernelIf->page =
+		(struct page **)((uint8_t *) queue->kernelIf +
+				 sizeof(*(queue->kernelIf)));
+	queue->kernelIf->host = false;
+	queue->kernelIf->va = NULL;
+	queue->kernelIf->mapped = false;
+
+	for (i = 0; i < numDataPages; i++) {
+		queue->kernelIf->page[i] = alloc_pages(GFP_KERNEL, 0);
+		if (!queue->kernelIf->page[i])
+			goto fail;
+	}
+
+	if (QP_PINNED(flags)) {
+		queue->kernelIf->va = vmap(queue->kernelIf->page, numDataPages,
+					   VM_MAP, PAGE_KERNEL);
+		if (!queue->kernelIf->va)
+			goto fail;
+
+		queue->kernelIf->mapped = true;
+	}
+
+	return (void *)queue;
+
+fail:
+	qp_free_queue(queue, i * PAGE_SIZE);
+	return NULL;
+}
+
+/*
+ * Copies from a given buffer or iovector to a VMCI Queue.  Uses
+ * kmap()/kunmap() to dynamically map/unmap required portions of the queue
+ * by traversing the offset -> page translation structure for the queue.
+ * Assumes that offset + size does not wrap around in the queue.
+ */
+static int __qp_memcpy_to_queue(struct vmci_queue *queue,
+				uint64_t queueOffset,
+				const void *src,
+				size_t size,
+				bool isIovec)
+{
+	struct vmci_queue_kern_if *kernelIf = queue->kernelIf;
+	size_t bytesCopied = 0;
+
+	while (bytesCopied < size) {
+		uint64_t pageIndex = (queueOffset + bytesCopied) / PAGE_SIZE;
+		size_t pageOffset =
+			(queueOffset + bytesCopied) & (PAGE_SIZE - 1);
+		void *va;
+		size_t toCopy;
+
+		if (!kernelIf->mapped)
+			va = kmap(kernelIf->page[pageIndex]);
+		else
+			va = (void *)((uint8_t *)kernelIf->va +
+				      (pageIndex * PAGE_SIZE));
+
+		if (size - bytesCopied > PAGE_SIZE - pageOffset) {
+			/* Enough payload to fill the rest of this page. */
+			toCopy = PAGE_SIZE - pageOffset;
+		} else {
+			toCopy = size - bytesCopied;
+		}
+
+		if (isIovec) {
+			struct iovec *iov = (struct iovec *)src;
+			int err;
+
+			/* The iovec will track bytesCopied internally. */
+			err = memcpy_fromiovec((uint8_t *) va + pageOffset,
+					       iov, toCopy);
+			if (err != 0) {
+				kunmap(kernelIf->page[pageIndex]);
+				return VMCI_ERROR_INVALID_ARGS;
+			}
+		} else {
+			memcpy((uint8_t *) va + pageOffset,
+			       (uint8_t *) src + bytesCopied, toCopy);
+		}
+
+		bytesCopied += toCopy;
+		if (!kernelIf->mapped)
+			kunmap(kernelIf->page[pageIndex]);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Copies to a given buffer or iovector from a VMCI Queue.  Uses
+ * kmap()/kunmap() to dynamically map/unmap required portions of the queue
+ * by traversing the offset -> page translation structure for the queue.
+ * Assumes that offset + size does not wrap around in the queue.
+ */
+static int __qp_memcpy_from_queue(void *dest,
+				  const struct vmci_queue *queue,
+				  uint64_t queueOffset,
+				  size_t size,
+				  bool isIovec)
+{
+	struct vmci_queue_kern_if *kernelIf = queue->kernelIf;
+	size_t bytesCopied = 0;
+
+	while (bytesCopied < size) {
+		uint64_t pageIndex = (queueOffset + bytesCopied) / PAGE_SIZE;
+		size_t pageOffset =
+			(queueOffset + bytesCopied) & (PAGE_SIZE - 1);
+		void *va;
+		size_t toCopy;
+
+		if (!kernelIf->mapped)
+			va = kmap(kernelIf->page[pageIndex]);
+		else
+			va = (void *)((uint8_t *)kernelIf->va +
+				      (pageIndex * PAGE_SIZE));
+
+		if (size - bytesCopied > PAGE_SIZE - pageOffset) {
+			/* Enough payload to fill up this page. */
+			toCopy = PAGE_SIZE - pageOffset;
+		} else {
+			toCopy = size - bytesCopied;
+		}
+
+		if (isIovec) {
+			struct iovec *iov = (struct iovec *)dest;
+			int err;
+
+			/* The iovec will track bytesCopied internally. */
+			err = memcpy_toiovec(iov, (uint8_t *) va + pageOffset,
+					     toCopy);
+			if (err != 0) {
+				kunmap(kernelIf->page[pageIndex]);
+				return VMCI_ERROR_INVALID_ARGS;
+			}
+		} else {
+			memcpy((uint8_t *) dest + bytesCopied,
+			       (uint8_t *) va + pageOffset, toCopy);
+		}
+
+		bytesCopied += toCopy;
+		if (!kernelIf->mapped)
+			kunmap(kernelIf->page[pageIndex]);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+
+/*
+ * Allocates two lists of PPNs --- one for the pages in the produce queue,
+ * and the other for the pages in the consume queue. Initializes the lists
+ * of PPNs with the page frame numbers of the KVA for the two queues (and
+ * the queue headers).
+ */
+static int qp_alloc_ppn_set(void *prodQ,
+			    uint64_t numProducePages,
+			    void *consQ,
+			    uint64_t numConsumePages,
+			    struct PPNSet *ppnSet)
+{
+	uint32_t *producePPNs;
+	uint32_t *consumePPNs;
+	struct vmci_queue *produceQ = prodQ;
+	struct vmci_queue *consumeQ = consQ;
+	uint64_t i;
+
+	if (!produceQ || !numProducePages || !consumeQ ||
+	    !numConsumePages || !ppnSet)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (ppnSet->initialized)
+		return VMCI_ERROR_ALREADY_EXISTS;
+
+	producePPNs =
+		kmalloc(numProducePages * sizeof *producePPNs, GFP_KERNEL);
+	if (!producePPNs)
+		return VMCI_ERROR_NO_MEM;
+
+	consumePPNs =
+		kmalloc(numConsumePages * sizeof *consumePPNs, GFP_KERNEL);
+	if (!consumePPNs) {
+		kfree(producePPNs);
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	producePPNs[0] = page_to_pfn(vmalloc_to_page(produceQ->qHeader));
+	for (i = 1; i < numProducePages; i++) {
+		unsigned long pfn;
+
+		producePPNs[i] = pfn =
+			page_to_pfn(produceQ->kernelIf->page[i - 1]);
+
+		/* Fail allocation if PFN isn't supported by hypervisor. */
+		if (sizeof pfn > sizeof *producePPNs && pfn != producePPNs[i])
+			goto ppnError;
+	}
+
+	consumePPNs[0] = page_to_pfn(vmalloc_to_page(consumeQ->qHeader));
+	for (i = 1; i < numConsumePages; i++) {
+		unsigned long pfn;
+
+		consumePPNs[i] = pfn =
+			page_to_pfn(consumeQ->kernelIf->page[i - 1]);
+
+		/* Fail allocation if PFN isn't supported by hypervisor. */
+		if (sizeof pfn > sizeof *consumePPNs && pfn != consumePPNs[i])
+			goto ppnError;
+	}
+
+	ppnSet->numProducePages = numProducePages;
+	ppnSet->numConsumePages = numConsumePages;
+	ppnSet->producePPNs = producePPNs;
+	ppnSet->consumePPNs = consumePPNs;
+	ppnSet->initialized = true;
+	return VMCI_SUCCESS;
+
+ppnError:
+	kfree(producePPNs);
+	kfree(consumePPNs);
+	return VMCI_ERROR_INVALID_ARGS;
+}
+
+/*
+ * Frees the two lists of PPNs for a queue pair.
+ */
+static void qp_free_ppn_set(struct PPNSet *ppnSet)
+{
+	ASSERT(ppnSet);
+	if (ppnSet->initialized) {
+		/* Do not call these functions on NULL inputs. */
+		ASSERT(ppnSet->producePPNs && ppnSet->consumePPNs);
+		kfree(ppnSet->producePPNs);
+		kfree(ppnSet->consumePPNs);
+	}
+	memset(ppnSet, 0, sizeof *ppnSet);
+}
+
+/*
+ * Populates the list of PPNs in the hypercall structure with the PPNs
+ * of the produce queue and the consume queue.
+ */
+static int qp_populate_ppn_set(uint8_t *callBuf,
+			       const struct PPNSet *ppnSet)
+{
+	ASSERT(callBuf && ppnSet && ppnSet->initialized);
+	memcpy(callBuf, ppnSet->producePPNs,
+	       ppnSet->numProducePages * sizeof *ppnSet->producePPNs);
+	memcpy(callBuf +
+	       ppnSet->numProducePages * sizeof *ppnSet->producePPNs,
+	       ppnSet->consumePPNs,
+	       ppnSet->numConsumePages * sizeof *ppnSet->consumePPNs);
+
+	return VMCI_SUCCESS;
+}
+
+static int qp_memcpy_to_queue(struct vmci_queue *queue,
+			      uint64_t queueOffset,
+			      const void *src,
+			      size_t srcOffset,
+			      size_t size)
+{
+	return __qp_memcpy_to_queue(queue, queueOffset,
+				    (uint8_t *) src + srcOffset, size, false);
+}
+
+static int qp_memcpy_from_queue(void *dest,
+				size_t destOffset,
+				const struct vmci_queue *queue,
+				uint64_t queueOffset,
+				size_t size)
+{
+	return __qp_memcpy_from_queue((uint8_t *) dest + destOffset,
+				      queue, queueOffset, size, false);
+}
+
+/*
+ * Copies from a given iovec to a VMCI Queue.
+ */
+static int qp_memcpy_to_queue_iov(struct vmci_queue *queue,
+				  uint64_t queueOffset,
+				  const void *src,
+				  size_t srcOffset,
+				  size_t size)
+{
+
+	/*
+	 * We ignore srcOffset because src is really a struct iovec * and will
+	 * maintain offset internally.
+	 */
+	return __qp_memcpy_to_queue(queue, queueOffset, src, size, true);
+}
+
+/*
+ * Copies to a given iovec from a VMCI Queue.
+ */
+static int qp_memcpy_from_queue_iov(void *dest,
+				    size_t destOffset,
+				    const struct vmci_queue *queue,
+				    uint64_t queueOffset,
+				    size_t size)
+{
+	/*
+	 * We ignore destOffset because dest is really a struct iovec * and will
+	 * maintain offset internally.
+	 */
+	return __qp_memcpy_from_queue(dest, queue, queueOffset, size, true);
+}
+
+/*
+ * Allocates kernel VA space of specified size plus space for the queue
+ * and kernel interface.  This is different from the guest queue allocator,
+ * because we do not allocate our own queue header/data pages here but
+ * share those of the guest.
+ */
+static struct vmci_queue *qp_host_alloc_queue(uint64_t size)
+{
+	struct vmci_queue *queue;
+	const size_t numPages = dm_div_up(size, PAGE_SIZE) + 1;
+	const size_t queueSize = sizeof(*queue) + sizeof(*(queue->kernelIf));
+	const size_t queuePageSize = numPages * sizeof(*queue->kernelIf->page);
+
+	queue = kzalloc(queueSize + queuePageSize, GFP_KERNEL);
+	if (queue) {
+		queue->qHeader = NULL;
+		queue->savedHeader = NULL;
+		queue->kernelIf =
+			(struct vmci_queue_kern_if *)((uint8_t *) queue +
+						      sizeof(*queue));
+		queue->kernelIf->host = true;
+		queue->kernelIf->mutex = NULL;
+		queue->kernelIf->numPages = numPages;
+		queue->kernelIf->headerPage =
+			(struct page **)((uint8_t *) queue + queueSize);
+		queue->kernelIf->page = &queue->kernelIf->headerPage[1];
+		queue->kernelIf->va = NULL;
+		queue->kernelIf->mapped = false;
+	}
+
+	return queue;
+}
+
+/*
+ * Frees kernel memory for a given queue (header plus translation
+ * structure).
+ */
+static void qp_host_free_queue(struct vmci_queue *queue,
+			       uint64_t queueSize)
+{
+	kfree(queue);
+}
+
+/*
+ * Initialize the mutex for the pair of queues.  This mutex is used to
+ * protect the qHeader and the buffer from changing out from under any
+ * users of either queue.  Of course, it is only effective if the mutex
+ * is actually acquired.  The queue structure must lie in non-paged memory,
+ * or we cannot guarantee access to the mutex.
+ */
+static void qp_init_queue_mutex(struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ);
+	ASSERT(consumeQ);
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+
+	/*
+	 * Only the host queue has shared state - the guest queues do not
+	 * need to synchronize access using a queue mutex.
+	 */
+
+	if (produceQ->kernelIf->host) {
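+		/*
+		 * Both queues point at the semaphore embedded in the produce
+		 * queue, so a single down()/up() protects the whole pair.
+		 */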
+		produceQ->kernelIf->mutex = &produceQ->kernelIf->__mutex;
+		consumeQ->kernelIf->mutex = &produceQ->kernelIf->__mutex;
+		sema_init(produceQ->kernelIf->mutex, 1);
+	}
+}
+
+/*
+ * Cleans up the mutex for the pair of queues.
+ */
+static void qp_cleanup_queue_mutex(struct vmci_queue *produceQ,
+				   struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ);
+	ASSERT(consumeQ);
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+
+	if (produceQ->kernelIf->host) {
+		produceQ->kernelIf->mutex = NULL;
+		consumeQ->kernelIf->mutex = NULL;
+	}
+}
+
+/*
+ * Acquire the mutex for the queue.  Note that the produceQ and
+ * the consumeQ share a mutex.  So, only one of the two needs to
+ * be passed in to this routine.  Either will work just fine.
+ */
+static void qp_acquire_queue_mutex(struct vmci_queue *queue)
+{
+	ASSERT(queue);
+	ASSERT(queue->kernelIf);
+
+	if (queue->kernelIf->host) {
+		ASSERT(queue->kernelIf->mutex);
+		down(queue->kernelIf->mutex);
+	}
+}
+
+/*
+ * Release the mutex for the queue.  Note that the produceQ and
+ * the consumeQ share a mutex.  So, only one of the two needs to
+ * be passed in to this routine.  Either will work just fine.
+ */
+static void qp_release_queue_mutex(struct vmci_queue *queue)
+{
+	ASSERT(queue);
+	ASSERT(queue->kernelIf);
+
+	if (queue->kernelIf->host) {
+		ASSERT(queue->kernelIf->mutex);
+		up(queue->kernelIf->mutex);
+	}
+}
+
+/*
+ * Helper function to release pages previously obtained using
+ * get_user_pages().
+ */
+static void qp_release_pages(struct page **pages,
+			     uint64_t numPages,
+			     bool dirty)
+{
+	int i;
+
+	for (i = 0; i < numPages; i++) {
+		ASSERT(pages[i]);
+
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		page_cache_release(pages[i]);
+		pages[i] = NULL;
+	}
+}
+
+/*
+ * Lock the user pages referenced by the {produce,consume}Buffer
+ * struct into memory and populate the {produce,consume}Pages
+ * arrays in the attach structure with them.
+ */
+static int qp_host_get_user_memory(uint64_t produceUVA,
+				   uint64_t consumeUVA,
+				   struct vmci_queue *produceQ,
+				   struct vmci_queue *consumeQ)
+{
+	int retval;
+	int err = VMCI_SUCCESS;
+
+	down_write(&current->mm->mmap_sem);
+	retval = get_user_pages(current,
+				current->mm,
+				(uintptr_t) produceUVA,
+				produceQ->kernelIf->numPages,
+				1, 0, produceQ->kernelIf->headerPage, NULL);
+	if (retval < produceQ->kernelIf->numPages) {
+		pr_warn("get_user_pages(produce) failed (retval=%d)",
+			retval);
+		/* get_user_pages() may have returned a negative error. */
+		if (retval > 0)
+			qp_release_pages(produceQ->kernelIf->headerPage,
+					 retval, false);
+		err = VMCI_ERROR_NO_MEM;
+		goto out;
+	}
+
+	retval = get_user_pages(current,
+				current->mm,
+				(uintptr_t) consumeUVA,
+				consumeQ->kernelIf->numPages,
+				1, 0, consumeQ->kernelIf->headerPage, NULL);
+	if (retval < consumeQ->kernelIf->numPages) {
+		pr_warn("get_user_pages(consume) failed (retval=%d)",
+			retval);
+		/* get_user_pages() may have returned a negative error. */
+		if (retval > 0)
+			qp_release_pages(consumeQ->kernelIf->headerPage,
+					 retval, false);
+		qp_release_pages(produceQ->kernelIf->headerPage,
+				 produceQ->kernelIf->numPages, false);
+		err = VMCI_ERROR_NO_MEM;
+	}
+
+out:
+	up_write(&current->mm->mmap_sem);
+
+	return err;
+}
+
+/*
+ * Registers the specification of the user pages used for backing a queue
+ * pair. Enough information to map in pages is stored in the OS specific
+ * part of the struct vmci_queue structure.
+ */
+static int qp_host_register_user_memory(struct vmci_qp_page_store *pageStore,
+					struct vmci_queue *produceQ,
+					struct vmci_queue *consumeQ)
+{
+	uint64_t produceUVA;
+	uint64_t consumeUVA;
+
+	ASSERT(produceQ->kernelIf->headerPage
+	       && consumeQ->kernelIf->headerPage);
+
+	/*
+	 * The new style and the old style mappings only differ in
+	 * that we get either a single UVA or two UVAs, so we split the
+	 * single UVA range at the appropriate spot.
+	 */
+	produceUVA = pageStore->pages;
+	consumeUVA = pageStore->pages +
+		produceQ->kernelIf->numPages * PAGE_SIZE;
+	return qp_host_get_user_memory(produceUVA, consumeUVA, produceQ,
+				       consumeQ);
+}
+
+/*
+ * Releases and removes the references to user pages stored in the attach
+ * struct.  Pages are released from the page cache and may become
+ * swappable again.
+ */
+static void qp_host_unregister_user_memory(struct vmci_queue *produceQ,
+					   struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+	ASSERT(!produceQ->qHeader && !consumeQ->qHeader);
+
+	qp_release_pages(produceQ->kernelIf->headerPage,
+			 produceQ->kernelIf->numPages, true);
+	memset(produceQ->kernelIf->headerPage, 0,
+	       sizeof *produceQ->kernelIf->headerPage *
+	       produceQ->kernelIf->numPages);
+	qp_release_pages(consumeQ->kernelIf->headerPage,
+			 consumeQ->kernelIf->numPages, true);
+	memset(consumeQ->kernelIf->headerPage, 0,
+	       sizeof *consumeQ->kernelIf->headerPage *
+	       consumeQ->kernelIf->numPages);
+}
+
+/*
+ * Once qp_host_register_user_memory has been performed on a
+ * queue, the queue pair headers can be mapped into the
+ * kernel. Once mapped, they must be unmapped with
+ * qp_host_unmap_queues prior to calling
+ * qp_host_unregister_user_memory.
+ * Pages are pinned.
+ */
+static int qp_host_map_queues(struct vmci_queue *produceQ,
+			      struct vmci_queue *consumeQ)
+{
+	int result;
+
+	if (!produceQ->qHeader || !consumeQ->qHeader) {
+		struct page *headers[2];
+
+		if (produceQ->qHeader != consumeQ->qHeader)
+			return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+
+		if (produceQ->kernelIf->headerPage == NULL ||
+		    *produceQ->kernelIf->headerPage == NULL)
+			return VMCI_ERROR_UNAVAILABLE;
+
+		ASSERT(*produceQ->kernelIf->headerPage
+		       && *consumeQ->kernelIf->headerPage);
+
+		headers[0] = *produceQ->kernelIf->headerPage;
+		headers[1] = *consumeQ->kernelIf->headerPage;
+
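+		/*
+		 * Map the two header pages back to back so that the consume
+		 * header can be addressed at produceQ->qHeader + PAGE_SIZE.
+		 */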
+		produceQ->qHeader = vmap(headers, 2, VM_MAP, PAGE_KERNEL);
+		if (produceQ->qHeader != NULL) {
+			consumeQ->qHeader =
+				(struct vmci_queue_header *)((uint8_t *)
+							     produceQ->qHeader +
+							     PAGE_SIZE);
+			result = VMCI_SUCCESS;
+		} else {
+			pr_warn("vmap failed.");
+			result = VMCI_ERROR_NO_MEM;
+		}
+	} else {
+		result = VMCI_SUCCESS;
+	}
+
+	return result;
+}
+
+/*
+ * Unmaps previously mapped queue pair headers from the kernel.
+ * Pages are unpinned.
+ */
+static int qp_host_unmap_queues(uint32_t gid,
+				struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ)
+{
+	if (produceQ->qHeader) {
+		ASSERT(consumeQ->qHeader);
+
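+		/*
+		 * The headers were mapped as a single two-page region, so
+		 * pass the lower of the two addresses (the start of the
+		 * mapping) to vunmap().
+		 */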
+		if (produceQ->qHeader < consumeQ->qHeader)
+			vunmap(produceQ->qHeader);
+		else
+			vunmap(consumeQ->qHeader);
+
+		produceQ->qHeader = NULL;
+		consumeQ->qHeader = NULL;
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Finds the entry in the list corresponding to a given handle. Assumes
+ * that the list is locked.
+ */
+static struct qp_entry *qp_list_find(struct qp_list *qpList,
+				     struct vmci_handle handle)
+{
+	struct qp_entry *entry;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return NULL;
+
+	list_for_each_entry(entry, &qpList->head, listItem) {
+		if (VMCI_HANDLE_EQUAL(entry->handle, handle))
+			return entry;
+	}
+
+	return NULL;
+}
+
+/*
+ * Dispatches a queue pair event message directly into the local event
+ * queue.
+ */
+static int qp_notify_peer_local(bool attach,
+				struct vmci_handle handle)
+{
+	struct vmci_event_msg *eMsg;
+	struct vmci_event_payld_qp *ePayload;
+	/* buf is only 48 bytes. */
+	char buf[sizeof *eMsg + sizeof *ePayload];
+	uint32_t contextId;
+
+	contextId = VMCI_GetContextID();
+
+	eMsg = (struct vmci_event_msg *)buf;
+	ePayload = vmci_event_data_payload(&eMsg->eventData);
+
+	eMsg->hdr.dst = vmci_make_handle(contextId, VMCI_EVENT_HANDLER);
+	eMsg->hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+	eMsg->hdr.payloadSize =
+		sizeof *eMsg + sizeof *ePayload - sizeof eMsg->hdr;
+	eMsg->eventData.event =
+		attach ? VMCI_EVENT_QP_PEER_ATTACH : VMCI_EVENT_QP_PEER_DETACH;
+	ePayload->peerId = contextId;
+	ePayload->handle = handle;
+
+	return vmci_event_dispatch((struct vmci_dg *)eMsg);
+}
+
+/*
+ * Allocates and initializes a qp_guest_endpoint structure.
+ * Allocates a QueuePair rid (and handle) iff the given entry has
+ * an invalid handle.  0 through VMCI_RESERVED_RESOURCE_ID_MAX
+ * are reserved handles.  Assumes that the QP list mutex is held
+ * by the caller.
+ */
+static struct qp_guest_endpoint *
+qp_guest_endpoint_create(struct vmci_handle handle,
+			 uint32_t peer,
+			 uint32_t flags,
+			 uint64_t produceSize,
+			 uint64_t consumeSize,
+			 void *produceQ,
+			 void *consumeQ)
+{
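+	/* Protected by the qpGuestEndpoints mutex held by the caller. */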
+	static uint32_t queuePairRID = VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+	struct qp_guest_endpoint *entry;
+	/* One page each for the queue headers. */
+	const uint64_t numPPNs = dm_div_up(produceSize, PAGE_SIZE) +
+		dm_div_up(consumeSize, PAGE_SIZE) + 2;
+
+	ASSERT((produceSize || consumeSize) && produceQ && consumeQ);
+
+	if (VMCI_HANDLE_INVALID(handle)) {
+		uint32_t contextID = VMCI_GetContextID();
+		uint32_t oldRID = queuePairRID;
+
+		/*
+		 * Generate a unique QueuePair rid.  Keep on trying
+		 * until we wrap around in the RID space.
+		 */
+		ASSERT(oldRID > VMCI_RESERVED_RESOURCE_ID_MAX);
+		do {
+			handle = vmci_make_handle(contextID, queuePairRID);
+			entry = (struct qp_guest_endpoint *)
+				qp_list_find(&qpGuestEndpoints, handle);
+			queuePairRID++;
+
+			if (unlikely(!queuePairRID))
+				/* Skip the reserved rids. */
+				queuePairRID =
+					VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+
+		} while (entry && queuePairRID != oldRID);
+
+		if (unlikely(entry != NULL)) {
+			ASSERT(queuePairRID == oldRID);
+			/*
+			 * We wrapped around --- no rids were free.
+			 */
+			return NULL;
+		}
+	}
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle) &&
+	       qp_list_find(&qpGuestEndpoints, handle) == NULL);
+	entry = kzalloc(sizeof *entry, GFP_KERNEL);
+	if (entry) {
+		entry->qp.handle = handle;
+		entry->qp.peer = peer;
+		entry->qp.flags = flags;
+		entry->qp.produceSize = produceSize;
+		entry->qp.consumeSize = consumeSize;
+		entry->qp.refCount = 0;
+		entry->numPPNs = numPPNs;
+		entry->produceQ = produceQ;
+		entry->consumeQ = consumeQ;
+		INIT_LIST_HEAD(&entry->qp.listItem);
+	}
+	return entry;
+}
+
+/*
+ * Frees a qp_guest_endpoint structure.
+ */
+static void qp_guest_endpoint_destroy(struct qp_guest_endpoint *entry)
+{
+	ASSERT(entry);
+	ASSERT(entry->qp.refCount == 0);
+
+	qp_free_ppn_set(&entry->ppnSet);
+	qp_cleanup_queue_mutex(entry->produceQ, entry->consumeQ);
+	qp_free_queue(entry->produceQ, entry->qp.produceSize);
+	qp_free_queue(entry->consumeQ, entry->qp.consumeSize);
+	kfree(entry);
+}
+
+/*
+ * Helper to make a QueuePairAlloc hypercall when the driver is
+ * supporting a guest device.
+ */
+static int qp_alloc_hypercall(const struct qp_guest_endpoint *entry)
+{
+	struct vmci_qp_alloc_msg *allocMsg;
+	size_t msgSize;
+	int result;
+
+	if (!entry || entry->numPPNs <= 2)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	ASSERT(!(entry->qp.flags & VMCI_QPFLAG_LOCAL));
+
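+	/* The allocation message is followed by one 32-bit PPN per page. */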
+	msgSize = sizeof *allocMsg + (size_t) entry->numPPNs * sizeof(uint32_t);
+	allocMsg = kmalloc(msgSize, GFP_KERNEL);
+	if (!allocMsg)
+		return VMCI_ERROR_NO_MEM;
+
+	allocMsg->hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_QUEUEPAIR_ALLOC);
+	allocMsg->hdr.src = VMCI_ANON_SRC_HANDLE;
+	allocMsg->hdr.payloadSize = msgSize - VMCI_DG_HEADERSIZE;
+	allocMsg->handle = entry->qp.handle;
+	allocMsg->peer = entry->qp.peer;
+	allocMsg->flags = entry->qp.flags;
+	allocMsg->produceSize = entry->qp.produceSize;
+	allocMsg->consumeSize = entry->qp.consumeSize;
+	allocMsg->numPPNs = entry->numPPNs;
+
+	result =
+		qp_populate_ppn_set((uint8_t *) allocMsg + sizeof *allocMsg,
+				    &entry->ppnSet);
+	if (result == VMCI_SUCCESS)
+		result = vmci_send_dg((struct vmci_dg *)allocMsg);
+
+	kfree(allocMsg);
+
+	return result;
+}
+
+/*
+ * Helper to make a QueuePairDetach hypercall when the driver is
+ * supporting a guest device.
+ */
+static int qp_detatch_hypercall(struct vmci_handle handle)
+{
+	struct vmci_qp_detach_msg detachMsg;
+
+	detachMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_QUEUEPAIR_DETACH);
+	detachMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	detachMsg.hdr.payloadSize = sizeof handle;
+	detachMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&detachMsg);
+}
+
+/*
+ * Adds the given entry to the list. Assumes that the list is locked.
+ */
+static void qp_list_add_entry(struct qp_list *qpList,
+			      struct qp_entry *entry)
+{
+	if (entry)
+		list_add(&entry->listItem, &qpList->head);
+}
+
+/*
+ * Removes the given entry from the list. Assumes that the list is locked.
+ */
+static void qp_list_remove_entry(struct qp_list *qpList,
+				 struct qp_entry *entry)
+{
+	if (entry)
+		list_del(&entry->listItem);
+}
+
+/*
+ * Helper for VMCI QueuePair detach interface. Frees the physical
+ * pages for the queue pair.
+ */
+static int qp_detatch_guest_work(struct vmci_handle handle)
+{
+	int result;
+	struct qp_guest_endpoint *entry;
+	uint32_t refCount = ~0;	/* To avoid compiler warning below */
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+
+	down(&qpGuestEndpoints.mutex);
+
+	entry = (struct qp_guest_endpoint *)
+		qp_list_find(&qpGuestEndpoints, handle);
+	if (!entry) {
+		up(&qpGuestEndpoints.mutex);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	ASSERT(entry->qp.refCount >= 1);
+
+	if (entry->qp.flags & VMCI_QPFLAG_LOCAL) {
+		result = VMCI_SUCCESS;
+
+		if (entry->qp.refCount > 1) {
+			result = qp_notify_peer_local(false, handle);
+			/*
+			 * We can fail to notify a local queuepair
+			 * because we can't allocate.  We still want
+			 * to release the entry if that happens, so
+			 * don't bail out yet.
+			 */
+		}
+	} else {
+		result = qp_detatch_hypercall(handle);
+		if (result < VMCI_SUCCESS) {
+			/*
+			 * We failed to notify a non-local queuepair.
+			 * That other queuepair might still be
+			 * accessing the shared memory, so don't
+			 * release the entry yet.  It will get cleaned
+			 * up by vmci_qp_guest_endpoints_exit() if necessary
+			 * (assuming we are going away, otherwise why
+			 * did this fail?).
+			 */
+
+			up(&qpGuestEndpoints.mutex);
+			return result;
+		}
+	}
+
+	/*
+	 * If we get here then we either failed to notify a local queuepair, or
+	 * we succeeded in all cases.  Release the entry if required.
+	 */
+
+	entry->qp.refCount--;
+	if (entry->qp.refCount == 0)
+		qp_list_remove_entry(&qpGuestEndpoints, &entry->qp);
+
+	/* If we didn't remove the entry, this could change once we unlock. */
+	if (entry)
+		refCount = entry->qp.refCount;
+
+	up(&qpGuestEndpoints.mutex);
+
+	if (refCount == 0)
+		qp_guest_endpoint_destroy(entry);
+
+	return result;
+}
+
+/*
+ * This function handles the actual allocation of a VMCI queue
+ * pair guest endpoint. Allocates physical pages for the queue
+ * pair. It makes OS dependent calls through generic wrappers.
+ */
+static int qp_alloc_guest_work(struct vmci_handle *handle,
+			       struct vmci_queue **produceQ,
+			       uint64_t produceSize,
+			       struct vmci_queue **consumeQ,
+			       uint64_t consumeSize,
+			       uint32_t peer,
+			       uint32_t flags,
+			       uint32_t privFlags)
+{
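+	/* Each queue needs one extra page for its queue header. */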
+	const uint64_t numProducePages = dm_div_up(produceSize, PAGE_SIZE) + 1;
+	const uint64_t numConsumePages = dm_div_up(consumeSize, PAGE_SIZE) + 1;
+	void *myProduceQ = NULL;
+	void *myConsumeQ = NULL;
+	int result;
+	struct qp_guest_endpoint *queuePairEntry = NULL;
+
+	ASSERT(handle && produceQ && consumeQ && (produceSize || consumeSize));
+
+	if (privFlags != VMCI_NO_PRIVILEGE_FLAGS)
+		return VMCI_ERROR_NO_ACCESS;
+
+	down(&qpGuestEndpoints.mutex);
+
+	queuePairEntry = (struct qp_guest_endpoint *)qp_list_find(
+		&qpGuestEndpoints, *handle);
+	if (queuePairEntry) {
+		if (queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) {
+			/* Local attach case. */
+			if (queuePairEntry->qp.refCount > 1) {
+				pr_devel("Error attempting to attach more " \
+					 "than once.");
+				result = VMCI_ERROR_UNAVAILABLE;
+				goto errorKeepEntry;
+			}
+
+			if (queuePairEntry->qp.produceSize != consumeSize
+			    || queuePairEntry->qp.consumeSize !=
+			    produceSize
+			    || queuePairEntry->qp.flags !=
+			    (flags & ~VMCI_QPFLAG_ATTACH_ONLY)) {
+				pr_devel("Error mismatched queue pair in " \
+					 "local attach.");
+				result = VMCI_ERROR_QUEUEPAIR_MISMATCH;
+				goto errorKeepEntry;
+			}
+
+			/*
+			 * Do a local attach.  We swap the consume and
+			 * produce queues for the attacher and deliver
+			 * an attach event.
+			 */
+			result = qp_notify_peer_local(true, *handle);
+			if (result < VMCI_SUCCESS)
+				goto errorKeepEntry;
+
+			myProduceQ = queuePairEntry->consumeQ;
+			myConsumeQ = queuePairEntry->produceQ;
+			goto out;
+		}
+
+		result = VMCI_ERROR_ALREADY_EXISTS;
+		goto errorKeepEntry;
+	}
+
+	myProduceQ = qp_alloc_queue(produceSize, flags);
+	if (!myProduceQ) {
+		pr_warn("Error allocating pages for produce queue.");
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	myConsumeQ = qp_alloc_queue(consumeSize, flags);
+	if (!myConsumeQ) {
+		pr_warn("Error allocating pages for consume queue.");
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	queuePairEntry = qp_guest_endpoint_create(*handle, peer, flags,
+						  produceSize, consumeSize,
+						  myProduceQ, myConsumeQ);
+	if (!queuePairEntry) {
+		pr_warn("Error allocating memory in %s.", __func__);
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	result = qp_alloc_ppn_set(myProduceQ, numProducePages, myConsumeQ,
+				  numConsumePages, &queuePairEntry->ppnSet);
+	if (result < VMCI_SUCCESS) {
+		pr_warn("qp_alloc_ppn_set failed.");
+		goto error;
+	}
+
+	/*
+	 * It's only necessary to notify the host if this queue pair will be
+	 * attached to from another context.
+	 */
+	if (queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) {
+		/* Local create case. */
+		uint32_t contextId = VMCI_GetContextID();
+
+		/*
+		 * Enforce similar checks on local queue pairs as we
+		 * do for regular ones.  The handle's context must
+		 * match the creator or attacher context id (here they
+		 * are both the current context id) and the
+		 * attach-only flag cannot exist during create.  We
+		 * also ensure specified peer is this context or an
+		 * invalid one.
+		 */
+		if (queuePairEntry->qp.handle.context != contextId ||
+		    (queuePairEntry->qp.peer != VMCI_INVALID_ID &&
+		     queuePairEntry->qp.peer != contextId)) {
+			result = VMCI_ERROR_NO_ACCESS;
+			goto error;
+		}
+
+		if (queuePairEntry->qp.flags & VMCI_QPFLAG_ATTACH_ONLY) {
+			result = VMCI_ERROR_NOT_FOUND;
+			goto error;
+		}
+	} else {
+		result = qp_alloc_hypercall(queuePairEntry);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("qp_alloc_hypercall result = %d.",
+				result);
+			goto error;
+		}
+	}
+
+	qp_init_queue_mutex((struct vmci_queue *)myProduceQ,
+			    (struct vmci_queue *)myConsumeQ);
+
+	qp_list_add_entry(&qpGuestEndpoints, &queuePairEntry->qp);
+
+out:
+	queuePairEntry->qp.refCount++;
+	*handle = queuePairEntry->qp.handle;
+	*produceQ = (struct vmci_queue *)myProduceQ;
+	*consumeQ = (struct vmci_queue *)myConsumeQ;
+
+	/*
+	 * We should initialize the queue pair header pages on a local
+	 * queue pair create.  For non-local queue pairs, the
+	 * hypervisor initializes the header pages in the create step.
+	 */
+	if ((queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) &&
+	    queuePairEntry->qp.refCount == 1) {
+		vmci_q_header_init((*produceQ)->qHeader, *handle);
+		vmci_q_header_init((*consumeQ)->qHeader, *handle);
+	}
+
+	up(&qpGuestEndpoints.mutex);
+
+	return VMCI_SUCCESS;
+
+error:
+	up(&qpGuestEndpoints.mutex);
+	if (queuePairEntry) {
+		/* The queues will be freed inside the destroy routine. */
+		qp_guest_endpoint_destroy(queuePairEntry);
+	} else {
+		qp_free_queue(myProduceQ, produceSize);
+		qp_free_queue(myConsumeQ, consumeSize);
+	}
+	return result;
+
+errorKeepEntry:
+	/* This path should only be used when an existing entry was found. */
+	ASSERT(queuePairEntry->qp.refCount > 0);
+	up(&qpGuestEndpoints.mutex);
+	return result;
+}
+
+/*
+ * The first endpoint issuing a queue pair allocation will create the state
+ * of the queue pair in the queue pair broker.
+ *
+ * If the creator is a guest, it will associate a VMX virtual address range
+ * with the queue pair as specified by the pageStore. For compatibility with
+ * older VMX'en, which used a separate step to set the VMX virtual
+ * address range, the virtual address range can be registered later using
+ * vmci_qp_broker_set_page_store. In that case, a pageStore of NULL should be
+ * used.
+ *
+ * If the creator is the host, a pageStore of NULL should be used as well,
+ * since the host is not able to supply a page store for the queue pair.
+ *
+ * For older VMX and host callers, the queue pair will be created in the
+ * VMCIQPB_CREATED_NO_MEM state, and for current VMX callers, it will be
+ * created in the VMCIQPB_CREATED_MEM state.
+ */
+static int qp_broker_create(struct vmci_handle handle,
+			    uint32_t peer,
+			    uint32_t flags,
+			    uint32_t privFlags,
+			    uint64_t produceSize,
+			    uint64_t consumeSize,
+			    struct vmci_qp_page_store *pageStore,
+			    struct vmci_ctx *context,
+			    VMCIEventReleaseCB wakeupCB,
+			    void *clientData,
+			    struct qp_broker_entry **ent)
+{
+	struct qp_broker_entry *entry = NULL;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+	uint64_t guestProduceSize;
+	uint64_t guestConsumeSize;
+
+	/* Do not create if the caller asked not to. */
+	if (flags & VMCI_QPFLAG_ATTACH_ONLY)
+		return VMCI_ERROR_NOT_FOUND;
+
+	/*
+	 * Creator's context ID should match handle's context ID or the creator
+	 * must allow the context in handle's context ID as the "peer".
+	 */
+	if (handle.context != contextId && handle.context != peer)
+		return VMCI_ERROR_NO_ACCESS;
+
+	if (VMCI_CONTEXT_IS_VM(contextId) && VMCI_CONTEXT_IS_VM(peer))
+		return VMCI_ERROR_DST_UNREACHABLE;
+
+	/*
+	 * Creator's context ID for local queue pairs should match the
+	 * peer, if a peer is specified.
+	 */
+	if (isLocal && peer != VMCI_INVALID_ID && contextId != peer)
+		return VMCI_ERROR_NO_ACCESS;
+
+	entry = kzalloc(sizeof *entry, GFP_ATOMIC);
+	if (!entry)
+		return VMCI_ERROR_NO_MEM;
+
+	if (contextId == VMCI_HOST_CONTEXT_ID && !isLocal) {
+		/*
+		 * The queue pair broker entry stores values from the guest
+		 * point of view, so a creating host side endpoint should swap
+		 * produce and consume values -- unless it is a local queue
+		 * pair, in which case no swapping is necessary, since the local
+		 * attacher will swap queues.
+		 */
+
+		guestProduceSize = consumeSize;
+		guestConsumeSize = produceSize;
+	} else {
+		guestProduceSize = produceSize;
+		guestConsumeSize = consumeSize;
+	}
+
+	entry->qp.handle = handle;
+	entry->qp.peer = peer;
+	entry->qp.flags = flags;
+	entry->qp.produceSize = guestProduceSize;
+	entry->qp.consumeSize = guestConsumeSize;
+	entry->qp.refCount = 1;
+	entry->createId = contextId;
+	entry->attachId = VMCI_INVALID_ID;
+	entry->state = VMCIQPB_NEW;
+	entry->requireTrustedAttach =
+		!!(context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED);
+	entry->createdByTrusted = !!(privFlags & VMCI_PRIVILEGE_FLAG_TRUSTED);
+	entry->vmciPageFiles = false;
+	entry->wakeupCB = wakeupCB;
+	entry->clientData = clientData;
+	entry->produceQ = qp_host_alloc_queue(guestProduceSize);
+	if (entry->produceQ == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+	entry->consumeQ = qp_host_alloc_queue(guestConsumeSize);
+	if (entry->consumeQ == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	qp_init_queue_mutex(entry->produceQ, entry->consumeQ);
+
+	INIT_LIST_HEAD(&entry->qp.listItem);
+
+	if (isLocal) {
+		uint8_t *tmp;
+		ASSERT(pageStore == NULL);
+
+		entry->localMem = kcalloc(QPE_NUM_PAGES(entry->qp),
+					  PAGE_SIZE, GFP_KERNEL);
+		if (entry->localMem == NULL) {
+			result = VMCI_ERROR_NO_MEM;
+			goto error;
+		}
+		entry->state = VMCIQPB_CREATED_MEM;
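+		/*
+		 * localMem holds both queues back to back: the produce
+		 * header page and data pages first, then the consume header
+		 * page and data pages.
+		 */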
+		entry->produceQ->qHeader = entry->localMem;
+		tmp = (uint8_t *) entry->localMem + PAGE_SIZE *
+			(dm_div_up(entry->qp.produceSize, PAGE_SIZE) + 1);
+		entry->consumeQ->qHeader = (struct vmci_queue_header *) tmp;
+
+		vmci_q_header_init(entry->produceQ->qHeader, handle);
+		vmci_q_header_init(entry->consumeQ->qHeader, handle);
+	} else if (pageStore) {
+		ASSERT(entry->createId != VMCI_HOST_CONTEXT_ID || isLocal);
+
+		/*
+		 * The VMX already initialized the queue pair headers, so no
+		 * need for the kernel side to do that.
+		 */
+		result = qp_host_register_user_memory(pageStore,
+						      entry->produceQ,
+						      entry->consumeQ);
+		if (result < VMCI_SUCCESS)
+			goto error;
+
+		entry->state = VMCIQPB_CREATED_MEM;
+	} else {
+		/*
+		 * A create without a pageStore may be either a host
+		 * side create (in which case we are waiting for the
+		 * guest side to supply the memory) or an old style
+		 * queue pair create (in which case we will expect a
+		 * set page store call as the next step).
+		 */
+		entry->state = VMCIQPB_CREATED_NO_MEM;
+	}
+
+	qp_list_add_entry(&qpBrokerList, &entry->qp);
+	if (ent != NULL)
+		*ent = entry;
+
+	vmci_ctx_qp_create(context, handle);
+
+	return VMCI_SUCCESS;
+
+error:
+	if (entry != NULL) {
+		qp_host_free_queue(entry->produceQ, guestProduceSize);
+		qp_host_free_queue(entry->consumeQ, guestConsumeSize);
+		kfree(entry);
+	}
+
+	return result;
+}
+
+/*
+ * Enqueues an event datagram to notify the peer VM attached to
+ * the given queue pair handle about attach/detach event by the
+ * given VM.  Returns Payload size of datagram enqueued on
+ * success, error code otherwise.
+ */
+static int qp_notify_peer(bool attach,
+			  struct vmci_handle handle,
+			  uint32_t myId,
+			  uint32_t peerId)
+{
+	int rv;
+	struct vmci_event_msg *eMsg;
+	struct vmci_event_payld_qp *evPayload;
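+	/* The event message and its payload fit in a small stack buffer. */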
+	char buf[sizeof *eMsg + sizeof *evPayload];
+
+	if (VMCI_HANDLE_INVALID(handle) || myId == VMCI_INVALID_ID ||
+	    peerId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * Notification message contains: queue pair handle and
+	 * attaching/detaching VM's context id.
+	 */
+	eMsg = (struct vmci_event_msg *)buf;
+
+	/*
+	 * In vmci_ctx_enqueue_dg() we enforce the upper limit on the
+	 * number of pending events from the hypervisor to a given VM;
+	 * otherwise a rogue VM could do an arbitrary number of attach
+	 * and detach operations, causing memory pressure in the host
+	 * kernel.
+	 */
+
+	/* Clear out any garbage. */
+	memset(eMsg, 0, sizeof buf);
+
+	eMsg->hdr.dst = vmci_make_handle(peerId, VMCI_EVENT_HANDLER);
+	eMsg->hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+	eMsg->hdr.payloadSize = sizeof *eMsg + sizeof *evPayload -
+		sizeof eMsg->hdr;
+	eMsg->eventData.event = attach ?
+		VMCI_EVENT_QP_PEER_ATTACH : VMCI_EVENT_QP_PEER_DETACH;
+	evPayload = vmci_event_data_payload(&eMsg->eventData);
+	evPayload->handle = handle;
+	evPayload->peerId = myId;
+
+	rv = vmci_dg_dispatch(VMCI_HYPERVISOR_CONTEXT_ID,
+			      (struct vmci_dg *)eMsg, false);
+	if (rv < VMCI_SUCCESS)
+		pr_warn("Failed to enqueue QueuePair %s event datagram " \
+			"for context (ID=0x%x).", attach ? "ATTACH" : "DETACH",
+			peerId);
+
+	return rv;
+}
+
+/*
+ * The second endpoint issuing a queue pair allocation will attach to
+ * the queue pair registered with the queue pair broker.
+ *
+ * If the attacher is a guest, it will associate a VMX virtual address
+ * range with the queue pair as specified by the pageStore. At this
+ * point, the already attached host endpoint may start using the queue
+ * pair, and an attach event is sent to it. For compatibility with
+ * older VMX'en, which used a separate step to set the VMX virtual
+ * address range, the virtual address range can be registered later
+ * using vmci_qp_broker_set_page_store. In that case, a pageStore of
+ * NULL should be used, and the attach event will be generated once
+ * the actual page store has been set.
+ *
+ * If the attacher is the host, a pageStore of NULL should be used as
+ * well, since the page store information is already set by the guest.
+ *
+ * For new VMX and host callers, the queue pair will be moved to the
+ * VMCIQPB_ATTACHED_MEM state, and for older VMX callers, it will be
+ * moved to the VMCIQPB_ATTACHED_NO_MEM state.
+ */
+static int qp_broker_attach(struct qp_broker_entry *entry,
+			    uint32_t peer,
+			    uint32_t flags,
+			    uint32_t privFlags,
+			    uint64_t produceSize,
+			    uint64_t consumeSize,
+			    struct vmci_qp_page_store *pageStore,
+			    struct vmci_ctx *context,
+			    VMCIEventReleaseCB wakeupCB,
+			    void *clientData,
+			    struct qp_broker_entry **ent)
+{
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+
+	if (entry->state != VMCIQPB_CREATED_NO_MEM &&
+	    entry->state != VMCIQPB_CREATED_MEM)
+		return VMCI_ERROR_UNAVAILABLE;
+
+	if (isLocal) {
+		if (!(entry->qp.flags & VMCI_QPFLAG_LOCAL) ||
+		    contextId != entry->createId) {
+			return VMCI_ERROR_INVALID_ARGS;
+		}
+	} else if (contextId == entry->createId ||
+		   contextId == entry->attachId) {
+		return VMCI_ERROR_ALREADY_EXISTS;
+	}
+
+	ASSERT(entry->qp.refCount < 2);
+	ASSERT(entry->attachId == VMCI_INVALID_ID);
+
+	if (VMCI_CONTEXT_IS_VM(contextId) &&
+	    VMCI_CONTEXT_IS_VM(entry->createId))
+		return VMCI_ERROR_DST_UNREACHABLE;
+
+	/*
+	 * If we are attaching from a restricted context then the queuepair
+	 * must have been created by a trusted endpoint.
+	 */
+	if ((context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+	    !entry->createdByTrusted)
+		return VMCI_ERROR_NO_ACCESS;
+
+	/*
+	 * If we are attaching to a queuepair that was created by a restricted
+	 * context then we must be trusted.
+	 */
+	if (entry->requireTrustedAttach &&
+	    (!(privFlags & VMCI_PRIVILEGE_FLAG_TRUSTED)))
+		return VMCI_ERROR_NO_ACCESS;
+
+	/*
+	 * If the creator specifies VMCI_INVALID_ID in "peer" field, access
+	 * control check is not performed.
+	 */
+	if (entry->qp.peer != VMCI_INVALID_ID && entry->qp.peer != contextId)
+		return VMCI_ERROR_NO_ACCESS;
+
+	if (entry->createId == VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * Do not attach if the caller doesn't support Host Queue Pairs
+		 * and a host created this queue pair.
+		 */
+
+		if (!vmci_ctx_supports_host_qp(context))
+			return VMCI_ERROR_INVALID_RESOURCE;
+
+	} else if (contextId == VMCI_HOST_CONTEXT_ID) {
+		struct vmci_ctx *createContext;
+		bool supportsHostQP;
+
+		/*
+		 * Do not attach a host to a user created queue pair if that
+		 * user doesn't support host queue pair end points.
+		 */
+
+		createContext = vmci_ctx_get(entry->createId);
+		supportsHostQP = vmci_ctx_supports_host_qp(createContext);
+		vmci_ctx_release(createContext);
+
+		if (!supportsHostQP)
+			return VMCI_ERROR_INVALID_RESOURCE;
+	}
+
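+	/*
+	 * Creator and attacher must request the same flags, apart from the
+	 * asymmetric flags (VMCI_QP_ASYMM on the creator side,
+	 * VMCI_QP_ASYMM_PEER on the attacher side).
+	 */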
+	if ((entry->qp.flags & ~VMCI_QP_ASYMM) != (flags & ~VMCI_QP_ASYMM_PEER))
+		return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * The queue pair broker entry stores values from the guest
+		 * point of view, so an attaching guest should match the values
+		 * stored in the entry.
+		 */
+
+		if (entry->qp.produceSize != produceSize ||
+		    entry->qp.consumeSize != consumeSize) {
+			return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+		}
+	} else if (entry->qp.produceSize != consumeSize ||
+		   entry->qp.consumeSize != produceSize) {
+		return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+	}
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * If a guest attached to a queue pair, it will supply
+		 * the backing memory.  If this is a pre NOVMVM vmx,
+		 * the backing memory will be supplied by calling
+		 * vmci_qp_broker_set_page_store() following the
+		 * return of the vmci_qp_broker_alloc() call. If it is
+		 * a vmx of version NOVMVM or later, the page store
+		 * must be supplied as part of the
+		 * vmci_qp_broker_alloc call.  Under all circumstances, the
+		 * initially created queue pair must not have any memory
+		 * associated with it already.
+		 */
+
+		if (entry->state != VMCIQPB_CREATED_NO_MEM)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		if (pageStore != NULL) {
+			/*
+			 * Patch up host state to point to guest
+			 * supplied memory. The VMX already
+			 * initialized the queue pair headers, so no
+			 * need for the kernel side to do that.
+			 */
+
+			result = qp_host_register_user_memory(pageStore,
+							      entry->produceQ,
+							      entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				return result;
+
+			/*
+			 * Preemptively load in the headers if non-blocking to
+			 * prevent blocking later.
+			 */
+			if (entry->qp.flags & VMCI_QPFLAG_NONBLOCK) {
+				result = qp_host_map_queues(entry->produceQ,
+							    entry->consumeQ);
+				if (result < VMCI_SUCCESS) {
+					qp_host_unregister_user_memory(
+						entry->produceQ,
+						entry->consumeQ);
+					return result;
+				}
+			}
+
+			entry->state = VMCIQPB_ATTACHED_MEM;
+		} else {
+			entry->state = VMCIQPB_ATTACHED_NO_MEM;
+		}
+	} else if (entry->state == VMCIQPB_CREATED_NO_MEM) {
+		/*
+		 * The host side is attempting to attach to a queue
+		 * pair that doesn't have any memory associated with
+		 * it. This must be a pre NOVMVM vmx that hasn't set
+		 * the page store information yet, or a quiesced VM.
+		 */
+
+		return VMCI_ERROR_UNAVAILABLE;
+	} else {
+		/*
+		 * For non-blocking queue pairs, we cannot rely on
+		 * enqueue/dequeue to map in the pages on the
+		 * host-side, since it may block, so we make an
+		 * attempt here.
+		 */
+
+		if (flags & VMCI_QPFLAG_NONBLOCK) {
+			result =
+				qp_host_map_queues(entry->produceQ,
+						   entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				return result;
+
+			entry->qp.flags |= flags &
+				(VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED);
+		}
+
+		/* The host side has successfully attached to a queue pair. */
+		entry->state = VMCIQPB_ATTACHED_MEM;
+	}
+
+	if (entry->state == VMCIQPB_ATTACHED_MEM) {
+		result =
+			qp_notify_peer(true, entry->qp.handle, contextId,
+				       entry->createId);
+		if (result < VMCI_SUCCESS)
+			pr_warn("Failed to notify peer (ID=0x%x) of " \
+				"attach to queue pair (handle=0x%x:0x%x).",
+				entry->createId, entry->qp.handle.context,
+				entry->qp.handle.resource);
+	}
+
+	entry->attachId = contextId;
+	entry->qp.refCount++;
+	if (wakeupCB) {
+		ASSERT(!entry->wakeupCB);
+		entry->wakeupCB = wakeupCB;
+		entry->clientData = clientData;
+	}
+
+	/*
+	 * When attaching to local queue pairs, the context already has
+	 * an entry tracking the queue pair, so don't add another one.
+	 */
+	if (!isLocal)
+		vmci_ctx_qp_create(context, entry->qp.handle);
+	else
+		ASSERT(vmci_ctx_qp_exists(context, entry->qp.handle));
+
+	if (ent != NULL)
+		*ent = entry;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * QueuePair_Alloc for use when setting up queue pair endpoints
+ * on the host.
+ */
+static int qp_broker_alloc(struct vmci_handle handle,
+			   uint32_t peer,
+			   uint32_t flags,
+			   uint32_t privFlags,
+			   uint64_t produceSize,
+			   uint64_t consumeSize,
+			   struct vmci_qp_page_store *pageStore,
+			   struct vmci_ctx *context,
+			   VMCIEventReleaseCB wakeupCB,
+			   void *clientData,
+			   struct qp_broker_entry **ent,
+			   bool *swap)
+{
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool create;
+	struct qp_broker_entry *entry;
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) ||
+	    (flags & ~VMCI_QP_ALL_FLAGS) || isLocal ||
+	    !(produceSize || consumeSize) ||
+	    !context || contextId == VMCI_INVALID_ID ||
+	    handle.context == VMCI_INVALID_ID) {
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	if (pageStore && !VMCI_QP_PAGESTORE_IS_WELLFORMED(pageStore))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * In the initial argument check, we ensure that non-vmkernel hosts
+	 * are not allowed to create local queue pairs.
+	 */
+
+	ASSERT(!isLocal);
+
+	down(&qpBrokerList.mutex);
+
+	if (!isLocal && vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) already attached to queue " \
+			 "pair (handle=0x%x:0x%x).", contextId,
+			 handle.context, handle.resource);
+		up(&qpBrokerList.mutex);
+		return VMCI_ERROR_ALREADY_EXISTS;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		create = true;
+		result =
+			qp_broker_create(handle, peer, flags, privFlags,
+					 produceSize, consumeSize, pageStore,
+					 context, wakeupCB, clientData, ent);
+	} else {
+		create = false;
+		result =
+			qp_broker_attach(entry, peer, flags, privFlags,
+					 produceSize, consumeSize, pageStore,
+					 context, wakeupCB, clientData, ent);
+	}
+
+	up(&qpBrokerList.mutex);
+
+	if (swap)
+		*swap = (contextId == VMCI_HOST_CONTEXT_ID) &&
+			!(create && isLocal);
+
+	return result;
+}
+
+/*
+ * This function implements the kernel API for allocating a queue
+ * pair.
+ */
+static int qp_alloc_host_work(struct vmci_handle *handle,
+			      struct vmci_queue **produceQ,
+			      uint64_t produceSize,
+			      struct vmci_queue **consumeQ,
+			      uint64_t consumeSize,
+			      uint32_t peer,
+			      uint32_t flags,
+			      uint32_t privFlags,
+			      VMCIEventReleaseCB wakeupCB,
+			      void *clientData)
+{
+	struct vmci_ctx *context;
+	struct qp_broker_entry *entry;
+	int result;
+	bool swap;
+
+	if (VMCI_HANDLE_INVALID(*handle)) {
+		uint32_t resourceID;
+
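+		/*
+		 * An invalid handle means the caller wants a fresh resource
+		 * ID allocated in the host context.
+		 */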
+		resourceID = vmci_resource_get_id(VMCI_HOST_CONTEXT_ID);
+		if (resourceID == VMCI_INVALID_ID)
+			return VMCI_ERROR_NO_HANDLE;
+
+		*handle = vmci_make_handle(VMCI_HOST_CONTEXT_ID, resourceID);
+	}
+
+	context = vmci_ctx_get(VMCI_HOST_CONTEXT_ID);
+	ASSERT(context);
+
+	entry = NULL;
+	result =
+		qp_broker_alloc(*handle, peer, flags, privFlags,
+				produceSize, consumeSize, NULL, context,
+				wakeupCB, clientData, &entry, &swap);
+	if (result == VMCI_SUCCESS) {
+		if (swap) {
+			/*
+			 * If this is a local queue pair, the attacher
+			 * will swap around produce and consume
+			 * queues.
+			 */
+
+			*produceQ = entry->consumeQ;
+			*consumeQ = entry->produceQ;
+		} else {
+			*produceQ = entry->produceQ;
+			*consumeQ = entry->consumeQ;
+		}
+	} else {
+		*handle = VMCI_INVALID_HANDLE;
+		pr_devel("queue pair broker failed to alloc (result=%d).",
+			 result);
+	}
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Allocates a VMCI QueuePair. Only checks validity of input
+ * arguments. The real work is done in the host or guest
+ * specific function.
+ */
+int vmci_qp_alloc(struct vmci_handle *handle,
+		  struct vmci_queue **produceQ,
+		  uint64_t produceSize,
+		  struct vmci_queue **consumeQ,
+		  uint64_t consumeSize,
+		  uint32_t peer,
+		  uint32_t flags,
+		  uint32_t privFlags,
+		  bool guestEndpoint,
+		  VMCIEventReleaseCB wakeupCB,
+		  void *clientData)
+{
+	if (!handle || !produceQ || !consumeQ || (!produceSize && !consumeSize)
+	    || (flags & ~VMCI_QP_ALL_FLAGS))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (guestEndpoint)
+		return qp_alloc_guest_work(handle, produceQ,
+					   produceSize, consumeQ,
+					   consumeSize, peer,
+					   flags, privFlags);
+	else
+		return qp_alloc_host_work(handle, produceQ,
+					  produceSize, consumeQ,
+					  consumeSize, peer, flags,
+					  privFlags, wakeupCB,
+					  clientData);
+}
+
+/*
+ * This function implements the host kernel API for detaching from
+ * a queue pair.
+ */
+static int qp_detatch_host_work(struct vmci_handle handle)
+{
+	int result;
+	struct vmci_ctx *context;
+
+	context = vmci_ctx_get(VMCI_HOST_CONTEXT_ID);
+
+	result = vmci_qp_broker_detach(handle, context);
+
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Detaches from a VMCI QueuePair. Only checks validity of input argument.
+ * Real work is done in the host or guest specific function.
+ */
+static int qp_detatch(struct vmci_handle handle,
+		      bool guestEndpoint)
+{
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (guestEndpoint)
+		return qp_detatch_guest_work(handle);
+	else
+		return qp_detatch_host_work(handle);
+}
+
+/*
+ * Initializes the list of QueuePairs.
+ */
+static int qp_list_init(struct qp_list *qpList)
+{
+	INIT_LIST_HEAD(&qpList->head);
+	sema_init(&qpList->mutex, 1);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Returns the entry from the head of the list. Assumes that the list is
+ * locked.
+ */
+static struct qp_entry *qp_list_get_head(struct qp_list *qpList)
+{
+	if (!list_empty(&qpList->head)) {
+		struct qp_entry *entry =
+			list_first_entry(&qpList->head, struct qp_entry,
+					 listItem);
+		return entry;
+	}
+
+	return NULL;
+}
+
+int __init vmci_qp_broker_init(void)
+{
+	return qp_list_init(&qpBrokerList);
+}
+
+void vmci_qp_broker_exit(void)
+{
+	struct qp_broker_entry *entry;
+
+	down(&qpBrokerList.mutex);
+
+	while ((entry = (struct qp_broker_entry *)
+		qp_list_get_head(&qpBrokerList))) {
+		qp_list_remove_entry(&qpBrokerList, &entry->qp);
+		kfree(entry);
+	}
+
+	up(&qpBrokerList.mutex);
+	INIT_LIST_HEAD(&(qpBrokerList.head));
+}
+
+/*
+ * Requests that a queue pair be allocated with the VMCI queue
+ * pair broker. Allocates a queue pair entry if one does not
+ * exist. Attaches to one if it exists, and retrieves the page
+ * files backing that QueuePair.  The queue pair broker lock is
+ * taken internally, so it must not already be held by the caller.
+ */
+int vmci_qp_broker_alloc(struct vmci_handle handle,
+			 uint32_t peer,
+			 uint32_t flags,
+			 uint32_t privFlags,
+			 uint64_t produceSize,
+			 uint64_t consumeSize,
+			 struct vmci_qp_page_store *pageStore,
+			 struct vmci_ctx *context)
+{
+	return qp_broker_alloc(handle, peer, flags, privFlags,
+			       produceSize, consumeSize,
+			       pageStore, context, NULL, NULL, NULL, NULL);
+}
+
+/*
+ * VMX'en with versions lower than VMCI_VERSION_NOVMVM use a separate
+ * step to add the UVAs of the VMX mapping of the queue pair. This function
+ * provides backwards compatibility with such VMX'en, and takes care of
+ * registering the page store for a queue pair previously allocated by the
+ * VMX during create or attach. This function will move the queue pair state
+ * either from VMCIQPB_CREATED_NO_MEM to VMCIQPB_CREATED_MEM or from
+ * VMCIQPB_ATTACHED_NO_MEM to VMCIQPB_ATTACHED_MEM. If moving to the
+ * attached state with memory, the queue pair is ready to be used by the
+ * host peer, and an attached event will be generated.
+ *
+ * The queue pair broker lock is taken internally; it must not be
+ * held by the caller.
+ *
+ * This function is only used by the hosted platform, since there is no
+ * issue with backwards compatibility for vmkernel.
+ */
+int vmci_qp_broker_set_page_store(struct vmci_handle handle,
+				  uint64_t produceUVA,
+				  uint64_t consumeUVA,
+				  struct vmci_ctx *context)
+{
+	struct qp_broker_entry *entry;
+	int result;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * We only support guest to host queue pairs, so the VMX must
+	 * supply UVAs for the mapped page files.
+	 */
+
+	if (produceUVA == 0 || consumeUVA == 0)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_warn("Context (ID=0x%x) not attached to queue pair " \
+			"(handle=0x%x:0x%x).", contextId, handle.context,
+			handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	/*
+	 * If I'm the owner then I can set the page store.
+	 *
+	 * Or, if a host created the QueuePair and I'm the attached peer
+	 * then I can set the page store.
+	 */
+	if (entry->createId != contextId &&
+	    (entry->createId != VMCI_HOST_CONTEXT_ID ||
+	     entry->attachId != contextId)) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTOWNER;
+		goto out;
+	}
+
+	if (entry->state != VMCIQPB_CREATED_NO_MEM &&
+	    entry->state != VMCIQPB_ATTACHED_NO_MEM) {
+		result = VMCI_ERROR_UNAVAILABLE;
+		goto out;
+	}
+
+	result = qp_host_get_user_memory(produceUVA, consumeUVA,
+					 entry->produceQ, entry->consumeQ);
+	if (result < VMCI_SUCCESS)
+		goto out;
+
+	result = qp_host_map_queues(entry->produceQ, entry->consumeQ);
+	if (result < VMCI_SUCCESS) {
+		qp_host_unregister_user_memory(entry->produceQ,
+					       entry->consumeQ);
+		goto out;
+	}
+
+	if (entry->state == VMCIQPB_CREATED_NO_MEM) {
+		entry->state = VMCIQPB_CREATED_MEM;
+	} else {
+		ASSERT(entry->state == VMCIQPB_ATTACHED_NO_MEM);
+		entry->state = VMCIQPB_ATTACHED_MEM;
+	}
+	entry->vmciPageFiles = true;
+
+	if (entry->state == VMCIQPB_ATTACHED_MEM) {
+		result =
+			qp_notify_peer(true, handle, contextId,
+				       entry->createId);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("Failed to notify peer (ID=0x%x) of " \
+				"attach to queue pair (handle=0x%x:0x%x).",
+				entry->createId, entry->qp.handle.context,
+				entry->qp.handle.resource);
+		}
+	}
+
+	result = VMCI_SUCCESS;
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Resets saved queue headers for the given QP broker
+ * entry. Should be used when guest memory becomes available
+ * again, or the guest detaches.
+ */
+static void qp_reset_saved_headers(struct qp_broker_entry *entry)
+{
+	entry->produceQ->savedHeader = NULL;
+	entry->consumeQ->savedHeader = NULL;
+}
+
+/*
+ * The main entry point for detaching from a queue pair registered with the
+ * queue pair broker. If more than one endpoint is attached to the queue
+ * pair, the first endpoint will mainly decrement a reference count and
+ * generate a notification to its peer. The last endpoint will clean up
+ * the queue pair state registered with the broker.
+ *
+ * When a guest endpoint detaches, it will unmap and unregister the guest
+ * memory backing the queue pair. If the host is still attached, it will
+ * no longer be able to access the queue pair content.
+ *
+ * If the queue pair is already in a state where there is no memory
+ * registered for the queue pair (any *_NO_MEM state), it will transition to
+ * the VMCIQPB_SHUTDOWN_NO_MEM state. This will also happen, if a guest
+ * endpoint is the first of two endpoints to detach. If the host endpoint is
+ * the first out of two to detach, the queue pair will move to the
+ * VMCIQPB_SHUTDOWN_MEM state.
+ */
+int vmci_qp_broker_detach(struct vmci_handle handle,
+			  struct vmci_ctx *context)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	uint32_t peerId;
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID) {
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair(handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	if (contextId == entry->createId) {
+		peerId = entry->attachId;
+		entry->createId = VMCI_INVALID_ID;
+	} else {
+		peerId = entry->createId;
+		entry->attachId = VMCI_INVALID_ID;
+	}
+	entry->qp.refCount--;
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		bool headersMapped;
+
+		ASSERT(!isLocal);
+
+		/*
+		 * Pre NOVMVM vmx'en may detach from a queue pair
+		 * before setting the page store, and in that case
+		 * there is no user memory to detach from. Also, more
+		 * recent VMX'en may detach from a queue pair in the
+		 * quiesced state.
+		 */
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		headersMapped = entry->produceQ->qHeader
+			|| entry->consumeQ->qHeader;
+		if (QPBROKERSTATE_HAS_MEM(entry)) {
+			result = qp_host_unmap_queues(
+				INVALID_VMCI_GUEST_MEM_ID, entry->produceQ,
+				 entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				pr_warn("Failed to unmap queue headers " \
+					"for queue pair " \
+					"(handle=0x%x:0x%x,result=%d).",
+					handle.context, handle.resource,
+					result);
+
+			qp_host_unregister_user_memory(entry->produceQ,
+						       entry->consumeQ);
+		}
+
+		if (!headersMapped)
+			qp_reset_saved_headers(entry);
+
+		qp_release_queue_mutex(entry->produceQ);
+
+		if (!headersMapped && entry->wakeupCB)
+			entry->wakeupCB(entry->clientData);
+
+	} else {
+		if (entry->wakeupCB) {
+			entry->wakeupCB = NULL;
+			entry->clientData = NULL;
+		}
+	}
+
+	if (entry->qp.refCount == 0) {
+		qp_list_remove_entry(&qpBrokerList, &entry->qp);
+
+		if (isLocal)
+			kfree(entry->localMem);
+
+		qp_cleanup_queue_mutex(entry->produceQ, entry->consumeQ);
+		qp_host_free_queue(entry->produceQ, entry->qp.produceSize);
+		qp_host_free_queue(entry->consumeQ, entry->qp.consumeSize);
+		kfree(entry);
+
+		vmci_ctx_qp_destroy(context, handle);
+	} else {
+		ASSERT(peerId != VMCI_INVALID_ID);
+		qp_notify_peer(false, handle, contextId, peerId);
+		if (contextId == VMCI_HOST_CONTEXT_ID
+		    && QPBROKERSTATE_HAS_MEM(entry)) {
+			entry->state = VMCIQPB_SHUTDOWN_MEM;
+		} else {
+			entry->state = VMCIQPB_SHUTDOWN_NO_MEM;
+		}
+
+		if (!isLocal)
+			vmci_ctx_qp_destroy(context, handle);
+
+	}
+	result = VMCI_SUCCESS;
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Establishes the necessary mappings for a queue pair given a
+ * reference to the queue pair guest memory. This is usually
+ * called when a guest is unquiesced and the VMX is allowed to
+ * map guest memory once again.
+ */
+int vmci_qp_broker_map(struct vmci_handle handle,
+		       struct vmci_ctx *context,
+		       uint64_t guestMem)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair (handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+	result = VMCI_SUCCESS;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		struct vmci_qp_page_store pageStore;
+
+		ASSERT(entry->state == VMCIQPB_CREATED_NO_MEM ||
+		       entry->state == VMCIQPB_SHUTDOWN_NO_MEM ||
+		       entry->state == VMCIQPB_ATTACHED_NO_MEM);
+		ASSERT(!isLocal);
+
+		pageStore.pages = guestMem;
+		pageStore.len = QPE_NUM_PAGES(entry->qp);
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		qp_reset_saved_headers(entry);
+		result =
+			qp_host_register_user_memory(&pageStore,
+						     entry->produceQ,
+						     entry->consumeQ);
+		qp_release_queue_mutex(entry->produceQ);
+		if (result == VMCI_SUCCESS) {
+			/* Move state from *_NO_MEM to *_MEM */
+
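+			/*
+			 * Each *_MEM state value is assumed to directly
+			 * follow its *_NO_MEM counterpart, which makes the
+			 * increment below a valid state transition.
+			 */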
+			entry->state++;
+
+			ASSERT(entry->state == VMCIQPB_CREATED_MEM ||
+			       entry->state == VMCIQPB_SHUTDOWN_MEM ||
+			       entry->state == VMCIQPB_ATTACHED_MEM);
+
+			if (entry->wakeupCB)
+				entry->wakeupCB(entry->clientData);
+		}
+	}
+
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Saves a snapshot of the queue headers for the given QP broker
+ * entry. Should be used when guest memory is unmapped.
+ * Results:
+ * VMCI_SUCCESS on success, appropriate error code if guest memory
+ * can't be accessed.
+ */
+static int qp_save_headers(struct qp_broker_entry *entry)
+{
+	int result;
+
+	if (entry->produceQ->savedHeader != NULL &&
+	    entry->consumeQ->savedHeader != NULL) {
+		/*
+		 *  If the headers have already been saved, we don't need to do
+		 *  it again, and we don't want to map in the headers
+		 *  unnecessarily.
+		 */
+
+		return VMCI_SUCCESS;
+	}
+
+	if (NULL == entry->produceQ->qHeader
+	    || NULL == entry->consumeQ->qHeader) {
+		result = qp_host_map_queues(entry->produceQ, entry->consumeQ);
+		if (result < VMCI_SUCCESS)
+			return result;
+	}
+
+	memcpy(&entry->savedProduceQ, entry->produceQ->qHeader,
+	       sizeof entry->savedProduceQ);
+	entry->produceQ->savedHeader = &entry->savedProduceQ;
+	memcpy(&entry->savedConsumeQ, entry->consumeQ->qHeader,
+	       sizeof entry->savedConsumeQ);
+	entry->consumeQ->savedHeader = &entry->savedConsumeQ;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Removes all references to the guest memory of a given queue pair, and
+ * will move the queue pair from state *_MEM to *_NO_MEM. It is usually
+ * called when a VM is being quiesced, where access to guest memory
+ * should be avoided.
+ */
+int vmci_qp_broker_unmap(struct vmci_handle handle,
+			 struct vmci_ctx *context,
+			 uint32_t gid)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId,
+			 handle.context, handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair (handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		ASSERT(entry->state != VMCIQPB_CREATED_NO_MEM &&
+		       entry->state != VMCIQPB_SHUTDOWN_NO_MEM &&
+		       entry->state != VMCIQPB_ATTACHED_NO_MEM);
+		ASSERT(!isLocal);
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		result = qp_save_headers(entry);
+		if (result < VMCI_SUCCESS)
+			pr_warn("Failed to save queue headers for " \
+				"queue pair (handle=0x%x:0x%x,result=%d).",
+				handle.context, handle.resource, result);
+
+		qp_host_unmap_queues(gid, entry->produceQ, entry->consumeQ);
+
+		/*
+		 * On hosted, when we unmap queue pairs, the VMX will also
+		 * unmap the guest memory, so we invalidate the previously
+		 * registered memory. If the queue pair is mapped again at a
+		 * later point in time, we will need to reregister the user
+		 * memory with a possibly new user VA.
+		 */
+		qp_host_unregister_user_memory(entry->produceQ,
+					       entry->consumeQ);
+
+		/*
+		 * Move state from *_MEM to *_NO_MEM.
+		 */
+		entry->state--;
+
+		qp_release_queue_mutex(entry->produceQ);
+	}
+
+	result = VMCI_SUCCESS;
+
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+int __devinit vmci_qp_guest_endpoints_init(void)
+{
+	return qp_list_init(&qpGuestEndpoints);
+}
+
+/*
+ * Destroys all guest queue pair endpoints. If active guest queue
+ * pairs still exist, hypercalls to attempt detach from these
+ * queue pairs will be made. Any failure to detach is silently
+ * ignored.
+ */
+void vmci_qp_guest_endpoints_exit(void)
+{
+	struct qp_guest_endpoint *entry;
+
+	down(&qpGuestEndpoints.mutex);
+
+	while ((entry = (struct qp_guest_endpoint *)
+		qp_list_get_head(&qpGuestEndpoints))) {
+
+		/* Don't make a hypercall for local QueuePairs. */
+		if (!(entry->qp.flags & VMCI_QPFLAG_LOCAL))
+			qp_detatch_hypercall(entry->qp.handle);
+
+		/* We cannot fail the exit, so let's reset refCount. */
+		entry->qp.refCount = 0;
+		qp_list_remove_entry(&qpGuestEndpoints, &entry->qp);
+		qp_guest_endpoint_destroy(entry);
+	}
+
+	up(&qpGuestEndpoints.mutex);
+	INIT_LIST_HEAD(&(qpGuestEndpoints.head));
+}
+
+/*
+ * Helper routine that will lock the queue pair before subsequent
+ * operations.
+ * Note: Non-blocking on the host side is currently only implemented in ESX.
+ * Since non-blocking isn't yet implemented on the host personality we
+ * have no reason to acquire a spin lock.  So to avoid the use of an
+ * unnecessary lock only acquire the mutex if we can block.
+ * Note: It is assumed that QPFLAG_PINNED implies QPFLAG_NONBLOCK.  Therefore
+ * we can use the same locking function for access to both the queue
+ * and the queue headers as it is the same logic.  Assert this behavior.
+ */
+static void qp_lock(const struct vmci_qp *qpair)
+{
+	ASSERT(!QP_PINNED(qpair->flags) ||
+	       (QP_PINNED(qpair->flags) && !CAN_BLOCK(qpair->flags)));
+
+	if (CAN_BLOCK(qpair->flags))
+		qp_acquire_queue_mutex(qpair->produceQ);
+}
+
+/*
+ * Helper routine that unlocks the queue pair after calling
+ * qp_lock.  Respects non-blocking and pinning flags.
+ */
+static void qp_unlock(const struct vmci_qp *qpair)
+{
+	if (CAN_BLOCK(qpair->flags))
+		qp_release_queue_mutex(qpair->produceQ);
+}
+
+/*
+ * The queue headers may not be mapped at all times. If a queue is
+ * currently not mapped, an attempt is made to map it here.
+ */
+static int qp_map_queue_headers(struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ,
+				bool canBlock)
+{
+	int result;
+
+	if (NULL == produceQ->qHeader || NULL == consumeQ->qHeader) {
+		if (canBlock)
+			result = qp_host_map_queues(produceQ, consumeQ);
+		else
+			result = VMCI_ERROR_QUEUEPAIR_NOT_READY;
+
+		if (result < VMCI_SUCCESS)
+			return (produceQ->savedHeader &&
+				consumeQ->savedHeader) ?
+				VMCI_ERROR_QUEUEPAIR_NOT_READY :
+				VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Helper routine that will retrieve the produce and consume
+ * headers of a given queue pair. If the guest memory of the
+ * queue pair is currently not available, the saved queue headers
+ * will be returned, if these are available.
+ */
+static int qp_get_queue_headers(const struct vmci_qp *qpair,
+				struct vmci_queue_header **produceQHeader,
+				struct vmci_queue_header **consumeQHeader)
+{
+	int result;
+
+	result = qp_map_queue_headers(qpair->produceQ, qpair->consumeQ,
+				      CAN_BLOCK(qpair->flags));
+	if (result == VMCI_SUCCESS) {
+		*produceQHeader = qpair->produceQ->qHeader;
+		*consumeQHeader = qpair->consumeQ->qHeader;
+	} else if (qpair->produceQ->savedHeader &&
+		   qpair->consumeQ->savedHeader) {
+		ASSERT(!qpair->guestEndpoint);
+		*produceQHeader = qpair->produceQ->savedHeader;
+		*consumeQHeader = qpair->consumeQ->savedHeader;
+		result = VMCI_SUCCESS;
+	}
+
+	return result;
+}
+
+/*
+ * Callback from VMCI queue pair broker indicating that a queue
+ * pair that was previously not ready, now either is ready or
+ * gone forever.
+ */
+static int qp_wakeup_cb(void *clientData)
+{
+	struct vmci_qp *qpair = (struct vmci_qp *)clientData;
+	ASSERT(qpair);
+
+	qp_lock(qpair);
+	while (qpair->blocked > 0) {
+		qpair->blocked--;
+		wake_up(&qpair->event);
+	}
+	qp_unlock(qpair);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Callback from VMCI_WaitOnEvent releasing the queue pair mutex
+ * protecting the queue pair header state.
+ */
+static int qp_release_mutex_cb(void *clientData)
+{
+	struct vmci_qp *qpair = (struct vmci_qp *)clientData;
+	ASSERT(qpair);
+	qp_unlock(qpair);
+	return 0;
+}
+
+/*
+ * Makes the calling thread wait for the queue pair to become
+ * ready for host side access.  Returns true when thread is
+ * woken up after queue pair state change, false otherwise.
+ */
+static bool qp_wait_for_ready_queue(struct vmci_qp *qpair)
+{
+	if (unlikely(qpair->guestEndpoint))
+		ASSERT(false);
+
+	if (qpair->flags & VMCI_QPFLAG_NONBLOCK)
+		return false;
+
+	qpair->blocked++;
+	vmci_drv_wait_on_event_intr(&qpair->event, qp_release_mutex_cb,
+				    qpair);
+	qp_lock(qpair);
+	return true;
+}
+
+/*
+ * Enqueues a given buffer to the produce queue using the provided
+ * function. As many bytes as possible (space available in the queue)
+ * are enqueued.  Assumes the queue->mutex has been acquired.  Returns
+ * VMCI_ERROR_QUEUEPAIR_NOSPACE if no space was available to enqueue
+ * data, VMCI_ERROR_INVALID_SIZE, if any queue pointer is outside the
+ * queue (as defined by the queue size), VMCI_ERROR_INVALID_ARGS, if
+ * an error occurred when accessing the buffer,
+ * VMCI_ERROR_QUEUEPAIR_NOTATTACHED, if the queue pair pages aren't
+ * available.  Otherwise, the number of bytes written to the queue is
+ * returned.  Updates the tail pointer of the produce queue.
+ */
+static ssize_t qp_enqueue_locked(struct vmci_queue *produceQ,
+				 struct vmci_queue *consumeQ,
+				 const uint64_t produceQSize,
+				 const void *buf,
+				 size_t bufSize,
+				 VMCIMemcpyToQueueFunc memcpyToQueue,
+				 bool canBlock)
+{
+	int64_t freeSpace;
+	uint64_t tail;
+	size_t written;
+	ssize_t result;
+
+	result = qp_map_queue_headers(produceQ, consumeQ, canBlock);
+	if (unlikely(result != VMCI_SUCCESS))
+		return result;
+
+	freeSpace = vmci_q_header_free_space(produceQ->qHeader,
+					     consumeQ->qHeader, produceQSize);
+	if (freeSpace == 0)
+		return VMCI_ERROR_QUEUEPAIR_NOSPACE;
+
+	if (freeSpace < VMCI_SUCCESS)
+		return (ssize_t) freeSpace;
+
+	written = (size_t) (freeSpace > bufSize ? bufSize : freeSpace);
+	tail = vmci_q_header_producer_tail(produceQ->qHeader);
+	if (likely(tail + written < produceQSize)) {
+		result = memcpyToQueue(produceQ, tail, buf, 0, written);
+	} else {
+		/* Tail pointer wraps around. */
+
+		const size_t tmp = (size_t) (produceQSize - tail);
+
+		result = memcpyToQueue(produceQ, tail, buf, 0, tmp);
+		if (result >= VMCI_SUCCESS)
+			result = memcpyToQueue(produceQ, 0, buf, tmp,
+					       written - tmp);
+	}
+
+	if (result < VMCI_SUCCESS)
+		return result;
+
+	vmci_q_header_add_producer_tail(produceQ->qHeader, written,
+					produceQSize);
+	return written;
+}
+
+/*
+ * Dequeues data (if available) from the given consume queue. Writes data
+ * to the user provided buffer using the provided function.
+ * Assumes the queue->mutex has been acquired.
+ * Results:
+ * VMCI_ERROR_QUEUEPAIR_NODATA if no data was available to dequeue.
+ * VMCI_ERROR_INVALID_SIZE, if any queue pointer is outside the queue
+ * (as defined by the queue size).
+ * VMCI_ERROR_INVALID_ARGS, if an error occurred when accessing the buffer.
+ * Otherwise the number of bytes dequeued is returned.
+ * Side effects:
+ * Updates the head pointer of the consume queue.
+ */
+static ssize_t qp_dequeue_locked(struct vmci_queue *produceQ,
+				 struct vmci_queue *consumeQ,
+				 const uint64_t consumeQSize,
+				 void *buf,
+				 size_t bufSize,
+				 VMCIMemcpyFromQueueFunc memcpyFromQueue,
+				 bool updateConsumer,
+				 bool canBlock)
+{
+	int64_t bufReady;
+	uint64_t head;
+	size_t read;
+	ssize_t result;
+
+	result = qp_map_queue_headers(produceQ, consumeQ, canBlock);
+	if (unlikely(result != VMCI_SUCCESS))
+		return result;
+
+	bufReady = vmci_q_header_buf_ready(consumeQ->qHeader,
+					   produceQ->qHeader, consumeQSize);
+	if (bufReady == 0)
+		return VMCI_ERROR_QUEUEPAIR_NODATA;
+
+	if (bufReady < VMCI_SUCCESS)
+		return (ssize_t) bufReady;
+
+	read = (size_t) (bufReady > bufSize ? bufSize : bufReady);
+	head = vmci_q_header_consumer_head(produceQ->qHeader);
+	if (likely(head + read < consumeQSize)) {
+		result = memcpyFromQueue(buf, 0, consumeQ, head, read);
+	} else {
+		/* Head pointer wraps around. */
+
+		const size_t tmp = (size_t) (consumeQSize - head);
+
+		result = memcpyFromQueue(buf, 0, consumeQ, head, tmp);
+		if (result >= VMCI_SUCCESS) {
+			result = memcpyFromQueue(buf, tmp, consumeQ, 0,
+						 read - tmp);
+		}
+	}
+
+	if (result < VMCI_SUCCESS)
+		return result;
+
+	if (updateConsumer)
+		vmci_q_header_add_consumer_head(produceQ->qHeader,
+						read, consumeQSize);
+
+	return read;
+}
+
+/**
+ * VMCIQPair_Alloc() - Allocates a queue pair.
+ * @qpair:	Pointer for the new vmci_qp struct.
+ * @handle:	Handle to track the resource.
+ * @produceQSize:	Desired size of the producer queue.
+ * @consumeQSize:	Desired size of the consumer queue.
+ * @peer:	ContextID of the peer.
+ * @flags:	VMCI flags.
+ * @privFlags:	VMCI privilege flags.
+ *
+ * This is the client interface for allocating the memory for a
+ * vmci_qp structure and then attaching to the underlying
+ * queue.  If an error occurs allocating the memory for the
+ * vmci_qp structure no attempt is made to attach.  If an
+ * error occurs attaching, then the structure is freed.
+ */
+int VMCIQPair_Alloc(struct vmci_qp **qpair,
+		    struct vmci_handle *handle,
+		    uint64_t produceQSize,
+		    uint64_t consumeQSize,
+		    uint32_t peer,
+		    uint32_t flags,
+		    uint32_t privFlags)
+{
+	struct vmci_qp *myQPair;
+	int retval;
+	struct vmci_handle src = VMCI_INVALID_HANDLE;
+	struct vmci_handle dst = vmci_make_handle(peer, VMCI_INVALID_ID);
+	enum vmci_route route;
+	VMCIEventReleaseCB wakeupCB;
+	void *clientData;
+
+	/*
+	 * Restrict the size of a queuepair.  The device already
+	 * enforces a limit on the total amount of memory that can be
+	 * allocated to queuepairs for a guest.  However, we try to
+	 * allocate this memory before we make the queuepair
+	 * allocation hypercall.  On Linux, we allocate each page
+	 * separately, which means rather than fail, the guest will
+	 * thrash while it tries to allocate, and will become
+	 * increasingly unresponsive to the point where it appears to
+	 * be hung.  So we place a limit on the size of an individual
+	 * queuepair here, and leave the device to enforce the
+	 * restriction on total queuepair memory.  (Note that this
+	 * doesn't prevent all cases; a user with only this much
+	 * physical memory could still get into trouble.)  The error
+	 * used by the device is NO_RESOURCES, so use that here too.
+	 */
+
+	if (produceQSize + consumeQSize < max(produceQSize, consumeQSize) ||
+	    produceQSize + consumeQSize > VMCI_MAX_GUEST_QP_MEMORY)
+		return VMCI_ERROR_NO_RESOURCES;
+
+	retval = vmci_route(&src, &dst, false, &route);
+	if (retval < VMCI_SUCCESS)
+		route = vmci_guest_code_active() ?
+			VMCI_ROUTE_AS_GUEST : VMCI_ROUTE_AS_HOST;
+
+	/* If NONBLOCK or PINNED is set, we better be the guest personality. */
+	if ((!CAN_BLOCK(flags) || QP_PINNED(flags)) &&
+	    VMCI_ROUTE_AS_GUEST != route) {
+		pr_devel("Not guest personality w/ NONBLOCK OR PINNED set");
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	/*
+	 * Limit the size of pinned QPs and check sanity.
+	 *
+	 * Pinning pages implies non-blocking mode.  Mutexes aren't acquired
+	 * when the NONBLOCK flag is set in qpair code, and they should not be
+	 * acquired when the PINNED flag is set either.  Since pinning pages
+	 * implies we want speed, it makes no sense not to have NONBLOCK
+	 * set if PINNED is set.  Hence enforce this implication.
+	 */
+	if (QP_PINNED(flags)) {
+		if (CAN_BLOCK(flags)) {
+			pr_err("Attempted to enable pinning w/o non-blocking");
+			return VMCI_ERROR_INVALID_ARGS;
+		}
+
+		if (produceQSize + consumeQSize > VMCI_MAX_PINNED_QP_MEMORY)
+			return VMCI_ERROR_NO_RESOURCES;
+	}
+
+	myQPair = kzalloc(sizeof *myQPair, GFP_KERNEL);
+	if (!myQPair)
+		return VMCI_ERROR_NO_MEM;
+
+	myQPair->produceQSize = produceQSize;
+	myQPair->consumeQSize = consumeQSize;
+	myQPair->peer = peer;
+	myQPair->flags = flags;
+	myQPair->privFlags = privFlags;
+
+	wakeupCB = clientData = NULL;
+	if (VMCI_ROUTE_AS_HOST == route) {
+		myQPair->guestEndpoint = false;
+		if (!(flags & VMCI_QPFLAG_LOCAL)) {
+			myQPair->blocked = 0;
+			init_waitqueue_head(&myQPair->event);
+			wakeupCB = qp_wakeup_cb;
+			clientData = (void *)myQPair;
+		}
+	} else {
+		myQPair->guestEndpoint = true;
+	}
+
+	retval = vmci_qp_alloc(handle,
+			       &myQPair->produceQ,
+			       myQPair->produceQSize,
+			       &myQPair->consumeQ,
+			       myQPair->consumeQSize,
+			       myQPair->peer,
+			       myQPair->flags,
+			       myQPair->privFlags,
+			       myQPair->guestEndpoint,
+			       wakeupCB, clientData);
+
+	if (retval < VMCI_SUCCESS) {
+		kfree(myQPair);
+		return retval;
+	}
+
+	*qpair = myQPair;
+	myQPair->handle = *handle;
+
+	return retval;
+}
+EXPORT_SYMBOL(VMCIQPair_Alloc);
+
+/**
+ * VMCIQPair_Detach() - Detaches the client from a queue pair.
+ * @qpair:	Reference of a pointer to the qpair struct.
+ *
+ * This is the client interface for detaching from a VMCIQPair.
+ * Note that this routine will free the memory allocated for the
+ * vmci_qp structure too.
+ */
+int VMCIQPair_Detach(struct vmci_qp **qpair)
+{
+	int result;
+	struct vmci_qp *oldQPair;
+
+	if (!qpair || !(*qpair))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	oldQPair = *qpair;
+	result = qp_detatch(oldQPair->handle, oldQPair->guestEndpoint);
+
+	/*
+	 * The guest can fail to detach for a number of reasons, and
+ * if it does so, it will clean up the entry (if there is one).
+ * The host can fail too, but it won't clean up the entry
+	 * immediately, it will do that later when the context is
+	 * freed.  Either way, we need to release the qpair struct
+	 * here; there isn't much the caller can do, and we don't want
+	 * to leak.
+	 */
+
+	memset(oldQPair, 0, sizeof *oldQPair);
+	oldQPair->handle = VMCI_INVALID_HANDLE;
+	oldQPair->peer = VMCI_INVALID_ID;
+	kfree(oldQPair);
+	*qpair = NULL;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Detach);
+
+/**
+ * VMCIQPair_GetProduceIndexes() - Retrieves the indexes of the producer.
+ * @qpair:	Pointer to the queue pair struct.
+ * @producerTail:	Reference used for storing producer tail index.
+ * @consumerHead:	Reference used for storing the consumer head index.
+ *
+ * This is the client interface for getting the current indexes of the
+ * QPair from the point of view of the caller as the producer.
+ */
+int VMCIQPair_GetProduceIndexes(const struct vmci_qp *qpair,
+				uint64_t *producerTail,
+				uint64_t *consumerHead)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS)
+		vmci_q_header_get_pointers(produceQHeader, consumeQHeader,
+					   producerTail, consumerHead);
+	qp_unlock(qpair);
+
+	if (result == VMCI_SUCCESS &&
+	    ((producerTail && *producerTail >= qpair->produceQSize) ||
+	     (consumerHead && *consumerHead >= qpair->produceQSize)))
+		return VMCI_ERROR_INVALID_SIZE;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_GetProduceIndexes);
+
+/**
+ * VMCIQPair_GetConsumeIndexes() - Retrieves the indexes of the consumer.
+ * @qpair:	Pointer to the queue pair struct.
+ * @consumerTail:	Reference used for storing consumer tail index.
+ * @producerHead:	Reference used for storing the producer head index.
+ *
+ * This is the client interface for getting the current indexes of the
+ * QPair from the point of view of the caller as the consumer.
+ */
+int VMCIQPair_GetConsumeIndexes(const struct vmci_qp *qpair,
+				uint64_t *consumerTail,
+				uint64_t *producerHead)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS)
+		vmci_q_header_get_pointers(consumeQHeader, produceQHeader,
+					   consumerTail, producerHead);
+	qp_unlock(qpair);
+
+	if (result == VMCI_SUCCESS &&
+	    ((consumerTail && *consumerTail >= qpair->consumeQSize) ||
+	     (producerHead && *producerHead >= qpair->consumeQSize)))
+		return VMCI_ERROR_INVALID_SIZE;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_GetConsumeIndexes);
+
+/**
+ * VMCIQPair_ProduceFreeSpace() - Retrieves free space in producer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of free
+ * space in the QPair from the point of view of the caller as
+ * the producer, which is the common case.  Returns < 0 on error,
+ * otherwise the number of bytes available for enqueueing data.
+ */
+int64_t VMCIQPair_ProduceFreeSpace(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_free_space(produceQHeader,
+						  consumeQHeader,
+						  qpair->produceQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ProduceFreeSpace);
+
+/**
+ * VMCIQPair_ConsumeFreeSpace() - Retrieves free space in consumer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of free
+ * space in the QPair from the point of view of the caller as
+ * the consumer, which is not the common case.  Returns < 0 on error,
+ * otherwise the number of bytes of free space in the consume queue.
+ */
+int64_t VMCIQPair_ConsumeFreeSpace(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_free_space(consumeQHeader,
+						  produceQHeader,
+						  qpair->consumeQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ConsumeFreeSpace);
+
+/**
+ * VMCIQPair_ProduceBufReady() - Gets bytes ready to read from producer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of
+ * enqueued data in the QPair from the point of view of the
+ * caller as the producer, which is not the common case.  Returns < 0 on
+ * error, otherwise the number of bytes that may be read.
+ */
+int64_t VMCIQPair_ProduceBufReady(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_buf_ready(produceQHeader,
+						 consumeQHeader,
+						 qpair->produceQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ProduceBufReady);
+
+/**
+ * VMCIQPair_ConsumeBufReady() - Gets bytes ready to read from consumer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of
+ * enqueued data in the QPair from the point of view of the
+ * caller as the consumer, which is the normal case.  Returns < 0 on
+ * error, otherwise the number of bytes that may be read.
+ */
+int64_t VMCIQPair_ConsumeBufReady(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_buf_ready(consumeQHeader,
+						 produceQHeader,
+						 qpair->consumeQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ConsumeBufReady);
+
+/**
+ * VMCIQPair_Enqueue() - Throw data on the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer containing data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for enqueueing data into the queue.
+ * Returns number of bytes enqueued or < 0 on error.
+ */
+ssize_t VMCIQPair_Enqueue(struct vmci_qp *qpair,
+			  const void *buf,
+			  size_t bufSize,
+			  int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_enqueue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->produceQSize,
+					   buf, bufSize,
+					   qp_memcpy_to_queue,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Enqueue);
+
+/**
+ * VMCIQPair_Dequeue() - Get data from the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer for the data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for dequeueing data from the queue.
+ * Returns number of bytes dequeued or < 0 on error.
+ */
+ssize_t VMCIQPair_Dequeue(struct vmci_qp *qpair,
+			  void *buf,
+			  size_t bufSize,
+			  int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   buf, bufSize,
+					   qp_memcpy_from_queue, true,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Dequeue);
+
+/**
+ * VMCIQPair_Peek() - Peek at the data in the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer for the data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused on Linux).
+ *
+ * This is the client interface for peeking into a queue.  (I.e.,
+ * copy data from the queue without updating the head pointer.)
+ * Returns number of bytes dequeued or < 0 on error.
+ */
+ssize_t VMCIQPair_Peek(struct vmci_qp *qpair,
+		       void *buf,
+		       size_t bufSize,
+		       int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   buf, bufSize,
+					   qp_memcpy_from_queue, false,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Peek);
+
+/**
+ * VMCIQPair_EnqueueV() - Throw data on the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer containing data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for enqueueing data into the queue.
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes enqueued or < 0 on error.
+ */
+ssize_t VMCIQPair_EnqueueV(struct vmci_qp *qpair,
+			   void *iov,
+			   size_t iovSize,
+			   int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_enqueue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->produceQSize,
+					   iov, iovSize,
+					   qp_memcpy_to_queue_iov,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_EnqueueV);
+
+
+/**
+ * VMCIQPair_DequeueV() - Get data from the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer for the data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for dequeueing data from the queue.
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes dequeued or < 0 on error.
+ */
+ssize_t VMCIQPair_DequeueV(struct vmci_qp *qpair,
+			   void *iov,
+			   size_t iovSize,
+			   int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   iov, iovSize,
+					   qp_memcpy_from_queue_iov,
+					   true, CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_DequeueV);
+
+/**
+ * VMCIQPair_PeekV() - Peek at the data in the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer for the data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused on Linux).
+ *
+ * This is the client interface for peeking into a queue.  (I.e.,
+ * copy data from the queue without updating the head pointer.)
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes peeked or < 0 on error.
+ */
+ssize_t VMCIQPair_PeekV(struct vmci_qp *qpair,
+			void *iov,
+			size_t iovSize,
+			int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   iov, iovSize,
+					   qp_memcpy_from_queue_iov,
+					   false, CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_PeekV);
diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.h b/drivers/misc/vmw_vmci/vmci_queue_pair.h
new file mode 100644
index 0000000..b4f39e4
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.h
@@ -0,0 +1,182 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_QUEUE_PAIR_H_
+#define _VMCI_QUEUE_PAIR_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_context.h"
+
+/* Callback needed for correctly waiting on events. */
+typedef int (*VMCIEventReleaseCB) (void *clientData);
+
+/* Guest device port I/O. */
+struct PPNSet {
+	uint64_t numProducePages;
+	uint64_t numConsumePages;
+	uint32_t *producePPNs;
+	uint32_t *consumePPNs;
+	bool initialized;
+};
+
+
+/* VMCIQueuePairAllocInfo */
+struct vmci_qp_alloc_info {
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t ppnVA;		/* Start VA of queue pair PPNs. */
+	uint64_t numPPNs;
+	int32_t result;
+	uint32_t version;
+};
+
+/* VMCIQueuePairSetVAInfo */
+struct vmci_qp_set_va_info {
+	struct vmci_handle handle;
+	uint64_t va;		/* Start VA of queue pair PPNs. */
+	uint64_t numPPNs;
+	uint32_t version;
+	int32_t result;
+};
+
+/*
+ * For backwards compatibility, here is a version of the
+ * VMCIQueuePairPageFileInfo from before support for host end-points was added.
+ * Note that the current version of that structure requires VMX to
+ * pass down the VA of the mapped file.  Before host support was added
+ * there was nothing of the sort.  So, when the driver sees the ioctl
+ * with a parameter that is the sizeof
+ * VMCIQueuePairPageFileInfo_NoHostQP then it can infer that the version
+ * of VMX running can't attach to host end points because it doesn't
+ * provide the VA of the mapped files.
+ *
+ * The Linux driver doesn't get an indication of the size of the
+ * structure passed down from user space.  So, to fix a long standing
+ * but unfiled bug, the _pad field has been renamed to version.
+ * Existing versions of VMX always initialize the PageFileInfo
+ * structure so that _pad, er, version is set to 0.
+ *
+ * A version value of 1 indicates that the size of the structure has
+ * been increased to include two UVA's: produceUVA and consumeUVA.
+ * These UVA's are of the mmap()'d queue contents backing files.
+ *
+ * In addition, if when VMX is sending down the
+ * VMCIQueuePairPageFileInfo structure it gets an error then it will
+ * try again with the _NoHostQP version of the file to see if an older
+ * VMCI kernel module is running.
+ */
+
+/* VMCIQueuePairPageFileInfo */
+struct vmci_qp_page_file_info {
+	struct vmci_handle handle;
+	uint64_t producePageFile;	/* User VA. */
+	uint64_t consumePageFile;	/* User VA. */
+	uint64_t producePageFileSize;	/* Size of the file name array. */
+	uint64_t consumePageFileSize;	/* Size of the file name array. */
+	int32_t result;
+	uint32_t version;	/* Was _pad. */
+	uint64_t produceVA;	/* User VA of the mapped file. */
+	uint64_t consumeVA;	/* User VA of the mapped file. */
+};
+
+/* VMCIQueuePairDetachInfo */
+struct vmci_qp_dtch_info {
+	struct vmci_handle handle;
+	int32_t result;
+	uint32_t _pad;
+};
+
+/*
+ * struct vmci_qp_page_store describes how the memory of a given queue pair
+ * is backed. When the queue pair is between the host and a guest, the
+ * page store consists of references to the guest pages. On vmkernel,
+ * this is a list of PPNs, and on hosted, it is a user VA where the
+ * queue pair is mapped into the VMX address space.
+ */
+struct vmci_qp_page_store {
+	/* Reference to pages backing the queue pair. */
+	uint64_t pages;
+	/* Length of pageList/virtual address range (in pages). */
+	uint32_t len;
+};
+
+/*
+ * This data type contains the information about a queue.
+ * There are two queues (hence, queue pairs) per transaction model between a
+ * pair of end points, A & B.  One queue is used by end point A to transmit
+ * commands and responses to B.  The other queue is used by B to transmit
+ * commands and responses.
+ *
+ * struct vmci_queue_kern_if is a per-OS defined Queue structure.  It contains
+ * either a direct pointer to the linear address of the buffer contents or a
+ * pointer to structures which help the OS locate those data pages.  See
+ * vmciKernelIf.c for each platform for its definition.
+ */
+struct vmci_queue {
+	struct vmci_queue_header *qHeader;
+	struct vmci_queue_header *savedHeader;
+	struct vmci_queue_kern_if *kernelIf;
+};
+
+/*
+ * Utility function that checks whether the fields of the page
+ * store contain valid values.
+ * Result:
+ * true if the page store is well-formed, false otherwise.
+ */
+static inline bool
+VMCI_QP_PAGESTORE_IS_WELLFORMED(struct vmci_qp_page_store *pageStore)
+{
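+	/* A valid page store covers at least the two queue header pages. */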
+	return pageStore->len >= 2;
+}
+
+
+
+int vmci_qp_broker_init(void);
+void vmci_qp_broker_exit(void);
+int vmci_qp_broker_alloc(struct vmci_handle handle, uint32_t peer,
+			 uint32_t flags, uint32_t privFlags,
+			 uint64_t produceSize, uint64_t consumeSize,
+			 struct vmci_qp_page_store *pageStore,
+			 struct vmci_ctx *context);
+int vmci_qp_broker_set_page_store(struct vmci_handle handle,
+				  uint64_t produceUVA, uint64_t consumeUVA,
+				  struct vmci_ctx *context);
+int vmci_qp_broker_detach(struct vmci_handle handle,
+			  struct vmci_ctx *context);
+
+int vmci_qp_guest_endpoints_init(void);
+void vmci_qp_guest_endpoints_exit(void);
+
+int vmci_qp_alloc(struct vmci_handle *handle,
+		  struct vmci_queue **produceQ, uint64_t produceSize,
+		  struct vmci_queue **consumeQ, uint64_t consumeSize,
+		  uint32_t peer, uint32_t flags, uint32_t privFlags,
+		  bool guestEndpoint, VMCIEventReleaseCB wakeupCB,
+		  void *clientData);
+int vmci_qp_broker_map(struct vmci_handle handle,
+		       struct vmci_ctx *context, uint64_t guestMem);
+int vmci_qp_broker_unmap(struct vmci_handle handle,
+			 struct vmci_ctx *context, uint32_t gid);
+
+#endif /* _VMCI_QUEUE_PAIR_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 08/11] Apply VMCI queue pairs
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, Andrew Stiegmann (stieg), cschamp, gregkh

VMCI queue pairs allow for bi-directional ordered communication between
host and guests.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
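A minimal usage sketch of the exported VMCIQPair API follows; peer_cid is a
placeholder for the peer's context ID, and the sizes and error handling are
purely illustrative:

  struct vmci_qp *qpair;
  struct vmci_handle handle = VMCI_INVALID_HANDLE;
  char out[] = "ping", in[16];
  ssize_t n;
  int err;

  /* Attach to (or create) the pair; flags 0 gives the default blocking mode. */
  err = VMCIQPair_Alloc(&qpair, &handle, 64 * 1024, 64 * 1024,
                        peer_cid, 0 /* flags */, 0 /* privFlags */);
  if (err < VMCI_SUCCESS)
          return err;

  /* Send a request to the peer; bufType is unused on Linux. */
  n = VMCIQPair_Enqueue(qpair, out, sizeof(out), 0);

  /*
   * Read whatever the peer has produced so far; returns
   * VMCI_ERROR_QUEUEPAIR_NODATA if its queue is still empty.
   */
  n = VMCIQPair_Dequeue(qpair, in, sizeof(in), 0);

  VMCIQPair_Detach(&qpair);
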
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 3548 +++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_queue_pair.h |  182 ++
 2 files changed, 3730 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_queue_pair.h

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
new file mode 100644
index 0000000..11d111b
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -0,0 +1,3548 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/device-mapper.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/semaphore.h>
+#include <linux/socket.h>
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_datagram.h"
+#include "vmci_driver.h"
+#include "vmci_event.h"
+#include "vmci_handle_array.h"
+#include "vmci_hash_table.h"
+#include "vmci_queue_pair.h"
+#include "vmci_resource.h"
+#include "vmci_route.h"
+
+/*
+ * In the following, we will distinguish between two kinds of VMX processes -
+ * the ones with versions lower than VMCI_VERSION_NOVMVM that use specialized
+ * VMCI page files in the VMX and supporting VM to VM communication and the
+ * newer ones that use the guest memory directly. We will in the following
+ * refer to the older VMX versions as old-style VMX'en, and the newer ones as
+ * new-style VMX'en.
+ *
+ * The state transition diagram is as follows (the VMCIQPB_ prefix has been
+ * removed for readability) - see below for more details on the transitions:
+ *
+ *            --------------  NEW  -------------
+ *            |                                |
+ *           \_/                              \_/
+ *     CREATED_NO_MEM <-----------------> CREATED_MEM
+ *            |    |                           |
+ *            |    o-----------------------o   |
+ *            |                            |   |
+ *           \_/                          \_/ \_/
+ *     ATTACHED_NO_MEM <----------------> ATTACHED_MEM
+ *            |                            |   |
+ *            |     o----------------------o   |
+ *            |     |                          |
+ *           \_/   \_/                        \_/
+ *     SHUTDOWN_NO_MEM <----------------> SHUTDOWN_MEM
+ *            |                                |
+ *            |                                |
+ *            -------------> gone <-------------
+ *
+ * In more detail. When a VMCI queue pair is first created, it will be in the
+ * VMCIQPB_NEW state. It will then move into one of the following states:
+ *
+ * - VMCIQPB_CREATED_NO_MEM: this state indicates that either:
+ *
+ *     - the create was performed by a host endpoint, in which case there is
+ *       no backing memory yet.
+ *
+ *     - the create was initiated by an old-style VMX, that uses
+ *       vmci_qp_broker_set_page_store to specify the UVAs of the queue pair at
+ *       a later point in time. This state can be distinguished from the one
+ *       above by the context ID of the creator. A host side is not allowed to
+ *       attach until the page store has been set.
+ *
+ * - VMCIQPB_CREATED_MEM: this state is the result when the queue pair
+ *     is created by a VMX using the queue pair device backend that
+ *     sets the UVAs of the queue pair immediately and stores the
+ *     information for later attachers. At this point, it is ready for
+ *     the host side to attach to it.
+ *
+ * Once the queue pair is in one of the created states (with the exception of
+ * the case mentioned for older VMX'en above), it is possible to attach to the
+ * queue pair. Again we have two new states possible:
+ *
+ * - VMCIQPB_ATTACHED_MEM: this state can be reached through the following
+ *   paths:
+ *
+ *     - from VMCIQPB_CREATED_NO_MEM when a new-style VMX allocates a queue
+ *       pair, and attaches to a queue pair previously created by the host side.
+ *
+ *     - from VMCIQPB_CREATED_MEM when the host side attaches to a queue pair
+ *       already created by a guest.
+ *
+ *     - from VMCIQPB_ATTACHED_NO_MEM, when an old-style VMX calls
+ *       vmci_qp_broker_set_page_store (see below).
+ *
+ * - VMCIQPB_ATTACHED_NO_MEM: If the queue pair already was in the
+ *     VMCIQPB_CREATED_NO_MEM due to a host side create, an old-style VMX will
+ *     bring the queue pair into this state. Once vmci_qp_broker_set_page_store
+ *     is called to register the user memory, the VMCIQPB_ATTACH_MEM state
+ *     will be entered.
+ *
+ * From the attached queue pair, the queue pair can enter the shutdown states
+ * when either side of the queue pair detaches. If the guest side detaches
+ * first, the queue pair will enter the VMCIQPB_SHUTDOWN_NO_MEM state, where
+ * the content of the queue pair will no longer be available. If the host
+ * side detaches first, the queue pair will either enter the
+ * VMCIQPB_SHUTDOWN_MEM, if the guest memory is currently mapped, or
+ * VMCIQPB_SHUTDOWN_NO_MEM, if the guest memory is not mapped
+ * (e.g., the host detaches while a guest is stunned).
+ *
+ * New-style VMX'en will also unmap guest memory, if the guest is
+ * quiesced, e.g., during a snapshot operation. In that case, the guest
+ * memory will no longer be available, and the queue pair will transition from
+ * *_MEM state to a *_NO_MEM state. The VMX may later map the memory once more,
+ * in which case the queue pair will transition from the *_NO_MEM state at that
+ * point back to the *_MEM state. Note that the *_NO_MEM state may have changed,
+ * since the peer may have either attached or detached in the meantime. The
+ * values are laid out such that ++ on a state will move from a *_NO_MEM to a
+ * *_MEM state, and vice versa.
+ */
+
+/*
+ * VMCIMemcpy{To,From}QueueFunc() prototypes.  Functions of these
+ * types are passed around to enqueue and dequeue routines.  Note that
+ * often the functions passed are simply wrappers around memcpy
+ * itself.
+ *
+ * Note: In order for the memcpy typedefs to be compatible with the VMKernel,
+ * there's an unused last parameter for the hosted side.  In
+ * ESX, that parameter holds a buffer type.
+ */
+typedef int VMCIMemcpyToQueueFunc(struct vmci_queue *queue,
+				  uint64_t queueOffset, const void *src,
+				  size_t srcOffset, size_t size);
+typedef int VMCIMemcpyFromQueueFunc(void *dest, size_t destOffset,
+				    const struct vmci_queue *queue,
+				    uint64_t queueOffset, size_t size);
+
+/* The Kernel specific component of the struct vmci_queue structure. */
+struct vmci_queue_kern_if {
+	struct page **page;
+	struct page **headerPage;
+	void *va;
+	struct semaphore __mutex;
+	struct semaphore *mutex;
+	bool host;
+	size_t numPages;
+	bool mapped;
+};
+
+/*
+ * This structure is opaque to the clients.
+ */
+struct vmci_qp {
+	struct vmci_handle handle;
+	struct vmci_queue *produceQ;
+	struct vmci_queue *consumeQ;
+	uint64_t produceQSize;
+	uint64_t consumeQSize;
+	uint32_t peer;
+	uint32_t flags;
+	uint32_t privFlags;
+	bool guestEndpoint;
+	uint32_t blocked;
+	wait_queue_head_t event;
+};
+
+enum qp_broker_state {
+	VMCIQPB_NEW,
+	VMCIQPB_CREATED_NO_MEM,
+	VMCIQPB_CREATED_MEM,
+	VMCIQPB_ATTACHED_NO_MEM,
+	VMCIQPB_ATTACHED_MEM,
+	VMCIQPB_SHUTDOWN_NO_MEM,
+	VMCIQPB_SHUTDOWN_MEM,
+	VMCIQPB_GONE
+};
+
+#define QPBROKERSTATE_HAS_MEM(_qpb) (_qpb->state == VMCIQPB_CREATED_MEM || \
+				     _qpb->state == VMCIQPB_ATTACHED_MEM || \
+				     _qpb->state == VMCIQPB_SHUTDOWN_MEM)
+
+/*
+ * In the queue pair broker, we always use the guest point of view for
+ * the produce and consume queue values and references, e.g., the
+ * produce queue size stored is the guests produce queue size. The
+ * host endpoint will need to swap these around. The only exception is
+ * the local queue pairs on the host, in which case the host endpoint
+ * that creates the queue pair will have the right orientation, and
+ * the attaching host endpoint will need to swap.
+ */
+struct qp_entry {
+	struct list_head listItem;
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint32_t refCount;
+};
+
+struct qp_broker_entry {
+	struct qp_entry qp;
+	uint32_t createId;
+	uint32_t attachId;
+	enum qp_broker_state state;
+	bool requireTrustedAttach;
+	bool createdByTrusted;
+	bool vmciPageFiles; /* Created by VMX using VMCI page files */
+	struct vmci_queue *produceQ;
+	struct vmci_queue *consumeQ;
+	struct vmci_queue_header savedProduceQ;
+	struct vmci_queue_header savedConsumeQ;
+	VMCIEventReleaseCB wakeupCB;
+	void *clientData;
+	void *localMem;	 /* Kernel memory for local queue pair */
+};
+
+struct qp_guest_endpoint {
+	struct qp_entry qp;
+	uint64_t numPPNs;
+	void *produceQ;
+	void *consumeQ;
+	struct PPNSet ppnSet;
+};
+
+struct qp_list {
+	struct list_head head;
+	struct semaphore mutex;
+};
+
+static struct qp_list qpBrokerList;
+static struct qp_list qpGuestEndpoints;
+
+#define INVALID_VMCI_GUEST_MEM_ID  0
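+/* Total pages for a queue pair entry: data pages plus the two queue headers. */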
+#define QPE_NUM_PAGES(_QPE) ((uint32_t)					\
+			     (dm_div_up(_QPE.produceSize, PAGE_SIZE) +	\
+			      dm_div_up(_QPE.consumeSize, PAGE_SIZE) + 2))
+
+/*
+ * Frees kernel VA space for a given queue and its queue header, and
+ * frees physical data pages.
+ */
+static void qp_free_queue(void *q,
+			  uint64_t size)
+{
+	struct vmci_queue *queue = q;
+
+	if (queue) {
+		uint64_t i = dm_div_up(size, PAGE_SIZE);
+
+		if (queue->kernelIf->mapped) {
+			ASSERT(queue->kernelIf->va);
+			vunmap(queue->kernelIf->va);
+			queue->kernelIf->va = NULL;
+		}
+
+		while (i)
+			__free_page(queue->kernelIf->page[--i]);
+
+		vfree(queue->qHeader);
+	}
+}
+
+
+/*
+ * Allocates kernel VA space of specified size, plus space for the
+ * queue structure/kernel interface and the queue header.  Allocates
+ * physical pages for the queue data pages.
+ *
+ * PAGE m:      struct vmci_queue_header (struct vmci_queue->qHeader)
+ * PAGE m+1:    struct vmci_queue
+ * PAGE m+1+q:  struct vmci_queue_kern_if (struct vmci_queue->kernelIf)
+ * PAGE n-size: Data pages (struct vmci_queue->kernelIf->page[])
+ */
+static void *qp_alloc_queue(uint64_t size,
+			    uint32_t flags)
+{
+	uint64_t i;
+	struct vmci_queue *queue;
+	struct vmci_queue_header *qHeader;
+	const uint64_t numDataPages = dm_div_up(size, PAGE_SIZE);
+	const uint queueSize =
+		PAGE_SIZE +
+		sizeof(*queue) + sizeof(*(queue->kernelIf)) +
+		numDataPages * sizeof(*(queue->kernelIf->page));
+
+	ASSERT(size <= VMCI_MAX_GUEST_QP_MEMORY);
+	ASSERT(!QP_PINNED(flags) || size <= VMCI_MAX_PINNED_QP_MEMORY);
+
+	qHeader = vmalloc(queueSize);
+	if (!qHeader)
+		return NULL;
+
+	queue = (struct vmci_queue *)((uint8_t *) qHeader + PAGE_SIZE);
+	queue->qHeader = qHeader;
+	queue->savedHeader = NULL;
+	queue->kernelIf = (struct vmci_queue_kern_if *)((uint8_t *) queue +
+							sizeof(*queue));
+	queue->kernelIf->headerPage = NULL;	/* Unused in guest. */
+	queue->kernelIf->page =
+		(struct page **)((uint8_t *) queue->kernelIf +
+				 sizeof(*(queue->kernelIf)));
+	queue->kernelIf->host = false;
+	queue->kernelIf->va = NULL;
+	queue->kernelIf->mapped = false;
+
+	for (i = 0; i < numDataPages; i++) {
+		queue->kernelIf->page[i] = alloc_pages(GFP_KERNEL, 0);
+		if (!queue->kernelIf->page[i])
+			goto fail;
+	}
+
+	if (QP_PINNED(flags)) {
+		queue->kernelIf->va = vmap(queue->kernelIf->page, numDataPages,
+					   VM_MAP, PAGE_KERNEL);
+		if (!queue->kernelIf->va)
+			goto fail;
+
+		queue->kernelIf->mapped = true;
+	}
+
+	return (void *)queue;
+
+fail:
+	qp_free_queue(queue, i * PAGE_SIZE);
+	return NULL;
+}
+
+/*
+ * Copies from a given buffer or iovector to a VMCI Queue.  Uses
+ * kmap()/kunmap() to dynamically map/unmap required portions of the queue
+ * by traversing the offset -> page translation structure for the queue.
+ * Assumes that offset + size does not wrap around in the queue.
+ */
+static int __qp_memcpy_to_queue(struct vmci_queue *queue,
+				uint64_t queueOffset,
+				const void *src,
+				size_t size,
+				bool isIovec)
+{
+	struct vmci_queue_kern_if *kernelIf = queue->kernelIf;
+	size_t bytesCopied = 0;
+
+	while (bytesCopied < size) {
+		uint64_t pageIndex = (queueOffset + bytesCopied) / PAGE_SIZE;
+		size_t pageOffset =
+			(queueOffset + bytesCopied) & (PAGE_SIZE - 1);
+		void *va;
+		size_t toCopy;
+
+		if (!kernelIf->mapped)
+			va = kmap(kernelIf->page[pageIndex]);
+		else
+			va = (void *)((uint8_t *)kernelIf->va +
+				      (pageIndex * PAGE_SIZE));
+
+		if (size - bytesCopied > PAGE_SIZE - pageOffset) {
+			/* Enough payload to fill up from this page. */
+			toCopy = PAGE_SIZE - pageOffset;
+		} else {
+			toCopy = size - bytesCopied;
+		}
+
+		if (isIovec) {
+			struct iovec *iov = (struct iovec *)src;
+			int err;
+
+			/* The iovec will track bytesCopied internally. */
+			err = memcpy_fromiovec((uint8_t *) va + pageOffset,
+					       iov, toCopy);
+			if (err != 0) {
+				kunmap(kernelIf->page[pageIndex]);
+				return VMCI_ERROR_INVALID_ARGS;
+			}
+		} else {
+			memcpy((uint8_t *) va + pageOffset,
+			       (uint8_t *) src + bytesCopied, toCopy);
+		}
+
+		bytesCopied += toCopy;
+		if (!kernelIf->mapped)
+			kunmap(kernelIf->page[pageIndex]);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Copies to a given buffer or iovector from a VMCI Queue.  Uses
+ * kmap()/kunmap() to dynamically map/unmap required portions of the queue
+ * by traversing the offset -> page translation structure for the queue.
+ * Assumes that offset + size does not wrap around in the queue.
+ */
+static int __qp_memcpy_from_queue(void *dest,
+				  const struct vmci_queue *queue,
+				  uint64_t queueOffset,
+				  size_t size,
+				  bool isIovec)
+{
+	struct vmci_queue_kern_if *kernelIf = queue->kernelIf;
+	size_t bytesCopied = 0;
+
+	while (bytesCopied < size) {
+		uint64_t pageIndex = (queueOffset + bytesCopied) / PAGE_SIZE;
+		size_t pageOffset =
+			(queueOffset + bytesCopied) & (PAGE_SIZE - 1);
+		void *va;
+		size_t toCopy;
+
+		if (!kernelIf->mapped)
+			va = kmap(kernelIf->page[pageIndex]);
+		else
+			va = (void *)((uint8_t *)kernelIf->va +
+				      (pageIndex * PAGE_SIZE));
+
+		if (size - bytesCopied > PAGE_SIZE - pageOffset) {
+			/* Enough payload to fill up this page. */
+			toCopy = PAGE_SIZE - pageOffset;
+		} else {
+			toCopy = size - bytesCopied;
+		}
+
+		if (isIovec) {
+			struct iovec *iov = (struct iovec *)dest;
+			int err;
+
+			/* The iovec will track bytesCopied internally. */
+			err = memcpy_toiovec(iov, (uint8_t *) va + pageOffset,
+					     toCopy);
+			if (err != 0) {
+				kunmap(kernelIf->page[pageIndex]);
+				return VMCI_ERROR_INVALID_ARGS;
+			}
+		} else {
+			memcpy((uint8_t *) dest + bytesCopied,
+			       (uint8_t *) va + pageOffset, toCopy);
+		}
+
+		bytesCopied += toCopy;
+		if (!kernelIf->mapped)
+			kunmap(kernelIf->page[pageIndex]);
+	}
+
+	return VMCI_SUCCESS;
+}
+
+
+/*
+ * Allocates two lists of PPNs --- one for the pages in the produce queue,
+ * and the other for the pages in the consume queue. Initializes the lists
+ * of PPNs with the page frame numbers of the KVA for the two queues (and
+ * the queue headers).
+ */
+static int qp_alloc_ppn_set(void *prodQ,
+			    uint64_t numProducePages,
+			    void *consQ,
+			    uint64_t numConsumePages,
+			    struct PPNSet *ppnSet)
+{
+	uint32_t *producePPNs;
+	uint32_t *consumePPNs;
+	struct vmci_queue *produceQ = prodQ;
+	struct vmci_queue *consumeQ = consQ;
+	uint64_t i;
+
+	if (!produceQ || !numProducePages || !consumeQ ||
+	    !numConsumePages || !ppnSet)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (ppnSet->initialized)
+		return VMCI_ERROR_ALREADY_EXISTS;
+
+	producePPNs =
+		kmalloc(numProducePages * sizeof *producePPNs, GFP_KERNEL);
+	if (!producePPNs)
+		return VMCI_ERROR_NO_MEM;
+
+	consumePPNs =
+		kmalloc(numConsumePages * sizeof *consumePPNs, GFP_KERNEL);
+	if (!consumePPNs) {
+		kfree(producePPNs);
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	producePPNs[0] = page_to_pfn(vmalloc_to_page(produceQ->qHeader));
+	for (i = 1; i < numProducePages; i++) {
+		unsigned long pfn;
+
+		producePPNs[i] = pfn =
+			page_to_pfn(produceQ->kernelIf->page[i - 1]);
+
+		/* Fail allocation if PFN isn't supported by hypervisor. */
+		if (sizeof pfn > sizeof *producePPNs && pfn != producePPNs[i])
+			goto ppnError;
+	}
+
+	consumePPNs[0] = page_to_pfn(vmalloc_to_page(consumeQ->qHeader));
+	for (i = 1; i < numConsumePages; i++) {
+		unsigned long pfn;
+
+		consumePPNs[i] = pfn =
+			page_to_pfn(consumeQ->kernelIf->page[i - 1]);
+
+		/* Fail allocation if PFN isn't supported by hypervisor. */
+		if (sizeof pfn > sizeof *consumePPNs && pfn != consumePPNs[i])
+			goto ppnError;
+	}
+
+	ppnSet->numProducePages = numProducePages;
+	ppnSet->numConsumePages = numConsumePages;
+	ppnSet->producePPNs = producePPNs;
+	ppnSet->consumePPNs = consumePPNs;
+	ppnSet->initialized = true;
+	return VMCI_SUCCESS;
+
+ppnError:
+	kfree(producePPNs);
+	kfree(consumePPNs);
+	return VMCI_ERROR_INVALID_ARGS;
+}
+
+/*
+ * Frees the two lists of PPNs for a queue pair.
+ */
+static void qp_free_ppn_set(struct PPNSet *ppnSet)
+{
+	ASSERT(ppnSet);
+	if (ppnSet->initialized) {
+		/* Do not call these functions on NULL inputs. */
+		ASSERT(ppnSet->producePPNs && ppnSet->consumePPNs);
+		kfree(ppnSet->producePPNs);
+		kfree(ppnSet->consumePPNs);
+	}
+	memset(ppnSet, 0, sizeof *ppnSet);
+}
+
+/*
+ * Populates the list of PPNs in the hypercall structure with the PPNs
+ * of the produce queue and the consume queue.
+ */
+static int qp_populate_ppn_set(uint8_t *callBuf,
+			       const struct PPNSet *ppnSet)
+{
+	ASSERT(callBuf && ppnSet && ppnSet->initialized);
+	memcpy(callBuf, ppnSet->producePPNs,
+	       ppnSet->numProducePages * sizeof *ppnSet->producePPNs);
+	memcpy(callBuf +
+	       ppnSet->numProducePages * sizeof *ppnSet->producePPNs,
+	       ppnSet->consumePPNs,
+	       ppnSet->numConsumePages * sizeof *ppnSet->consumePPNs);
+
+	return VMCI_SUCCESS;
+}
+
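+/*
+ * Copies from a given buffer into a VMCI Queue.  Thin non-iovec wrapper
+ * around __qp_memcpy_to_queue.
+ */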
+static int qp_memcpy_to_queue(struct vmci_queue *queue,
+			      uint64_t queueOffset,
+			      const void *src,
+			      size_t srcOffset,
+			      size_t size)
+{
+	return __qp_memcpy_to_queue(queue, queueOffset,
+				    (uint8_t *) src + srcOffset, size, false);
+}
+
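+/*
+ * Copies from a VMCI Queue into a given buffer.  Thin non-iovec wrapper
+ * around __qp_memcpy_from_queue.
+ */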
+static int qp_memcpy_from_queue(void *dest,
+				size_t destOffset,
+				const struct vmci_queue *queue,
+				uint64_t queueOffset,
+				size_t size)
+{
+	return __qp_memcpy_from_queue((uint8_t *) dest + destOffset,
+				      queue, queueOffset, size, false);
+}
+
+/*
+ * Copies from a given iovec to a VMCI Queue.
+ */
+static int qp_memcpy_to_queue_iov(struct vmci_queue *queue,
+				  uint64_t queueOffset,
+				  const void *src,
+				  size_t srcOffset,
+				  size_t size)
+{
+
+	/*
+	 * We ignore srcOffset because src is really a struct iovec * and will
+	 * maintain offset internally.
+	 */
+	return __qp_memcpy_to_queue(queue, queueOffset, src, size, true);
+}
+
+/*
+ * Copies to a given iovec from a VMCI Queue.
+ */
+static int qp_memcpy_from_queue_iov(void *dest,
+				    size_t destOffset,
+				    const struct vmci_queue *queue,
+				    uint64_t queueOffset,
+				    size_t size)
+{
+	/*
+	 * We ignore destOffset because dest is really a struct iovec * and will
+	 * maintain offset internally.
+	 */
+	return __qp_memcpy_from_queue(dest, queue, queueOffset, size, true);
+}
+
+/*
+ * Allocates kernel VA space of specified size plus space for the queue
+ * and kernel interface.  This is different from the guest queue allocator,
+ * because we do not allocate our own queue header/data pages here but
+ * share those of the guest.
+ */
+static struct vmci_queue *qp_host_alloc_queue(uint64_t size)
+{
+	struct vmci_queue *queue;
+	const size_t numPages = dm_div_up(size, PAGE_SIZE) + 1;
+	const size_t queueSize = sizeof(*queue) + sizeof(*(queue->kernelIf));
+	const size_t queuePageSize = numPages * sizeof(*queue->kernelIf->page);
+
+	queue = kzalloc(queueSize + queuePageSize, GFP_KERNEL);
+	if (queue) {
+		queue->qHeader = NULL;
+		queue->savedHeader = NULL;
+		queue->kernelIf =
+			(struct vmci_queue_kern_if *)((uint8_t *) queue +
+						      sizeof(*queue));
+		queue->kernelIf->host = true;
+		queue->kernelIf->mutex = NULL;
+		queue->kernelIf->numPages = numPages;
+		queue->kernelIf->headerPage =
+			(struct page **)((uint8_t *) queue + queueSize);
+		queue->kernelIf->page = &queue->kernelIf->headerPage[1];
+		queue->kernelIf->va = NULL;
+		queue->kernelIf->mapped = false;
+	}
+
+	return queue;
+}
+
+/*
+ * Frees kernel memory for a given queue (header plus translation
+ * structure).
+ */
+static void qp_host_free_queue(struct vmci_queue *queue,
+			       uint64_t queueSize)
+{
+	kfree(queue);
+}
+
+/*
+ * Initialize the mutex for the pair of queues.  This mutex is used to
+ * protect the qHeader and the buffer from changing out from under any
+ * users of either queue.  Of course, it's only any good if the mutexes
+ * are actually acquired.  Queue structure must lie on non-paged memory
+ * or we cannot guarantee access to the mutex.
+ */
+static void qp_init_queue_mutex(struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ);
+	ASSERT(consumeQ);
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+
+	/*
+	 * Only the host queue has shared state - the guest queues do not
+	 * need to synchronize access using a queue mutex.
+	 */
+
+	if (produceQ->kernelIf->host) {
+		produceQ->kernelIf->mutex = &produceQ->kernelIf->__mutex;
+		consumeQ->kernelIf->mutex = &produceQ->kernelIf->__mutex;
+		sema_init(produceQ->kernelIf->mutex, 1);
+	}
+}
+
+/*
+ * Cleans up the mutex for the pair of queues.
+ */
+static void qp_cleanup_queue_mutex(struct vmci_queue *produceQ,
+				   struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ);
+	ASSERT(consumeQ);
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+
+	if (produceQ->kernelIf->host) {
+		produceQ->kernelIf->mutex = NULL;
+		consumeQ->kernelIf->mutex = NULL;
+	}
+}
+
+/*
+ * Acquire the mutex for the queue.  Note that the produceQ and
+ * the consumeQ share a mutex.  So, only one of the two needs to
+ * be passed in to this routine.  Either will work just fine.
+ */
+static void qp_acquire_queue_mutex(struct vmci_queue *queue)
+{
+	ASSERT(queue);
+	ASSERT(queue->kernelIf);
+
+	if (queue->kernelIf->host) {
+		ASSERT(queue->kernelIf->mutex);
+		down(queue->kernelIf->mutex);
+	}
+}
+
+/*
+ * Release the mutex for the queue.  Note that the produceQ and
+ * the consumeQ share a mutex.  So, only one of the two needs to
+ * be passed in to this routine.  Either will work just fine.
+ */
+static void qp_release_queue_mutex(struct vmci_queue *queue)
+{
+	ASSERT(queue);
+	ASSERT(queue->kernelIf);
+
+	if (queue->kernelIf->host) {
+		ASSERT(queue->kernelIf->mutex);
+		up(queue->kernelIf->mutex);
+	}
+}
+
+/*
+ * Helper function to release pages previously locked into memory
+ * with get_user_pages.
+ */
+static void qp_release_pages(struct page **pages,
+			     uint64_t numPages,
+			     bool dirty)
+{
+	int i;
+
+	for (i = 0; i < numPages; i++) {
+		ASSERT(pages[i]);
+
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		page_cache_release(pages[i]);
+		pages[i] = NULL;
+	}
+}
+
+/*
+ * Lock the user pages referenced by the {produce,consume}Buffer
+ * struct into memory and populate the {produce,consume}Pages
+ * arrays in the attach structure with them.
+ */
+static int qp_host_get_user_memory(uint64_t produceUVA,
+				   uint64_t consumeUVA,
+				   struct vmci_queue *produceQ,
+				   struct vmci_queue *consumeQ)
+{
+	int retval;
+	int err = VMCI_SUCCESS;
+
+	down_write(&current->mm->mmap_sem);
+	retval = get_user_pages(current,
+				current->mm,
+				(uintptr_t) produceUVA,
+				produceQ->kernelIf->numPages,
+				1, 0, produceQ->kernelIf->headerPage, NULL);
+	if (retval < (int) produceQ->kernelIf->numPages) {
+		pr_warn("get_user_pages(produce) failed (retval=%d)",
+			retval);
+		if (retval > 0)
+			qp_release_pages(produceQ->kernelIf->headerPage,
+					 retval, false);
+		err = VMCI_ERROR_NO_MEM;
+		goto out;
+	}
+
+	retval = get_user_pages(current,
+				current->mm,
+				(uintptr_t) consumeUVA,
+				consumeQ->kernelIf->numPages,
+				1, 0, consumeQ->kernelIf->headerPage, NULL);
+	if (retval < (int) consumeQ->kernelIf->numPages) {
+		pr_warn("get_user_pages(consume) failed (retval=%d)",
+			retval);
+		if (retval > 0)
+			qp_release_pages(consumeQ->kernelIf->headerPage,
+					 retval, false);
+		qp_release_pages(produceQ->kernelIf->headerPage,
+				 produceQ->kernelIf->numPages, false);
+		err = VMCI_ERROR_NO_MEM;
+	}
+
+out:
+	up_write(&current->mm->mmap_sem);
+
+	return err;
+}
+
+/*
+ * Registers the specification of the user pages used for backing a queue
+ * pair. Enough information to map in pages is stored in the OS specific
+ * part of the struct vmci_queue structure.
+ */
+static int qp_host_register_user_memory(struct vmci_qp_page_store *pageStore,
+					struct vmci_queue *produceQ,
+					struct vmci_queue *consumeQ)
+{
+	uint64_t produceUVA;
+	uint64_t consumeUVA;
+
+	ASSERT(produceQ->kernelIf->headerPage
+	       && consumeQ->kernelIf->headerPage);
+
+	/*
+	 * The new style and the old style mapping differ only in
+	 * whether we get a single UVA or two, so we split the
+	 * single UVA range at the appropriate spot.
+	 */
+	produceUVA = pageStore->pages;
+	consumeUVA = pageStore->pages +
+		produceQ->kernelIf->numPages * PAGE_SIZE;
+	return qp_host_get_user_memory(produceUVA, consumeUVA, produceQ,
+				       consumeQ);
+}
+
+/*
+ * Releases and removes the references to user pages stored in the attach
+ * struct.  Pages are released from the page cache and may become
+ * swappable again.
+ */
+static void qp_host_unregister_user_memory(struct vmci_queue *produceQ,
+					   struct vmci_queue *consumeQ)
+{
+	ASSERT(produceQ->kernelIf);
+	ASSERT(consumeQ->kernelIf);
+	ASSERT(!produceQ->qHeader && !consumeQ->qHeader);
+
+	qp_release_pages(produceQ->kernelIf->headerPage,
+			 produceQ->kernelIf->numPages, true);
+	memset(produceQ->kernelIf->headerPage, 0,
+	       sizeof *produceQ->kernelIf->headerPage *
+	       produceQ->kernelIf->numPages);
+	qp_release_pages(consumeQ->kernelIf->headerPage,
+			 consumeQ->kernelIf->numPages, true);
+	memset(consumeQ->kernelIf->headerPage, 0,
+	       sizeof *consumeQ->kernelIf->headerPage *
+	       consumeQ->kernelIf->numPages);
+}
+
+/*
+ * Once qp_host_register_user_memory has been performed on a
+ * queue, the queue pair headers can be mapped into the
+ * kernel. Once mapped, they must be unmapped with
+ * qp_host_unmap_queues prior to calling
+ * qp_host_unregister_user_memory.
+ * Pages are pinned.
+ */
+static int qp_host_map_queues(struct vmci_queue *produceQ,
+			      struct vmci_queue *consumeQ)
+{
+	int result;
+
+	if (!produceQ->qHeader || !consumeQ->qHeader) {
+		struct page *headers[2];
+
+		if (produceQ->qHeader != consumeQ->qHeader)
+			return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+
+		if (produceQ->kernelIf->headerPage == NULL ||
+		    *produceQ->kernelIf->headerPage == NULL)
+			return VMCI_ERROR_UNAVAILABLE;
+
+		ASSERT(*produceQ->kernelIf->headerPage
+		       && *consumeQ->kernelIf->headerPage);
+
+		headers[0] = *produceQ->kernelIf->headerPage;
+		headers[1] = *consumeQ->kernelIf->headerPage;
+
+		produceQ->qHeader = vmap(headers, 2, VM_MAP, PAGE_KERNEL);
+		if (produceQ->qHeader != NULL) {
+			consumeQ->qHeader =
+				(struct vmci_queue_header *)((uint8_t *)
+							     produceQ->qHeader +
+							     PAGE_SIZE);
+			result = VMCI_SUCCESS;
+		} else {
+			pr_warn("vmap failed.");
+			result = VMCI_ERROR_NO_MEM;
+		}
+	} else {
+		result = VMCI_SUCCESS;
+	}
+
+	return result;
+}
+
+/*
+ * Unmaps previously mapped queue pair headers from the kernel.
+ * Pages are unpinned.
+ */
+static int qp_host_unmap_queues(uint32_t gid,
+				struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ)
+{
+	if (produceQ->qHeader) {
+		ASSERT(consumeQ->qHeader);
+
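+		/*
+		 * vunmap() must be given the start of the mapping
+		 * created by qp_host_map_queues(), i.e. whichever of
+		 * the two headers has the lower address.
+		 */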
+		if (produceQ->qHeader < consumeQ->qHeader)
+			vunmap(produceQ->qHeader);
+		else
+			vunmap(consumeQ->qHeader);
+
+		produceQ->qHeader = NULL;
+		consumeQ->qHeader = NULL;
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Finds the entry in the list corresponding to a given handle. Assumes
+ * that the list is locked.
+ */
+static struct qp_entry *qp_list_find(struct qp_list *qpList,
+				     struct vmci_handle handle)
+{
+	struct qp_entry *entry;
+
+	if (VMCI_HANDLE_INVALID(handle))
+		return NULL;
+
+	list_for_each_entry(entry, &qpList->head, listItem) {
+		if (VMCI_HANDLE_EQUAL(entry->handle, handle))
+			return entry;
+	}
+
+	return NULL;
+}
+
+/*
+ * Dispatches a queue pair event message directly into the local event
+ * queue.
+ */
+static int qp_notify_peer_local(bool attach,
+				struct vmci_handle handle)
+{
+	struct vmci_event_msg *eMsg;
+	struct vmci_event_payld_qp *ePayload;
+	/* buf is only 48 bytes. */
+	char buf[sizeof *eMsg + sizeof *ePayload];
+	uint32_t contextId;
+
+	contextId = VMCI_GetContextID();
+
+	eMsg = (struct vmci_event_msg *)buf;
+	ePayload = vmci_event_data_payload(&eMsg->eventData);
+
+	eMsg->hdr.dst = vmci_make_handle(contextId, VMCI_EVENT_HANDLER);
+	eMsg->hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+	eMsg->hdr.payloadSize =
+		sizeof *eMsg + sizeof *ePayload - sizeof eMsg->hdr;
+	eMsg->eventData.event =
+		attach ? VMCI_EVENT_QP_PEER_ATTACH : VMCI_EVENT_QP_PEER_DETACH;
+	ePayload->peerId = contextId;
+	ePayload->handle = handle;
+
+	return vmci_event_dispatch((struct vmci_dg *)eMsg);
+}
+
+/*
+ * Allocates and initializes a qp_guest_endpoint structure.
+ * Allocates a QueuePair rid (and handle) iff the given entry has
+ * an invalid handle.  0 through VMCI_RESERVED_RESOURCE_ID_MAX
+ * are reserved handles.  Assumes that the QP list mutex is held
+ * by the caller.
+ */
+static struct qp_guest_endpoint *
+qp_guest_endpoint_create(struct vmci_handle handle,
+			 uint32_t peer,
+			 uint32_t flags,
+			 uint64_t produceSize,
+			 uint64_t consumeSize,
+			 void *produceQ,
+			 void *consumeQ)
+{
+	static uint32_t queuePairRID = VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+	struct qp_guest_endpoint *entry;
+	/* One page each for the queue headers. */
+	const uint64_t numPPNs = dm_div_up(produceSize, PAGE_SIZE) +
+		dm_div_up(consumeSize, PAGE_SIZE) + 2;
+
+	ASSERT((produceSize || consumeSize) && produceQ && consumeQ);
+
+	if (VMCI_HANDLE_INVALID(handle)) {
+		uint32_t contextID = VMCI_GetContextID();
+		uint32_t oldRID = queuePairRID;
+
+		/*
+		 * Generate a unique QueuePair rid.  Keep on trying
+		 * until we wrap around in the RID space.
+		 */
+		ASSERT(oldRID > VMCI_RESERVED_RESOURCE_ID_MAX);
+		do {
+			handle = vmci_make_handle(contextID, queuePairRID);
+			entry = (struct qp_guest_endpoint *)
+				qp_list_find(&qpGuestEndpoints, handle);
+			queuePairRID++;
+
+			if (unlikely(!queuePairRID))
+				/* Skip the reserved rids. */
+				queuePairRID =
+					VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+
+		} while (entry && queuePairRID != oldRID);
+
+		if (unlikely(entry != NULL)) {
+			ASSERT(queuePairRID == oldRID);
+			/*
+			 * We wrapped around --- no rids were free.
+			 */
+			return NULL;
+		}
+	}
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle) &&
+	       qp_list_find(&qpGuestEndpoints, handle) == NULL);
+	entry = kzalloc(sizeof *entry, GFP_KERNEL);
+	if (entry) {
+		entry->qp.handle = handle;
+		entry->qp.peer = peer;
+		entry->qp.flags = flags;
+		entry->qp.produceSize = produceSize;
+		entry->qp.consumeSize = consumeSize;
+		entry->qp.refCount = 0;
+		entry->numPPNs = numPPNs;
+		entry->produceQ = produceQ;
+		entry->consumeQ = consumeQ;
+		INIT_LIST_HEAD(&entry->qp.listItem);
+	}
+	return entry;
+}
+
+/*
+ * Frees a qp_guest_endpoint structure.
+ */
+static void qp_guest_endpoint_destroy(struct qp_guest_endpoint *entry)
+{
+	ASSERT(entry);
+	ASSERT(entry->qp.refCount == 0);
+
+	qp_free_ppn_set(&entry->ppnSet);
+	qp_cleanup_queue_mutex(entry->produceQ, entry->consumeQ);
+	qp_free_queue(entry->produceQ, entry->qp.produceSize);
+	qp_free_queue(entry->consumeQ, entry->qp.consumeSize);
+	kfree(entry);
+}
+
+/*
+ * Helper to make a QueuePairAlloc hypercall when the driver is
+ * supporting a guest device.
+ */
+static int qp_alloc_hypercall(const struct qp_guest_endpoint *entry)
+{
+	struct vmci_qp_alloc_msg *allocMsg;
+	size_t msgSize;
+	int result;
+
+	if (!entry || entry->numPPNs <= 2)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	ASSERT(!(entry->qp.flags & VMCI_QPFLAG_LOCAL));
+
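+	/*
+	 * The hypercall message is the fixed size vmci_qp_alloc_msg
+	 * header immediately followed by the PPN list that
+	 * qp_populate_ppn_set() fills in below.
+	 */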
+	msgSize = sizeof *allocMsg + (size_t) entry->numPPNs * sizeof(uint32_t);
+	allocMsg = kmalloc(msgSize, GFP_KERNEL);
+	if (!allocMsg)
+		return VMCI_ERROR_NO_MEM;
+
+	allocMsg->hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_QUEUEPAIR_ALLOC);
+	allocMsg->hdr.src = VMCI_ANON_SRC_HANDLE;
+	allocMsg->hdr.payloadSize = msgSize - VMCI_DG_HEADERSIZE;
+	allocMsg->handle = entry->qp.handle;
+	allocMsg->peer = entry->qp.peer;
+	allocMsg->flags = entry->qp.flags;
+	allocMsg->produceSize = entry->qp.produceSize;
+	allocMsg->consumeSize = entry->qp.consumeSize;
+	allocMsg->numPPNs = entry->numPPNs;
+
+	result =
+		qp_populate_ppn_set((uint8_t *) allocMsg + sizeof *allocMsg,
+				    &entry->ppnSet);
+	if (result == VMCI_SUCCESS)
+		result = vmci_send_dg((struct vmci_dg *)allocMsg);
+
+	kfree(allocMsg);
+
+	return result;
+}
+
+/*
+ * Helper to make a QueuePairDetach hypercall when the driver is
+ * supporting a guest device.
+ */
+static int qp_detatch_hypercall(struct vmci_handle handle)
+{
+	struct vmci_qp_detach_msg detachMsg;
+
+	detachMsg.hdr.dst = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					     VMCI_QUEUEPAIR_DETACH);
+	detachMsg.hdr.src = VMCI_ANON_SRC_HANDLE;
+	detachMsg.hdr.payloadSize = sizeof handle;
+	detachMsg.handle = handle;
+
+	return vmci_send_dg((struct vmci_dg *)&detachMsg);
+}
+
+/*
+ * Adds the given entry to the list. Assumes that the list is locked.
+ */
+static void qp_list_add_entry(struct qp_list *qpList,
+			      struct qp_entry *entry)
+{
+	if (entry)
+		list_add(&entry->listItem, &qpList->head);
+}
+
+/*
+ * Removes the given entry from the list. Assumes that the list is locked.
+ */
+static void qp_list_remove_entry(struct qp_list *qpList,
+				 struct qp_entry *entry)
+{
+	if (entry)
+		list_del(&entry->listItem);
+}
+
+/*
+ * Helper for VMCI QueuePair detach interface. Frees the physical
+ * pages for the queue pair.
+ */
+static int qp_detatch_guest_work(struct vmci_handle handle)
+{
+	int result;
+	struct qp_guest_endpoint *entry;
+	uint32_t refCount = ~0;	/* To avoid compiler warning below */
+
+	ASSERT(!VMCI_HANDLE_INVALID(handle));
+
+	down(&qpGuestEndpoints.mutex);
+
+	entry = (struct qp_guest_endpoint *)
+		qp_list_find(&qpGuestEndpoints, handle);
+	if (!entry) {
+		up(&qpGuestEndpoints.mutex);
+		return VMCI_ERROR_NOT_FOUND;
+	}
+
+	ASSERT(entry->qp.refCount >= 1);
+
+	if (entry->qp.flags & VMCI_QPFLAG_LOCAL) {
+		result = VMCI_SUCCESS;
+
+		if (entry->qp.refCount > 1) {
+			result = qp_notify_peer_local(false, handle);
+			/*
+			 * We can fail to notify a local queuepair
+			 * because we can't allocate.  We still want
+			 * to release the entry if that happens, so
+			 * don't bail out yet.
+			 */
+		}
+	} else {
+		result = qp_detatch_hypercall(handle);
+		if (result < VMCI_SUCCESS) {
+			/*
+			 * We failed to notify a non-local queuepair.
+			 * That other queuepair might still be
+			 * accessing the shared memory, so don't
+			 * release the entry yet.  It will get cleaned
+			 * up by vmci_qp_guest_endpoints_exit() if
+			 * necessary (assuming we are going away,
+			 * otherwise why did this fail?).
+			 */
+
+			up(&qpGuestEndpoints.mutex);
+			return result;
+		}
+	}
+
+	/*
+	 * If we get here then we either failed to notify a local queue pair,
+	 * or everything succeeded.  In both cases release the entry if
+	 * required.
+	 */
+
+	entry->qp.refCount--;
+	if (entry->qp.refCount == 0)
+		qp_list_remove_entry(&qpGuestEndpoints, &entry->qp);
+
+	/* If we didn't remove the entry, this could change once we unlock. */
+	if (entry)
+		refCount = entry->qp.refCount;
+
+	up(&qpGuestEndpoints.mutex);
+
+	if (refCount == 0)
+		qp_guest_endpoint_destroy(entry);
+
+	return result;
+}
+
+/*
+ * This function handles the actual allocation of a VMCI queue
+ * pair guest endpoint. Allocates physical pages for the queue
+ * pair. It makes OS dependent calls through generic wrappers.
+ */
+static int qp_alloc_guest_work(struct vmci_handle *handle,
+			       struct vmci_queue **produceQ,
+			       uint64_t produceSize,
+			       struct vmci_queue **consumeQ,
+			       uint64_t consumeSize,
+			       uint32_t peer,
+			       uint32_t flags,
+			       uint32_t privFlags)
+{
+	const uint64_t numProducePages = dm_div_up(produceSize, PAGE_SIZE) + 1;
+	const uint64_t numConsumePages = dm_div_up(consumeSize, PAGE_SIZE) + 1;
+	void *myProduceQ = NULL;
+	void *myConsumeQ = NULL;
+	int result;
+	struct qp_guest_endpoint *queuePairEntry = NULL;
+
+	ASSERT(handle && produceQ && consumeQ && (produceSize || consumeSize));
+
+	if (privFlags != VMCI_NO_PRIVILEGE_FLAGS)
+		return VMCI_ERROR_NO_ACCESS;
+
+	down(&qpGuestEndpoints.mutex);
+
+	queuePairEntry = (struct qp_guest_endpoint *)qp_list_find(
+		&qpGuestEndpoints, *handle);
+	if (queuePairEntry) {
+		if (queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) {
+			/* Local attach case. */
+			if (queuePairEntry->qp.refCount > 1) {
+				pr_devel("Error attempting to attach more " \
+					 "than once.");
+				result = VMCI_ERROR_UNAVAILABLE;
+				goto errorKeepEntry;
+			}
+
+			if (queuePairEntry->qp.produceSize != consumeSize
+			    || queuePairEntry->qp.consumeSize !=
+			    produceSize
+			    || queuePairEntry->qp.flags !=
+			    (flags & ~VMCI_QPFLAG_ATTACH_ONLY)) {
+				pr_devel("Error mismatched queue pair in " \
+					 "local attach.");
+				result = VMCI_ERROR_QUEUEPAIR_MISMATCH;
+				goto errorKeepEntry;
+			}
+
+			/*
+			 * Do a local attach.  We swap the consume and
+			 * produce queues for the attacher and deliver
+			 * an attach event.
+			 */
+			result = qp_notify_peer_local(true, *handle);
+			if (result < VMCI_SUCCESS)
+				goto errorKeepEntry;
+
+			myProduceQ = queuePairEntry->consumeQ;
+			myConsumeQ = queuePairEntry->produceQ;
+			goto out;
+		}
+
+		result = VMCI_ERROR_ALREADY_EXISTS;
+		goto errorKeepEntry;
+	}
+
+	myProduceQ = qp_alloc_queue(produceSize, flags);
+	if (!myProduceQ) {
+		pr_warn("Error allocating pages for produce queue.");
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	myConsumeQ = qp_alloc_queue(consumeSize, flags);
+	if (!myConsumeQ) {
+		pr_warn("Error allocating pages for consume queue.");
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	queuePairEntry = qp_guest_endpoint_create(*handle, peer, flags,
+						  produceSize, consumeSize,
+						  myProduceQ, myConsumeQ);
+	if (!queuePairEntry) {
+		pr_warn("Error allocating memory in %s.", __func__);
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	result = qp_alloc_ppn_set(myProduceQ, numProducePages, myConsumeQ,
+				  numConsumePages, &queuePairEntry->ppnSet);
+	if (result < VMCI_SUCCESS) {
+		pr_warn("qp_alloc_ppn_set failed.");
+		goto error;
+	}
+
+	/*
+	 * It's only necessary to notify the host if this queue pair will be
+	 * attached to from another context.
+	 */
+	if (queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) {
+		/* Local create case. */
+		uint32_t contextId = VMCI_GetContextID();
+
+		/*
+		 * Enforce similar checks on local queue pairs as we
+		 * do for regular ones.  The handle's context must
+		 * match the creator or attacher context id (here they
+		 * are both the current context id) and the
+		 * attach-only flag cannot exist during create.  We
+		 * also ensure specified peer is this context or an
+		 * invalid one.
+		 */
+		if (queuePairEntry->qp.handle.context != contextId ||
+		    (queuePairEntry->qp.peer != VMCI_INVALID_ID &&
+		     queuePairEntry->qp.peer != contextId)) {
+			result = VMCI_ERROR_NO_ACCESS;
+			goto error;
+		}
+
+		if (queuePairEntry->qp.flags & VMCI_QPFLAG_ATTACH_ONLY) {
+			result = VMCI_ERROR_NOT_FOUND;
+			goto error;
+		}
+	} else {
+		result = qp_alloc_hypercall(queuePairEntry);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("qp_alloc_hypercall result = %d.",
+				result);
+			goto error;
+		}
+	}
+
+	qp_init_queue_mutex((struct vmci_queue *)myProduceQ,
+			    (struct vmci_queue *)myConsumeQ);
+
+	qp_list_add_entry(&qpGuestEndpoints, &queuePairEntry->qp);
+
+out:
+	queuePairEntry->qp.refCount++;
+	*handle = queuePairEntry->qp.handle;
+	*produceQ = (struct vmci_queue *)myProduceQ;
+	*consumeQ = (struct vmci_queue *)myConsumeQ;
+
+	/*
+	 * We should initialize the queue pair header pages on a local
+	 * queue pair create.  For non-local queue pairs, the
+	 * hypervisor initializes the header pages in the create step.
+	 */
+	if ((queuePairEntry->qp.flags & VMCI_QPFLAG_LOCAL) &&
+	    queuePairEntry->qp.refCount == 1) {
+		vmci_q_header_init((*produceQ)->qHeader, *handle);
+		vmci_q_header_init((*consumeQ)->qHeader, *handle);
+	}
+
+	up(&qpGuestEndpoints.mutex);
+
+	return VMCI_SUCCESS;
+
+error:
+	up(&qpGuestEndpoints.mutex);
+	if (queuePairEntry) {
+		/* The queues will be freed inside the destroy routine. */
+		qp_guest_endpoint_destroy(queuePairEntry);
+	} else {
+		qp_free_queue(myProduceQ, produceSize);
+		qp_free_queue(myConsumeQ, consumeSize);
+	}
+	return result;
+
+errorKeepEntry:
+	/* This path should only be used when an existing entry was found. */
+	ASSERT(queuePairEntry->qp.refCount > 0);
+	up(&qpGuestEndpoints.mutex);
+	return result;
+}
+
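+/*
+ * Overview of the queue pair broker states used below, as derived from
+ * the code in this file:
+ *
+ *   VMCIQPB_NEW              - entry just allocated by qp_broker_create
+ *   VMCIQPB_CREATED_NO_MEM   - created, no backing memory registered yet
+ *   VMCIQPB_CREATED_MEM      - created with backing memory (local pair,
+ *                              new style VMX, or after set_page_store)
+ *   VMCIQPB_ATTACHED_NO_MEM  - second endpoint attached, memory pending
+ *   VMCIQPB_ATTACHED_MEM     - both endpoints attached, memory available
+ *   VMCIQPB_SHUTDOWN_NO_MEM  - one endpoint detached, no memory registered
+ *   VMCIQPB_SHUTDOWN_MEM     - one endpoint detached, memory still registered
+ *
+ * vmci_qp_broker_map() and vmci_qp_broker_unmap() move an entry between
+ * the *_NO_MEM and *_MEM flavors of its current state as guest memory is
+ * registered and unregistered.
+ */
+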
+/*
+ * The first endpoint issuing a queue pair allocation will create the state
+ * of the queue pair in the queue pair broker.
+ *
+ * If the creator is a guest, it will associate a VMX virtual address range
+ * with the queue pair as specified by the pageStore. For compatibility with
+ * older VMX'en that used a separate step to set the VMX virtual address
+ * range, the virtual address range can be registered later using
+ * vmci_qp_broker_set_page_store. In that case, a pageStore of NULL should be
+ * used.
+ *
+ * If the creator is the host, a pageStore of NULL should be used as well,
+ * since the host is not able to supply a page store for the queue pair.
+ *
+ * For older VMX and host callers, the queue pair will be created in the
+ * VMCIQPB_CREATED_NO_MEM state, and for current VMX callers, it will be
+ * created in the VMCIQPB_CREATED_MEM state.
+ */
+static int qp_broker_create(struct vmci_handle handle,
+			    uint32_t peer,
+			    uint32_t flags,
+			    uint32_t privFlags,
+			    uint64_t produceSize,
+			    uint64_t consumeSize,
+			    struct vmci_qp_page_store *pageStore,
+			    struct vmci_ctx *context,
+			    VMCIEventReleaseCB wakeupCB,
+			    void *clientData,
+			    struct qp_broker_entry **ent)
+{
+	struct qp_broker_entry *entry = NULL;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+	uint64_t guestProduceSize;
+	uint64_t guestConsumeSize;
+
+	/* Do not create if the caller asked not to. */
+	if (flags & VMCI_QPFLAG_ATTACH_ONLY)
+		return VMCI_ERROR_NOT_FOUND;
+
+	/*
+	 * Creator's context ID should match handle's context ID or the creator
+	 * must allow the context in handle's context ID as the "peer".
+	 */
+	if (handle.context != contextId && handle.context != peer)
+		return VMCI_ERROR_NO_ACCESS;
+
+	if (VMCI_CONTEXT_IS_VM(contextId) && VMCI_CONTEXT_IS_VM(peer))
+		return VMCI_ERROR_DST_UNREACHABLE;
+
+	/*
+	 * Creator's context ID for local queue pairs should match the
+	 * peer, if a peer is specified.
+	 */
+	if (isLocal && peer != VMCI_INVALID_ID && contextId != peer)
+		return VMCI_ERROR_NO_ACCESS;
+
+	entry = kzalloc(sizeof *entry, GFP_ATOMIC);
+	if (!entry)
+		return VMCI_ERROR_NO_MEM;
+
+	if (vmci_ctx_get_id(context) == VMCI_HOST_CONTEXT_ID && !isLocal) {
+		/*
+		 * The queue pair broker entry stores values from the guest
+		 * point of view, so a creating host side endpoint should swap
+		 * produce and consume values -- unless it is a local queue
+		 * pair, in which case no swapping is necessary, since the local
+		 * attacher will swap queues.
+		 */
+
+		guestProduceSize = consumeSize;
+		guestConsumeSize = produceSize;
+	} else {
+		guestProduceSize = produceSize;
+		guestConsumeSize = consumeSize;
+	}
+
+	entry->qp.handle = handle;
+	entry->qp.peer = peer;
+	entry->qp.flags = flags;
+	entry->qp.produceSize = guestProduceSize;
+	entry->qp.consumeSize = guestConsumeSize;
+	entry->qp.refCount = 1;
+	entry->createId = contextId;
+	entry->attachId = VMCI_INVALID_ID;
+	entry->state = VMCIQPB_NEW;
+	entry->requireTrustedAttach =
+		!!(context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED);
+	entry->createdByTrusted = !!(privFlags & VMCI_PRIVILEGE_FLAG_TRUSTED);
+	entry->vmciPageFiles = false;
+	entry->wakeupCB = wakeupCB;
+	entry->clientData = clientData;
+	entry->produceQ = qp_host_alloc_queue(guestProduceSize);
+	if (entry->produceQ == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+	entry->consumeQ = qp_host_alloc_queue(guestConsumeSize);
+	if (entry->consumeQ == NULL) {
+		result = VMCI_ERROR_NO_MEM;
+		goto error;
+	}
+
+	qp_init_queue_mutex(entry->produceQ, entry->consumeQ);
+
+	INIT_LIST_HEAD(&entry->qp.listItem);
+
+	if (isLocal) {
+		uint8_t *tmp;
+		ASSERT(pageStore == NULL);
+
+		entry->localMem = kcalloc(QPE_NUM_PAGES(entry->qp),
+					  PAGE_SIZE, GFP_KERNEL);
+		if (entry->localMem == NULL) {
+			result = VMCI_ERROR_NO_MEM;
+			goto error;
+		}
+		entry->state = VMCIQPB_CREATED_MEM;
+		entry->produceQ->qHeader = entry->localMem;
+		tmp = (uint8_t *) entry->localMem + PAGE_SIZE *
+			(dm_div_up(entry->qp.produceSize, PAGE_SIZE) + 1);
+		entry->consumeQ->qHeader = (struct vmci_queue_header *) tmp;
+
+		vmci_q_header_init(entry->produceQ->qHeader, handle);
+		vmci_q_header_init(entry->consumeQ->qHeader, handle);
+	} else if (pageStore) {
+		ASSERT(entry->createId != VMCI_HOST_CONTEXT_ID || isLocal);
+
+		/*
+		 * The VMX already initialized the queue pair headers, so no
+		 * need for the kernel side to do that.
+		 */
+		result = qp_host_register_user_memory(pageStore,
+						      entry->produceQ,
+						      entry->consumeQ);
+		if (result < VMCI_SUCCESS)
+			goto error;
+
+		entry->state = VMCIQPB_CREATED_MEM;
+	} else {
+		/*
+		 * A create without a pageStore may be either a host
+		 * side create (in which case we are waiting for the
+		 * guest side to supply the memory) or an old style
+		 * queue pair create (in which case we will expect a
+		 * set page store call as the next step).
+		 */
+		entry->state = VMCIQPB_CREATED_NO_MEM;
+	}
+
+	qp_list_add_entry(&qpBrokerList, &entry->qp);
+	if (ent != NULL)
+		*ent = entry;
+
+	vmci_ctx_qp_create(context, handle);
+
+	return VMCI_SUCCESS;
+
+error:
+	if (entry != NULL) {
+		qp_host_free_queue(entry->produceQ, guestProduceSize);
+		qp_host_free_queue(entry->consumeQ, guestConsumeSize);
+		kfree(entry);
+	}
+
+	return result;
+}
+
+/*
+ * Enqueues an event datagram to notify the peer VM attached to
+ * the given queue pair handle about attach/detach event by the
+ * given VM.  Returns Payload size of datagram enqueued on
+ * success, error code otherwise.
+ */
+static int qp_notify_peer(bool attach,
+			  struct vmci_handle handle,
+			  uint32_t myId,
+			  uint32_t peerId)
+{
+	int rv;
+	struct vmci_event_msg *eMsg;
+	struct vmci_event_payld_qp *evPayload;
+	char buf[sizeof *eMsg + sizeof *evPayload];
+
+	if (VMCI_HANDLE_INVALID(handle) || myId == VMCI_INVALID_ID ||
+	    peerId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * Notification message contains: queue pair handle and
+	 * attaching/detaching VM's context id.
+	 */
+	eMsg = (struct vmci_event_msg *)buf;
+
+	/*
+	 * In vmci_ctx_enqueue_dg() we enforce the upper limit on
+	 * number of pending events from the hypervisor to a given VM
+	 * otherwise a rogue VM could do an arbitrary number of attach
+	 * and detach operations causing memory pressure in the host
+	 * kernel.
+	 */
+
+	/* Clear out any garbage. */
+	memset(eMsg, 0, sizeof buf);
+
+	eMsg->hdr.dst = vmci_make_handle(peerId, VMCI_EVENT_HANDLER);
+	eMsg->hdr.src = vmci_make_handle(VMCI_HYPERVISOR_CONTEXT_ID,
+					 VMCI_CONTEXT_RESOURCE_ID);
+	eMsg->hdr.payloadSize = sizeof *eMsg + sizeof *evPayload -
+		sizeof eMsg->hdr;
+	eMsg->eventData.event = attach ?
+		VMCI_EVENT_QP_PEER_ATTACH : VMCI_EVENT_QP_PEER_DETACH;
+	evPayload = vmci_event_data_payload(&eMsg->eventData);
+	evPayload->handle = handle;
+	evPayload->peerId = myId;
+
+	rv = vmci_dg_dispatch(VMCI_HYPERVISOR_CONTEXT_ID,
+			      (struct vmci_dg *)eMsg, false);
+	if (rv < VMCI_SUCCESS)
+		pr_warn("Failed to enqueue QueuePair %s event datagram " \
+			"for context (ID=0x%x).", attach ? "ATTACH" : "DETACH",
+			peerId);
+
+	return rv;
+}
+
+/*
+ * The second endpoint issuing a queue pair allocation will attach to
+ * the queue pair registered with the queue pair broker.
+ *
+ * If the attacher is a guest, it will associate a VMX virtual address
+ * range with the queue pair as specified by the pageStore. At this
+ * point, the already attached host endpoint may start using the queue
+ * pair, and an attach event is sent to it. For compatibility with
+ * older VMX'en that used a separate step to set the VMX virtual
+ * address range, the virtual address range can be registered later
+ * using vmci_qp_broker_set_page_store. In that case, a pageStore of
+ * NULL should be used, and the attach event will be generated once
+ * the actual page store has been set.
+ *
+ * If the attacher is the host, a pageStore of NULL should be used as
+ * well, since the page store information is already set by the guest.
+ *
+ * For new VMX and host callers, the queue pair will be moved to the
+ * VMCIQPB_ATTACHED_MEM state, and for older VMX callers, it will be
+ * moved to the VMCIQPB_ATTACHED_NO_MEM state.
+ */
+static int qp_broker_attach(struct qp_broker_entry *entry,
+			    uint32_t peer,
+			    uint32_t flags,
+			    uint32_t privFlags,
+			    uint64_t produceSize,
+			    uint64_t consumeSize,
+			    struct vmci_qp_page_store *pageStore,
+			    struct vmci_ctx *context,
+			    VMCIEventReleaseCB wakeupCB,
+			    void *clientData,
+			    struct qp_broker_entry **ent)
+{
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+
+	if (entry->state != VMCIQPB_CREATED_NO_MEM &&
+	    entry->state != VMCIQPB_CREATED_MEM)
+		return VMCI_ERROR_UNAVAILABLE;
+
+	if (isLocal) {
+		if (!(entry->qp.flags & VMCI_QPFLAG_LOCAL) ||
+		    contextId != entry->createId) {
+			return VMCI_ERROR_INVALID_ARGS;
+		}
+	} else if (contextId == entry->createId ||
+		   contextId == entry->attachId) {
+		return VMCI_ERROR_ALREADY_EXISTS;
+	}
+
+	ASSERT(entry->qp.refCount < 2);
+	ASSERT(entry->attachId == VMCI_INVALID_ID);
+
+	if (VMCI_CONTEXT_IS_VM(contextId) &&
+	    VMCI_CONTEXT_IS_VM(entry->createId))
+		return VMCI_ERROR_DST_UNREACHABLE;
+
+	/*
+	 * If we are attaching from a restricted context then the queuepair
+	 * must have been created by a trusted endpoint.
+	 */
+	if ((context->privFlags & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+	    !entry->createdByTrusted)
+		return VMCI_ERROR_NO_ACCESS;
+
+	/*
+	 * If we are attaching to a queuepair that was created by a restricted
+	 * context then we must be trusted.
+	 */
+	if (entry->requireTrustedAttach &&
+	    (!(privFlags & VMCI_PRIVILEGE_FLAG_TRUSTED)))
+		return VMCI_ERROR_NO_ACCESS;
+
+	/*
+	 * If the creator specifies VMCI_INVALID_ID in "peer" field, access
+	 * control check is not performed.
+	 */
+	if (entry->qp.peer != VMCI_INVALID_ID && entry->qp.peer != contextId)
+		return VMCI_ERROR_NO_ACCESS;
+
+	if (entry->createId == VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * Do not attach if the caller doesn't support Host Queue Pairs
+		 * and a host created this queue pair.
+		 */
+
+		if (!vmci_ctx_supports_host_qp(context))
+			return VMCI_ERROR_INVALID_RESOURCE;
+
+	} else if (contextId == VMCI_HOST_CONTEXT_ID) {
+		struct vmci_ctx *createContext;
+		bool supportsHostQP;
+
+		/*
+		 * Do not attach a host to a user created queue pair if that
+		 * user doesn't support host queue pair end points.
+		 */
+
+		createContext = vmci_ctx_get(entry->createId);
+		supportsHostQP = vmci_ctx_supports_host_qp(createContext);
+		vmci_ctx_release(createContext);
+
+		if (!supportsHostQP)
+			return VMCI_ERROR_INVALID_RESOURCE;
+	}
+
+	if ((entry->qp.flags & ~VMCI_QP_ASYMM) != (flags & ~VMCI_QP_ASYMM_PEER))
+		return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * The queue pair broker entry stores values from the guest
+		 * point of view, so an attaching guest should match the values
+		 * stored in the entry.
+		 */
+
+		if (entry->qp.produceSize != produceSize ||
+		    entry->qp.consumeSize != consumeSize) {
+			return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+		}
+	} else if (entry->qp.produceSize != consumeSize ||
+		   entry->qp.consumeSize != produceSize) {
+		return VMCI_ERROR_QUEUEPAIR_MISMATCH;
+	}
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		/*
+		 * If a guest attached to a queue pair, it will supply
+		 * the backing memory.  If this is a pre NOVMVM vmx,
+		 * the backing memory will be supplied by calling
+		 * vmci_qp_broker_set_page_store() following the
+		 * return of the vmci_qp_broker_alloc() call. If it is
+		 * a vmx of version NOVMVM or later, the page store
+		 * must be supplied as part of the
+		 * vmci_qp_broker_alloc call.  In all cases, the
+		 * initially created queue pair must not have any
+		 * memory associated with it yet.
+		 */
+
+		if (entry->state != VMCIQPB_CREATED_NO_MEM)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		if (pageStore != NULL) {
+			/*
+			 * Patch up host state to point to guest
+			 * supplied memory. The VMX already
+			 * initialized the queue pair headers, so no
+			 * need for the kernel side to do that.
+			 */
+
+			result = qp_host_register_user_memory(pageStore,
+							      entry->produceQ,
+							      entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				return result;
+
+			/*
+			 * Preemptively load in the headers if non-blocking to
+			 * prevent blocking later.
+			 */
+			if (entry->qp.flags & VMCI_QPFLAG_NONBLOCK) {
+				result = qp_host_map_queues(entry->produceQ,
+							    entry->consumeQ);
+				if (result < VMCI_SUCCESS) {
+					qp_host_unregister_user_memory(
+						entry->produceQ,
+						entry->consumeQ);
+					return result;
+				}
+			}
+
+			entry->state = VMCIQPB_ATTACHED_MEM;
+		} else {
+			entry->state = VMCIQPB_ATTACHED_NO_MEM;
+		}
+	} else if (entry->state == VMCIQPB_CREATED_NO_MEM) {
+		/*
+		 * The host side is attempting to attach to a queue
+		 * pair that doesn't have any memory associated with
+		 * it. This must be a pre NOVMVM vmx that hasn't set
+		 * the page store information yet, or a quiesced VM.
+		 */
+
+		return VMCI_ERROR_UNAVAILABLE;
+	} else {
+		/*
+		 * For non-blocking queue pairs, we cannot rely on
+		 * enqueue/dequeue to map in the pages on the
+		 * host-side, since it may block, so we make an
+		 * attempt here.
+		 */
+
+		if (flags & VMCI_QPFLAG_NONBLOCK) {
+			result =
+				qp_host_map_queues(entry->produceQ,
+						   entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				return result;
+
+			entry->qp.flags |= flags &
+				(VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED);
+		}
+
+		/* The host side has successfully attached to a queue pair. */
+		entry->state = VMCIQPB_ATTACHED_MEM;
+	}
+
+	if (entry->state == VMCIQPB_ATTACHED_MEM) {
+		result =
+			qp_notify_peer(true, entry->qp.handle, contextId,
+				       entry->createId);
+		if (result < VMCI_SUCCESS)
+			pr_warn("Failed to notify peer (ID=0x%x) of " \
+				"attach to queue pair (handle=0x%x:0x%x).",
+				entry->createId, entry->qp.handle.context,
+				entry->qp.handle.resource);
+	}
+
+	entry->attachId = contextId;
+	entry->qp.refCount++;
+	if (wakeupCB) {
+		ASSERT(!entry->wakeupCB);
+		entry->wakeupCB = wakeupCB;
+		entry->clientData = clientData;
+	}
+
+	/*
+	 * When attaching to local queue pairs, the context already has
+	 * an entry tracking the queue pair, so don't add another one.
+	 */
+	if (!isLocal)
+		vmci_ctx_qp_create(context, entry->qp.handle);
+	else
+		ASSERT(vmci_ctx_qp_exists(context, entry->qp.handle));
+
+	if (ent != NULL)
+		*ent = entry;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * QueuePair_Alloc for use when setting up queue pair endpoints
+ * on the host.
+ */
+static int qp_broker_alloc(struct vmci_handle handle,
+			   uint32_t peer,
+			   uint32_t flags,
+			   uint32_t privFlags,
+			   uint64_t produceSize,
+			   uint64_t consumeSize,
+			   struct vmci_qp_page_store *pageStore,
+			   struct vmci_ctx *context,
+			   VMCIEventReleaseCB wakeupCB,
+			   void *clientData,
+			   struct qp_broker_entry **ent,
+			   bool *swap)
+{
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool create;
+	struct qp_broker_entry *entry;
+	bool isLocal = flags & VMCI_QPFLAG_LOCAL;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) ||
+	    (flags & ~VMCI_QP_ALL_FLAGS) || isLocal ||
+	    !(produceSize || consumeSize) ||
+	    !context || contextId == VMCI_INVALID_ID ||
+	    handle.context == VMCI_INVALID_ID) {
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	if (pageStore && !VMCI_QP_PAGESTORE_IS_WELLFORMED(pageStore))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * In the initial argument check, we ensure that non-vmkernel hosts
+	 * are not allowed to create local queue pairs.
+	 */
+
+	ASSERT(!isLocal);
+
+	down(&qpBrokerList.mutex);
+
+	if (!isLocal && vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) already attached to queue " \
+			 "pair (handle=0x%x:0x%x).", contextId,
+			 handle.context, handle.resource);
+		up(&qpBrokerList.mutex);
+		return VMCI_ERROR_ALREADY_EXISTS;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		create = true;
+		result =
+			qp_broker_create(handle, peer, flags, privFlags,
+					 produceSize, consumeSize, pageStore,
+					 context, wakeupCB, clientData, ent);
+	} else {
+		create = false;
+		result =
+			qp_broker_attach(entry, peer, flags, privFlags,
+					 produceSize, consumeSize, pageStore,
+					 context, wakeupCB, clientData, ent);
+	}
+
+	up(&qpBrokerList.mutex);
+
+	if (swap)
+		*swap = (contextId == VMCI_HOST_CONTEXT_ID) &&
+			!(create && isLocal);
+
+	return result;
+}
+
+/*
+ * This function implements the kernel API for allocating a queue
+ * pair.
+ */
+static int qp_alloc_host_work(struct vmci_handle *handle,
+			      struct vmci_queue **produceQ,
+			      uint64_t produceSize,
+			      struct vmci_queue **consumeQ,
+			      uint64_t consumeSize,
+			      uint32_t peer,
+			      uint32_t flags,
+			      uint32_t privFlags,
+			      VMCIEventReleaseCB wakeupCB,
+			      void *clientData)
+{
+	struct vmci_ctx *context;
+	struct qp_broker_entry *entry;
+	int result;
+	bool swap;
+
+	if (VMCI_HANDLE_INVALID(*handle)) {
+		uint32_t resourceID;
+
+		resourceID = vmci_resource_get_id(VMCI_HOST_CONTEXT_ID);
+		if (resourceID == VMCI_INVALID_ID)
+			return VMCI_ERROR_NO_HANDLE;
+
+		*handle = vmci_make_handle(VMCI_HOST_CONTEXT_ID, resourceID);
+	}
+
+	context = vmci_ctx_get(VMCI_HOST_CONTEXT_ID);
+	ASSERT(context);
+
+	entry = NULL;
+	result =
+		qp_broker_alloc(*handle, peer, flags, privFlags,
+				produceSize, consumeSize, NULL, context,
+				wakeupCB, clientData, &entry, &swap);
+	if (result == VMCI_SUCCESS) {
+		if (swap) {
+			/*
+			 * If this is a local queue pair, the attacher
+			 * will swap around produce and consume
+			 * queues.
+			 */
+
+			*produceQ = entry->consumeQ;
+			*consumeQ = entry->produceQ;
+		} else {
+			*produceQ = entry->produceQ;
+			*consumeQ = entry->consumeQ;
+		}
+	} else {
+		*handle = VMCI_INVALID_HANDLE;
+		pr_devel("queue pair broker failed to alloc (result=%d).",
+			 result);
+	}
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Allocates a VMCI QueuePair. Only checks validity of input
+ * arguments. The real work is done in the host or guest
+ * specific function.
+ */
+int vmci_qp_alloc(struct vmci_handle *handle,
+		  struct vmci_queue **produceQ,
+		  uint64_t produceSize,
+		  struct vmci_queue **consumeQ,
+		  uint64_t consumeSize,
+		  uint32_t peer,
+		  uint32_t flags,
+		  uint32_t privFlags,
+		  bool guestEndpoint,
+		  VMCIEventReleaseCB wakeupCB,
+		  void *clientData)
+{
+	if (!handle || !produceQ || !consumeQ || (!produceSize && !consumeSize)
+	    || (flags & ~VMCI_QP_ALL_FLAGS))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (guestEndpoint)
+		return qp_alloc_guest_work(handle, produceQ,
+					   produceSize, consumeQ,
+					   consumeSize, peer,
+					   flags, privFlags);
+	else
+		return qp_alloc_host_work(handle, produceQ,
+					  produceSize, consumeQ,
+					  consumeSize, peer, flags,
+					  privFlags, wakeupCB,
+					  clientData);
+}
+
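+/*
+ * Illustrative sketch (not part of this driver) of how a guest side
+ * caller might use vmci_qp_alloc; the sizes, peer, flags and callbacks
+ * below are made-up values:
+ *
+ *	struct vmci_handle handle = VMCI_INVALID_HANDLE;
+ *	struct vmci_queue *produceQ, *consumeQ;
+ *	int rv;
+ *
+ *	rv = vmci_qp_alloc(&handle, &produceQ, 2 * PAGE_SIZE,
+ *			   &consumeQ, 2 * PAGE_SIZE, VMCI_INVALID_ID,
+ *			   0, VMCI_NO_PRIVILEGE_FLAGS, true, NULL, NULL);
+ *	if (rv < VMCI_SUCCESS)
+ *		return rv;
+ */
+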
+/*
+ * This function implements the host kernel API for detaching from
+ * a queue pair.
+ */
+static int qp_detatch_host_work(struct vmci_handle handle)
+{
+	int result;
+	struct vmci_ctx *context;
+
+	context = vmci_ctx_get(VMCI_HOST_CONTEXT_ID);
+
+	result = vmci_qp_broker_detach(handle, context);
+
+	vmci_ctx_release(context);
+	return result;
+}
+
+/*
+ * Detaches from a VMCI QueuePair. Only checks validity of input argument.
+ * Real work is done in the host or guest specific function.
+ */
+static int qp_detatch(struct vmci_handle handle,
+		      bool guestEndpoint)
+{
+	if (VMCI_HANDLE_INVALID(handle))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	if (guestEndpoint)
+		return qp_detatch_guest_work(handle);
+	else
+		return qp_detatch_host_work(handle);
+}
+
+/*
+ * Initializes the list of QueuePairs.
+ */
+static int qp_list_init(struct qp_list *qpList)
+{
+	INIT_LIST_HEAD(&qpList->head);
+	sema_init(&qpList->mutex, 1);
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Returns the entry from the head of the list. Assumes that the list is
+ * locked.
+ */
+static struct qp_entry *qp_list_get_head(struct qp_list *qpList)
+{
+	if (!list_empty(&qpList->head)) {
+		struct qp_entry *entry =
+			list_first_entry(&qpList->head, struct qp_entry,
+					 listItem);
+		return entry;
+	}
+
+	return NULL;
+}
+
+int __init vmci_qp_broker_init(void)
+{
+	return qp_list_init(&qpBrokerList);
+}
+
+void vmci_qp_broker_exit(void)
+{
+	struct qp_broker_entry *entry;
+
+	down(&qpBrokerList.mutex);
+
+	while ((entry = (struct qp_broker_entry *)
+		qp_list_get_head(&qpBrokerList))) {
+		qp_list_remove_entry(&qpBrokerList, &entry->qp);
+		kfree(entry);
+	}
+
+	up(&qpBrokerList.mutex);
+	INIT_LIST_HEAD(&(qpBrokerList.head));
+}
+
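+/*
+ * Rough life cycle of a hosted queue pair as driven by the user level
+ * component of the hypervisor through the broker entry points below
+ * (a summary of the function comments that follow, not a normative
+ * interface description):
+ *
+ *	vmci_qp_broker_alloc()            create or attach an endpoint
+ *	vmci_qp_broker_set_page_store()   older VMX only: register the UVAs
+ *	vmci_qp_broker_unmap()/_map()     around guest quiesce/unquiesce
+ *	vmci_qp_broker_detach()           tear down one endpoint
+ */
+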
+/*
+ * Requests that a queue pair be allocated with the VMCI queue
+ * pair broker. Allocates a queue pair entry if one does not
+ * exist. Attaches to one if it exists, and retrieves the page
+ * files backing that QueuePair.  The queue pair broker lock is
+ * acquired internally.
+ */
+int vmci_qp_broker_alloc(struct vmci_handle handle,
+			 uint32_t peer,
+			 uint32_t flags,
+			 uint32_t privFlags,
+			 uint64_t produceSize,
+			 uint64_t consumeSize,
+			 struct vmci_qp_page_store *pageStore,
+			 struct vmci_ctx *context)
+{
+	return qp_broker_alloc(handle, peer, flags, privFlags,
+			       produceSize, consumeSize,
+			       pageStore, context, NULL, NULL, NULL, NULL);
+}
+
+/*
+ * VMX'en with versions lower than VMCI_VERSION_NOVMVM use a separate
+ * step to add the UVAs of the VMX mapping of the queue pair. This function
+ * provides backwards compatibility with such VMX'en, and takes care of
+ * registering the page store for a queue pair previously allocated by the
+ * VMX during create or attach. This function will move the queue pair state
+ * either from VMCIQPB_CREATED_NO_MEM to VMCIQPB_CREATED_MEM or from
+ * VMCIQPB_ATTACHED_NO_MEM to VMCIQPB_ATTACHED_MEM. If moving to the
+ * attached state with memory, the queue pair is ready to be used by the
+ * host peer, and an attached event will be generated.
+ *
+ * The queue pair broker lock is acquired internally.
+ *
+ * This function is only used by the hosted platform, since there is no
+ * issue with backwards compatibility for vmkernel.
+ */
+int vmci_qp_broker_set_page_store(struct vmci_handle handle,
+				  uint64_t produceUVA,
+				  uint64_t consumeUVA,
+				  struct vmci_ctx *context)
+{
+	struct qp_broker_entry *entry;
+	int result;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/*
+	 * We only support guest to host queue pairs, so the VMX must
+	 * supply UVAs for the mapped page files.
+	 */
+
+	if (produceUVA == 0 || consumeUVA == 0)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_warn("Context (ID=0x%x) not attached to queue pair " \
+			"(handle=0x%x:0x%x).", contextId, handle.context,
+			handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	/*
+	 * If I'm the owner then I can set the page store.
+	 *
+	 * Or, if a host created the QueuePair and I'm the attached peer
+	 * then I can set the page store.
+	 */
+	if (entry->createId != contextId &&
+	    (entry->createId != VMCI_HOST_CONTEXT_ID ||
+	     entry->attachId != contextId)) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTOWNER;
+		goto out;
+	}
+
+	if (entry->state != VMCIQPB_CREATED_NO_MEM &&
+	    entry->state != VMCIQPB_ATTACHED_NO_MEM) {
+		result = VMCI_ERROR_UNAVAILABLE;
+		goto out;
+	}
+
+	result = qp_host_get_user_memory(produceUVA, consumeUVA,
+					 entry->produceQ, entry->consumeQ);
+	if (result < VMCI_SUCCESS)
+		goto out;
+
+	result = qp_host_map_queues(entry->produceQ, entry->consumeQ);
+	if (result < VMCI_SUCCESS) {
+		qp_host_unregister_user_memory(entry->produceQ,
+					       entry->consumeQ);
+		goto out;
+	}
+
+	if (entry->state == VMCIQPB_CREATED_NO_MEM) {
+		entry->state = VMCIQPB_CREATED_MEM;
+	} else {
+		ASSERT(entry->state == VMCIQPB_ATTACHED_NO_MEM);
+		entry->state = VMCIQPB_ATTACHED_MEM;
+	}
+	entry->vmciPageFiles = true;
+
+	if (entry->state == VMCIQPB_ATTACHED_MEM) {
+		result =
+			qp_notify_peer(true, handle, contextId,
+				       entry->createId);
+		if (result < VMCI_SUCCESS) {
+			pr_warn("Failed to notify peer (ID=0x%x) of " \
+				"attach to queue pair (handle=0x%x:0x%x).",
+				entry->createId, entry->qp.handle.context,
+				entry->qp.handle.resource);
+		}
+	}
+
+	result = VMCI_SUCCESS;
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Resets saved queue headers for the given QP broker
+ * entry. Should be used when guest memory becomes available
+ * again, or the guest detaches.
+ */
+static void qp_reset_saved_headers(struct qp_broker_entry *entry)
+{
+	entry->produceQ->savedHeader = NULL;
+	entry->consumeQ->savedHeader = NULL;
+}
+
+/*
+ * The main entry point for detaching from a queue pair registered with the
+ * queue pair broker. If more than one endpoint is attached to the queue
+ * pair, the first endpoint will mainly decrement a reference count and
+ * generate a notification to its peer. The last endpoint will clean up
+ * the queue pair state registered with the broker.
+ *
+ * When a guest endpoint detaches, it will unmap and unregister the guest
+ * memory backing the queue pair. If the host is still attached, it will
+ * no longer be able to access the queue pair content.
+ *
+ * If the queue pair is already in a state where there is no memory
+ * registered for the queue pair (any *_NO_MEM state), it will transition to
+ * the VMCIQPB_SHUTDOWN_NO_MEM state. This will also happen, if a guest
+ * endpoint is the first of two endpoints to detach. If the host endpoint is
+ * the first out of two to detach, the queue pair will move to the
+ * VMCIQPB_SHUTDOWN_MEM state.
+ */
+int vmci_qp_broker_detach(struct vmci_handle handle,
+			  struct vmci_ctx *context)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	uint32_t peerId;
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID) {
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair (handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	if (contextId == entry->createId) {
+		peerId = entry->attachId;
+		entry->createId = VMCI_INVALID_ID;
+	} else {
+		peerId = entry->createId;
+		entry->attachId = VMCI_INVALID_ID;
+	}
+	entry->qp.refCount--;
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		bool headersMapped;
+
+		ASSERT(!isLocal);
+
+		/*
+		 * Pre NOVMVM vmx'en may detach from a queue pair
+		 * before setting the page store, and in that case
+		 * there is no user memory to detach from. Also, more
+		 * recent VMX'en may detach from a queue pair in the
+		 * quiesced state.
+		 */
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		headersMapped = entry->produceQ->qHeader
+			|| entry->consumeQ->qHeader;
+		if (QPBROKERSTATE_HAS_MEM(entry)) {
+			result = qp_host_unmap_queues(
+				INVALID_VMCI_GUEST_MEM_ID, entry->produceQ,
+				 entry->consumeQ);
+			if (result < VMCI_SUCCESS)
+				pr_warn("Failed to unmap queue headers " \
+					"for queue pair " \
+					"(handle=0x%x:0x%x,result=%d).",
+					handle.context, handle.resource,
+					result);
+
+			qp_host_unregister_user_memory(entry->produceQ,
+						       entry->consumeQ);
+		}
+
+		if (!headersMapped)
+			qp_reset_saved_headers(entry);
+
+		qp_release_queue_mutex(entry->produceQ);
+
+		if (!headersMapped && entry->wakeupCB)
+			entry->wakeupCB(entry->clientData);
+
+	} else {
+		if (entry->wakeupCB) {
+			entry->wakeupCB = NULL;
+			entry->clientData = NULL;
+		}
+	}
+
+	if (entry->qp.refCount == 0) {
+		qp_list_remove_entry(&qpBrokerList, &entry->qp);
+
+		if (isLocal)
+			kfree(entry->localMem);
+
+		qp_cleanup_queue_mutex(entry->produceQ, entry->consumeQ);
+		qp_host_free_queue(entry->produceQ, entry->qp.produceSize);
+		qp_host_free_queue(entry->consumeQ, entry->qp.consumeSize);
+		kfree(entry);
+
+		vmci_ctx_qp_destroy(context, handle);
+	} else {
+		ASSERT(peerId != VMCI_INVALID_ID);
+		qp_notify_peer(false, handle, contextId, peerId);
+		if (contextId == VMCI_HOST_CONTEXT_ID
+		    && QPBROKERSTATE_HAS_MEM(entry)) {
+			entry->state = VMCIQPB_SHUTDOWN_MEM;
+		} else {
+			entry->state = VMCIQPB_SHUTDOWN_NO_MEM;
+		}
+
+		if (!isLocal)
+			vmci_ctx_qp_destroy(context, handle);
+
+	}
+	result = VMCI_SUCCESS;
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Establishes the necessary mappings for a queue pair given a
+ * reference to the queue pair guest memory. This is usually
+ * called when a guest is unquiesced and the VMX is allowed to
+ * map guest memory once again.
+ */
+int vmci_qp_broker_map(struct vmci_handle handle,
+		       struct vmci_ctx *context,
+		       uint64_t guestMem)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair (handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+	result = VMCI_SUCCESS;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		struct vmci_qp_page_store pageStore;
+
+		ASSERT(entry->state == VMCIQPB_CREATED_NO_MEM ||
+		       entry->state == VMCIQPB_SHUTDOWN_NO_MEM ||
+		       entry->state == VMCIQPB_ATTACHED_NO_MEM);
+		ASSERT(!isLocal);
+
+		pageStore.pages = guestMem;
+		pageStore.len = QPE_NUM_PAGES(entry->qp);
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		qp_reset_saved_headers(entry);
+		result =
+			qp_host_register_user_memory(&pageStore,
+						     entry->produceQ,
+						     entry->consumeQ);
+		qp_release_queue_mutex(entry->produceQ);
+		if (result == VMCI_SUCCESS) {
+			/* Move state from *_NO_MEM to *_MEM */
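+			/*
+			 * (This relies on each *_NO_MEM state value
+			 * directly preceding its *_MEM counterpart,
+			 * which the ASSERT below double-checks.)
+			 */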
+
+			entry->state++;
+
+			ASSERT(entry->state == VMCIQPB_CREATED_MEM ||
+			       entry->state == VMCIQPB_SHUTDOWN_MEM ||
+			       entry->state == VMCIQPB_ATTACHED_MEM);
+
+			if (entry->wakeupCB)
+				entry->wakeupCB(entry->clientData);
+		}
+	}
+
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+/*
+ * Saves a snapshot of the queue headers for the given QP broker
+ * entry. Should be used when guest memory is unmapped.
+ * Returns VMCI_SUCCESS on success, or an appropriate error code if the
+ * guest memory can't be accessed.
+ */
+static int qp_save_headers(struct qp_broker_entry *entry)
+{
+	int result;
+
+	if (entry->produceQ->savedHeader != NULL &&
+	    entry->consumeQ->savedHeader != NULL) {
+		/*
+		 *  If the headers have already been saved, we don't need to do
+		 *  it again, and we don't want to map in the headers
+		 *  unnecessarily.
+		 */
+
+		return VMCI_SUCCESS;
+	}
+
+	if (NULL == entry->produceQ->qHeader
+	    || NULL == entry->consumeQ->qHeader) {
+		result = qp_host_map_queues(entry->produceQ, entry->consumeQ);
+		if (result < VMCI_SUCCESS)
+			return result;
+	}
+
+	memcpy(&entry->savedProduceQ, entry->produceQ->qHeader,
+	       sizeof entry->savedProduceQ);
+	entry->produceQ->savedHeader = &entry->savedProduceQ;
+	memcpy(&entry->savedConsumeQ, entry->consumeQ->qHeader,
+	       sizeof entry->savedConsumeQ);
+	entry->consumeQ->savedHeader = &entry->savedConsumeQ;
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Removes all references to the guest memory of a given queue pair, and
+ * will move the queue pair from state *_MEM to *_NO_MEM. It is usually
+ * called when a VM is being quiesced, where access to guest memory should
+ * be avoided.
+ */
+int vmci_qp_broker_unmap(struct vmci_handle handle,
+			 struct vmci_ctx *context,
+			 uint32_t gid)
+{
+	struct qp_broker_entry *entry;
+	const uint32_t contextId = vmci_ctx_get_id(context);
+	bool isLocal = false;
+	int result;
+
+	if (VMCI_HANDLE_INVALID(handle) || !context
+	    || contextId == VMCI_INVALID_ID)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	down(&qpBrokerList.mutex);
+
+	if (!vmci_ctx_qp_exists(context, handle)) {
+		pr_devel("Context (ID=0x%x) not attached to queue pair " \
+			 "(handle=0x%x:0x%x).", contextId,
+			 handle.context, handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	entry = (struct qp_broker_entry *)
+		qp_list_find(&qpBrokerList, handle);
+	if (!entry) {
+		pr_devel("Context (ID=0x%x) reports being attached to " \
+			 "queue pair (handle=0x%x:0x%x) that isn't present " \
+			 "in broker.", contextId, handle.context,
+			 handle.resource);
+		result = VMCI_ERROR_NOT_FOUND;
+		goto out;
+	}
+
+	if (contextId != entry->createId && contextId != entry->attachId) {
+		result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+		goto out;
+	}
+
+	isLocal = entry->qp.flags & VMCI_QPFLAG_LOCAL;
+
+	if (contextId != VMCI_HOST_CONTEXT_ID) {
+		ASSERT(entry->state != VMCIQPB_CREATED_NO_MEM &&
+		       entry->state != VMCIQPB_SHUTDOWN_NO_MEM &&
+		       entry->state != VMCIQPB_ATTACHED_NO_MEM);
+		ASSERT(!isLocal);
+
+		qp_acquire_queue_mutex(entry->produceQ);
+		result = qp_save_headers(entry);
+		if (result < VMCI_SUCCESS)
+			pr_warn("Failed to save queue headers for " \
+				"queue pair (handle=0x%x:0x%x,result=%d).",
+				handle.context, handle.resource, result);
+
+		qp_host_unmap_queues(gid, entry->produceQ, entry->consumeQ);
+
+		/*
+		 * On hosted, when we unmap queue pairs, the VMX will also
+		 * unmap the guest memory, so we invalidate the previously
+		 * registered memory. If the queue pair is mapped again at a
+		 * later point in time, we will need to reregister the user
+		 * memory with a possibly new user VA.
+		 */
+		qp_host_unregister_user_memory(entry->produceQ,
+					       entry->consumeQ);
+
+		/*
+		 * Move state from *_MEM to *_NO_MEM.
+		 */
+		entry->state--;
+
+		qp_release_queue_mutex(entry->produceQ);
+	}
+
+	result = VMCI_SUCCESS;
+
+out:
+	up(&qpBrokerList.mutex);
+	return result;
+}
+
+int __devinit vmci_qp_guest_endpoints_init(void)
+{
+	return qp_list_init(&qpGuestEndpoints);
+}
+
+/*
+ * Destroys all guest queue pair endpoints. If active guest queue
+ * pairs still exist, hypercalls to attempt detach from these
+ * queue pairs will be made. Any failure to detach is silently
+ * ignored.
+ */
+void vmci_qp_guest_endpoints_exit(void)
+{
+	struct qp_guest_endpoint *entry;
+
+	down(&qpGuestEndpoints.mutex);
+
+	while ((entry = (struct qp_guest_endpoint *)
+		qp_list_get_head(&qpGuestEndpoints))) {
+
+		/* Don't make a hypercall for local QueuePairs. */
+		if (!(entry->qp.flags & VMCI_QPFLAG_LOCAL))
+			qp_detatch_hypercall(entry->qp.handle);
+
+		/* We cannot fail the exit, so let's reset refCount. */
+		entry->qp.refCount = 0;
+		qp_list_remove_entry(&qpGuestEndpoints, &entry->qp);
+		qp_guest_endpoint_destroy(entry);
+	}
+
+	up(&qpGuestEndpoints.mutex);
+	INIT_LIST_HEAD(&(qpGuestEndpoints.head));
+}
+
+/*
+ * Helper routine that will lock the queue pair before subsequent
+ * operations.
+ * Note: Non-blocking on the host side is currently only implemented in ESX.
+ * Since non-blocking isn't yet implemented on the host personality we
+ * have no reason to acquire a spin lock, so to avoid the use of an
+ * unnecessary lock we only acquire the mutex if we can block.
+ * Note: It is assumed that QPFLAG_PINNED implies QPFLAG_NONBLOCK.  Therefore
+ * we can use the same locking function for access to both the queue
+ * and the queue headers as it is the same logic.  Assert this behvior.
+ */
+static void qp_lock(const struct vmci_qp *qpair)
+{
+	ASSERT(!QP_PINNED(qpair->flags) ||
+	       (QP_PINNED(qpair->flags) && !CAN_BLOCK(qpair->flags)));
+
+	if (CAN_BLOCK(qpair->flags))
+		qp_acquire_queue_mutex(qpair->produceQ);
+}
+
+/*
+ * Helper routine that unlocks the queue pair after calling
+ * qp_lock.  Respects non-blocking and pinning flags.
+ */
+static void qp_unlock(const struct vmci_qp *qpair)
+{
+	if (CAN_BLOCK(qpair->flags))
+		qp_release_queue_mutex(qpair->produceQ);
+}
+
+/*
+ * The queue headers may not be mapped at all times. If the headers are
+ * currently not mapped, an attempt is made to map them.
+ */
+static int qp_map_queue_headers(struct vmci_queue *produceQ,
+				struct vmci_queue *consumeQ,
+				bool canBlock)
+{
+	int result;
+
+	if (NULL == produceQ->qHeader || NULL == consumeQ->qHeader) {
+		if (canBlock)
+			result = qp_host_map_queues(produceQ, consumeQ);
+		else
+			result = VMCI_ERROR_QUEUEPAIR_NOT_READY;
+
+		if (result < VMCI_SUCCESS)
+			return (produceQ->savedHeader &&
+				consumeQ->savedHeader) ?
+				VMCI_ERROR_QUEUEPAIR_NOT_READY :
+				VMCI_ERROR_QUEUEPAIR_NOTATTACHED;
+	}
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Helper routine that will retrieve the produce and consume
+ * headers of a given queue pair. If the guest memory of the
+ * queue pair is currently not available, the saved queue headers
+ * will be returned, if these are available.
+ */
+static int qp_get_queue_headers(const struct vmci_qp *qpair,
+				struct vmci_queue_header **produceQHeader,
+				struct vmci_queue_header **consumeQHeader)
+{
+	int result;
+
+	result = qp_map_queue_headers(qpair->produceQ, qpair->consumeQ,
+				      CAN_BLOCK(qpair->flags));
+	if (result == VMCI_SUCCESS) {
+		*produceQHeader = qpair->produceQ->qHeader;
+		*consumeQHeader = qpair->consumeQ->qHeader;
+	} else if (qpair->produceQ->savedHeader &&
+		   qpair->consumeQ->savedHeader) {
+		ASSERT(!qpair->guestEndpoint);
+		*produceQHeader = qpair->produceQ->savedHeader;
+		*consumeQHeader = qpair->consumeQ->savedHeader;
+		result = VMCI_SUCCESS;
+	}
+
+	return result;
+}
+
+/*
+ * Callback from VMCI queue pair broker indicating that a queue
+ * pair that was previously not ready, now either is ready or
+ * gone forever.
+ */
+static int qp_wakeup_cb(void *clientData)
+{
+	struct vmci_qp *qpair = (struct vmci_qp *)clientData;
+	ASSERT(qpair);
+
+	qp_lock(qpair);
+	while (qpair->blocked > 0) {
+		qpair->blocked--;
+		wake_up(&qpair->event);
+	}
+	qp_unlock(qpair);
+
+	return VMCI_SUCCESS;
+}
+
+/*
+ * Callback from VMCI_WaitOnEvent releasing the queue pair mutex
+ * protecting the queue pair header state.
+ */
+static int qp_release_mutex_cb(void *clientData)
+{
+	struct vmci_qp *qpair = (struct vmci_qp *)clientData;
+	ASSERT(qpair);
+	qp_unlock(qpair);
+	return 0;
+}
+
+/*
+ * Makes the calling thread wait for the queue pair to become
+ * ready for host side access.  Returns true when thread is
+ * woken up after queue pair state change, false otherwise.
+ */
+static bool qp_wait_for_ready_queue(struct vmci_qp *qpair)
+{
+	if (unlikely(qpair->guestEndpoint))
+		ASSERT(false);
+
+	if (qpair->flags & VMCI_QPFLAG_NONBLOCK)
+		return false;
+
+	qpair->blocked++;
+	vmci_drv_wait_on_event_intr(&qpair->event, qp_release_mutex_cb,
+				    qpair);
+	qp_lock(qpair);
+	return true;
+}
+
+/*
+ * Enqueues a given buffer to the produce queue using the provided
+ * function. As many bytes as possible (space available in the queue)
+ * are enqueued.  Assumes the queue->mutex has been acquired.  Returns
+ * VMCI_ERROR_QUEUEPAIR_NOSPACE if no space was available to enqueue
+ * data, VMCI_ERROR_INVALID_SIZE, if any queue pointer is outside the
+ * queue (as defined by the queue size), VMCI_ERROR_INVALID_ARGS, if
+ * an error occurred when accessing the buffer,
+ * VMCI_ERROR_QUEUEPAIR_NOTATTACHED, if the queue pair pages aren't
+ * available.  Otherwise, the number of bytes written to the queue is
+ * returned.  Updates the tail pointer of the produce queue.
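+ * As a concrete example of the wrap-around branch below: with
+ * produceQSize = 8, tail = 6 and 5 bytes to write, the first copy
+ * places 2 bytes at offset 6, the second the remaining 3 bytes at
+ * offset 0, and the tail then advances to (6 + 5) % 8 = 3.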
+ */
+static ssize_t qp_enqueue_locked(struct vmci_queue *produceQ,
+				 struct vmci_queue *consumeQ,
+				 const uint64_t produceQSize,
+				 const void *buf,
+				 size_t bufSize,
+				 VMCIMemcpyToQueueFunc memcpyToQueue,
+				 bool canBlock)
+{
+	int64_t freeSpace;
+	uint64_t tail;
+	size_t written;
+	ssize_t result;
+
+	result = qp_map_queue_headers(produceQ, consumeQ, canBlock);
+	if (unlikely(result != VMCI_SUCCESS))
+		return result;
+
+	freeSpace = vmci_q_header_free_space(produceQ->qHeader,
+					     consumeQ->qHeader, produceQSize);
+	if (freeSpace == 0)
+		return VMCI_ERROR_QUEUEPAIR_NOSPACE;
+
+	if (freeSpace < VMCI_SUCCESS)
+		return (ssize_t) freeSpace;
+
+	written = (size_t) (freeSpace > bufSize ? bufSize : freeSpace);
+	tail = vmci_q_header_producer_tail(produceQ->qHeader);
+	if (likely(tail + written < produceQSize)) {
+		result = memcpyToQueue(produceQ, tail, buf, 0, written);
+	} else {
+		/* Tail pointer wraps around. */
+
+		const size_t tmp = (size_t) (produceQSize - tail);
+
+		result = memcpyToQueue(produceQ, tail, buf, 0, tmp);
+		if (result >= VMCI_SUCCESS)
+			result = memcpyToQueue(produceQ, 0, buf, tmp,
+					       written - tmp);
+	}
+
+	if (result < VMCI_SUCCESS)
+		return result;
+
+	vmci_q_header_add_producer_tail(produceQ->qHeader, written,
+					produceQSize);
+	return written;
+}
+
+/*
+ * Dequeues data (if available) from the given consume queue. Writes data
+ * to the user provided buffer using the provided function.
+ * Assumes the queue->mutex has been acquired.
+ * Results:
+ * VMCI_ERROR_QUEUEPAIR_NODATA if no data was available to dequeue.
+ * VMCI_ERROR_INVALID_SIZE, if any queue pointer is outside the queue
+ * (as defined by the queue size).
+ * VMCI_ERROR_INVALID_ARGS, if an error occurred when accessing the buffer.
+ * Otherwise the number of bytes dequeued is returned.
+ * Side effects:
+ * Updates the head pointer of the consume queue.
+ */
+static ssize_t qp_dequeue_locked(struct vmci_queue *produceQ,
+				 struct vmci_queue *consumeQ,
+				 const uint64_t consumeQSize,
+				 void *buf,
+				 size_t bufSize,
+				 VMCIMemcpyFromQueueFunc memcpyFromQueue,
+				 bool updateConsumer,
+				 bool canBlock)
+{
+	int64_t bufReady;
+	uint64_t head;
+	size_t read;
+	ssize_t result;
+
+	result = qp_map_queue_headers(produceQ, consumeQ, canBlock);
+	if (unlikely(result != VMCI_SUCCESS))
+		return result;
+
+	bufReady = vmci_q_header_buf_ready(consumeQ->qHeader,
+					   produceQ->qHeader, consumeQSize);
+	if (bufReady == 0)
+		return VMCI_ERROR_QUEUEPAIR_NODATA;
+
+	if (bufReady < VMCI_SUCCESS)
+		return (ssize_t) bufReady;
+
+	read = (size_t) (bufReady > bufSize ? bufSize : bufReady);
+	head = vmci_q_header_consumer_head(produceQ->qHeader);
+	if (likely(head + read < consumeQSize)) {
+		result = memcpyFromQueue(buf, 0, consumeQ, head, read);
+	} else {
+		/* Head pointer wraps around. */
+
+		const size_t tmp = (size_t) (consumeQSize - head);
+
+		result = memcpyFromQueue(buf, 0, consumeQ, head, tmp);
+		if (result >= VMCI_SUCCESS) {
+			result = memcpyFromQueue(buf, tmp, consumeQ, 0,
+						 read - tmp);
+		}
+	}
+
+	if (result < VMCI_SUCCESS)
+		return result;
+
+	if (updateConsumer)
+		vmci_q_header_add_consumer_head(produceQ->qHeader,
+						read, consumeQSize);
+
+	return read;
+}
+
+/**
+ * VMCIQPair_Alloc() - Allocates a queue pair.
+ * @qpair:	Pointer for the new vmci_qp struct.
+ * @handle:	Handle to track the resource.
+ * @produceQSize:	Desired size of the producer queue.
+ * @consumeQSize:	Desired size of the consumer queue.
+ * @peer:	ContextID of the peer.
+ * @flags:	VMCI flags.
+ * @privFlags:	VMCI privilege flags.
+ *
+ * This is the client interface for allocating the memory for a
+ * vmci_qp structure and then attaching to the underlying
+ * queue.  If an error occurs allocating the memory for the
+ * vmci_qp structure no attempt is made to attach.  If an
+ * error occurs attaching, then the structure is freed.
+ */
+int VMCIQPair_Alloc(struct vmci_qp **qpair,
+		    struct vmci_handle *handle,
+		    uint64_t produceQSize,
+		    uint64_t consumeQSize,
+		    uint32_t peer,
+		    uint32_t flags,
+		    uint32_t privFlags)
+{
+	struct vmci_qp *myQPair;
+	int retval;
+	struct vmci_handle src = VMCI_INVALID_HANDLE;
+	struct vmci_handle dst = vmci_make_handle(peer, VMCI_INVALID_ID);
+	enum vmci_route route;
+	VMCIEventReleaseCB wakeupCB;
+	void *clientData;
+
+	/*
+	 * Restrict the size of a queuepair.  The device already
+	 * enforces a limit on the total amount of memory that can be
+	 * allocated to queuepairs for a guest.  However, we try to
+	 * allocate this memory before we make the queuepair
+	 * allocation hypercall.  On Linux, we allocate each page
+	 * separately, which means rather than fail, the guest will
+	 * thrash while it tries to allocate, and will become
+	 * increasingly unresponsive to the point where it appears to
+	 * be hung.  So we place a limit on the size of an individual
+	 * queuepair here, and leave the device to enforce the
+	 * restriction on total queuepair memory.  (Note that this
+	 * doesn't prevent all cases; a user with only this much
+	 * physical memory could still get into trouble.)  The error
+	 * used by the device is NO_RESOURCES, so use that here too.
+	 */
+
+	if (produceQSize + consumeQSize < max(produceQSize, consumeQSize) ||
+	    produceQSize + consumeQSize > VMCI_MAX_GUEST_QP_MEMORY)
+		return VMCI_ERROR_NO_RESOURCES;
+
+	retval = vmci_route(&src, &dst, false, &route);
+	if (retval < VMCI_SUCCESS)
+		route = vmci_guest_code_active() ?
+			VMCI_ROUTE_AS_GUEST : VMCI_ROUTE_AS_HOST;
+
+	/* If NONBLOCK or PINNED is set, we better be the guest personality. */
+	if ((!CAN_BLOCK(flags) || QP_PINNED(flags)) &&
+	    VMCI_ROUTE_AS_GUEST != route) {
+		pr_devel("Not guest personality w/ NONBLOCK OR PINNED set");
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	/*
+	 * Limit the size of pinned QPs and check sanity.
+	 *
+	 * Pinned pages implies non-blocking mode.  Mutexes aren't acquired
+	 * when the NONBLOCK flag is set in qpair code; and also should not be
+	 * acquired when the PINNED flagged is set.  Since pinning pages
+	 * implies we want speed, it makes no sense not to have NONBLOCK
+	 * set if PINNED is set.  Hence enforce this implication.
+	 */
+	if (QP_PINNED(flags)) {
+		if (CAN_BLOCK(flags)) {
+			pr_err("Attempted to enable pinning w/o non-blocking");
+			return VMCI_ERROR_INVALID_ARGS;
+		}
+
+		if (produceQSize + consumeQSize > VMCI_MAX_PINNED_QP_MEMORY)
+			return VMCI_ERROR_NO_RESOURCES;
+	}
+
+	myQPair = kzalloc(sizeof *myQPair, GFP_KERNEL);
+	if (!myQPair)
+		return VMCI_ERROR_NO_MEM;
+
+	myQPair->produceQSize = produceQSize;
+	myQPair->consumeQSize = consumeQSize;
+	myQPair->peer = peer;
+	myQPair->flags = flags;
+	myQPair->privFlags = privFlags;
+
+	wakeupCB = clientData = NULL;
+	if (VMCI_ROUTE_AS_HOST == route) {
+		myQPair->guestEndpoint = false;
+		if (!(flags & VMCI_QPFLAG_LOCAL)) {
+			myQPair->blocked = 0;
+			init_waitqueue_head(&myQPair->event);
+			wakeupCB = qp_wakeup_cb;
+			clientData = (void *)myQPair;
+		}
+	} else {
+		myQPair->guestEndpoint = true;
+	}
+
+	retval = vmci_qp_alloc(handle,
+			       &myQPair->produceQ,
+			       myQPair->produceQSize,
+			       &myQPair->consumeQ,
+			       myQPair->consumeQSize,
+			       myQPair->peer,
+			       myQPair->flags,
+			       myQPair->privFlags,
+			       myQPair->guestEndpoint,
+			       wakeupCB, clientData);
+
+	if (retval < VMCI_SUCCESS) {
+		kfree(myQPair);
+		return retval;
+	}
+
+	*qpair = myQPair;
+	myQPair->handle = *handle;
+
+	return retval;
+}
+EXPORT_SYMBOL(VMCIQPair_Alloc);
+
+/**
+ * VMCIQPair_Detach() - Detaches the client from a queue pair.
+ * @qpair:	Reference of a pointer to the qpair struct.
+ *
+ * This is the client interface for detaching from a VMCIQPair.
+ * Note that this routine will free the memory allocated for the
+ * vmci_qp structure too.
+ */
+int VMCIQPair_Detach(struct vmci_qp **qpair)
+{
+	int result;
+	struct vmci_qp *oldQPair;
+
+	if (!qpair || !(*qpair))
+		return VMCI_ERROR_INVALID_ARGS;
+
+	oldQPair = *qpair;
+	result = qp_detatch(oldQPair->handle, oldQPair->guestEndpoint);
+
+	/*
+	 * The guest can fail to detach for a number of reasons, and
+	 * if it does so, it will cleanup the entry (if there is one).
+	 * The host can fail too, but it won't cleanup the entry
+	 * immediately, it will do that later when the context is
+	 * freed.  Either way, we need to release the qpair struct
+	 * here; there isn't much the caller can do, and we don't want
+	 * to leak.
+	 */
+
+	memset(oldQPair, 0, sizeof *oldQPair);
+	oldQPair->handle = VMCI_INVALID_HANDLE;
+	oldQPair->peer = VMCI_INVALID_ID;
+	kfree(oldQPair);
+	*qpair = NULL;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Detach);
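
To make the call pattern concrete, a minimal sketch of a kernel client creating
and tearing down a queue pair follows; the peer context ID, the queue sizes and
the assumption that the VMCIQPair_* prototypes live in linux/vmw_vmci_api.h are
illustrative only, not taken from this patch:

#include <linux/types.h>
#include <linux/vmw_vmci_defs.h>
#include <linux/vmw_vmci_api.h>	/* assumed home of the VMCIQPair_* prototypes */

static int example_qp_lifecycle(uint32_t peer_cid)
{
	struct vmci_qp *qpair;
	struct vmci_handle handle = VMCI_INVALID_HANDLE;
	int err;

	/* 4 KiB produce and consume queues; flags and privFlags left at 0. */
	err = VMCIQPair_Alloc(&qpair, &handle, 4096, 4096, peer_cid, 0, 0);
	if (err < VMCI_SUCCESS)
		return err;

	/* ... VMCIQPair_Enqueue()/VMCIQPair_Dequeue() traffic goes here ... */

	/* Detach also frees the vmci_qp allocated by VMCIQPair_Alloc(). */
	return VMCIQPair_Detach(&qpair);
}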
+
+/**
+ * VMCIQPair_GetProduceIndexes() - Retrieves the indexes of the producer.
+ * @qpair:	Pointer to the queue pair struct.
+ * @producerTail:	Reference used for storing producer tail index.
+ * @consumerHead:	Reference used for storing the consumer head index.
+ *
+ * This is the client interface for getting the current indexes of the
+ * QPair from the point of view of the caller as the producer.
+ */
+int VMCIQPair_GetProduceIndexes(const struct vmci_qp *qpair,
+				uint64_t *producerTail,
+				uint64_t *consumerHead)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS)
+		vmci_q_header_get_pointers(produceQHeader, consumeQHeader,
+					   producerTail, consumerHead);
+	qp_unlock(qpair);
+
+	if (result == VMCI_SUCCESS &&
+	    ((producerTail && *producerTail >= qpair->produceQSize) ||
+	     (consumerHead && *consumerHead >= qpair->produceQSize)))
+		return VMCI_ERROR_INVALID_SIZE;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_GetProduceIndexes);
+
+/**
+ * VMCIQPair_GetConsumeIndexes() - Retrieves the indexes of the consumer.
+ * @qpair:	Pointer to the queue pair struct.
+ * @consumerTail:	Reference used for storing consumer tail index.
+ * @producerHead:	Reference used for storing the producer head index.
+ *
+ * This is the client interface for getting the current indexes of the
+ * QPair from the point of view of the caller as the consumer.
+ */
+int VMCIQPair_GetConsumeIndexes(const struct vmci_qp *qpair,
+				uint64_t *consumerTail,
+				uint64_t *producerHead)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS)
+		vmci_q_header_get_pointers(consumeQHeader, produceQHeader,
+					   consumerTail, producerHead);
+	qp_unlock(qpair);
+
+	if (result == VMCI_SUCCESS &&
+	    ((consumerTail && *consumerTail >= qpair->consumeQSize) ||
+	     (producerHead && *producerHead >= qpair->consumeQSize)))
+		return VMCI_ERROR_INVALID_SIZE;
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_GetConsumeIndexes);
+
+/**
+ * VMCIQPair_ProduceFreeSpace() - Retrieves free space in producer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of free
+ * space in the QPair from the point of view of the caller as
+ * the producer, which is the common case.  Returns < 0 on error,
+ * otherwise the number of available bytes into which data can be enqueued.
+ */
+int64_t VMCIQPair_ProduceFreeSpace(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_free_space(produceQHeader,
+						  consumeQHeader,
+						  qpair->produceQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ProduceFreeSpace);
+
+/**
+ * VMCIQPair_ConsumeFreeSpace() - Retrieves free space in consumer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of free
+ * space in the QPair from the point of view of the caller as
+ * the consumer, which is not the common case.  Returns < 0 on error,
+ * otherwise the number of available bytes into which data can be enqueued.
+ */
+int64_t VMCIQPair_ConsumeFreeSpace(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_free_space(consumeQHeader,
+						  produceQHeader,
+						  qpair->consumeQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ConsumeFreeSpace);
+
+/**
+ * VMCIQPair_ProduceBufReady() - Gets bytes ready to read from producer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of
+ * enqueued data in the QPair from the point of view of the
+ * caller as the producer, which is not the common case.  Returns < 0
+ * on error, otherwise the number of bytes available to be read.
+ */
+int64_t VMCIQPair_ProduceBufReady(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_buf_ready(produceQHeader,
+						 consumeQHeader,
+						 qpair->produceQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ProduceBufReady);
+
+/**
+ * VMCIQPair_ConsumeBufReady() - Gets bytes ready to read from consumer queue.
+ * @qpair:	Pointer to the queue pair struct.
+ *
+ * This is the client interface for getting the amount of
+ * enqueued data in the QPair from the point of view of the
+ * caller as the consumer, which is the normal case.  Returns < 0
+ * on error, otherwise the number of bytes available to be read.
+ */
+int64_t VMCIQPair_ConsumeBufReady(const struct vmci_qp *qpair)
+{
+	struct vmci_queue_header *produceQHeader;
+	struct vmci_queue_header *consumeQHeader;
+	int64_t result;
+
+	if (!qpair)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+	result = qp_get_queue_headers(qpair, &produceQHeader, &consumeQHeader);
+	if (result == VMCI_SUCCESS) {
+		result = vmci_q_header_buf_ready(consumeQHeader,
+						 produceQHeader,
+						 qpair->consumeQSize);
+	} else {
+		result = 0;
+	}
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_ConsumeBufReady);
+
+/**
+ * VMCIQPair_Enqueue() - Throw data on the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer containing data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for enqueueing data into the queue.
+ * Returns number of bytes enqueued or < 0 on error.
+ */
+ssize_t VMCIQPair_Enqueue(struct vmci_qp *qpair,
+			  const void *buf,
+			  size_t bufSize,
+			  int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_enqueue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->produceQSize,
+					   buf, bufSize,
+					   qp_memcpy_to_queue,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Enqueue);
+
+/**
+ * VMCIQPair_Dequeue() - Get data from the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer for the data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for dequeueing data from the queue.
+ * Returns number of bytes dequeued or < 0 on error.
+ */
+ssize_t VMCIQPair_Dequeue(struct vmci_qp *qpair,
+			  void *buf,
+			  size_t bufSize,
+			  int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   buf, bufSize,
+					   qp_memcpy_from_queue, true,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Dequeue);
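
Reusing the assumed headers from the allocation sketch earlier, a hedged sketch
of the producer/consumer pattern on an attached pair looks roughly as follows;
each side calls these on its own vmci_qp, and the helpers and error handling
are made up for illustration:

/* Producer side: push a small payload; 0 is passed for the unused bufType. */
static ssize_t example_send(struct vmci_qp *qpair, const void *msg, size_t len)
{
	ssize_t n = VMCIQPair_Enqueue(qpair, msg, len, 0);

	if (n == VMCI_ERROR_QUEUEPAIR_NOSPACE)
		return 0;	/* queue full; caller may retry later */
	return n;		/* bytes enqueued, or another negative error */
}

/* Consumer side: drain whatever is available into 'buf'. */
static ssize_t example_recv(struct vmci_qp *qpair, void *buf, size_t len)
{
	ssize_t n = VMCIQPair_Dequeue(qpair, buf, len, 0);

	if (n == VMCI_ERROR_QUEUEPAIR_NODATA)
		return 0;	/* nothing to read yet */
	return n;		/* bytes dequeued, or another negative error */
}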
+
+/**
+ * VMCIQPair_Peek() - Peek at the data in the queue.
+ * @qpair:	Pointer to the queue pair struct.
+ * @buf:	Pointer to buffer for the data
+ * @bufSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused on Linux).
+ *
+ * This is the client interface for peeking into a queue.  (I.e.,
+ * copy data from the queue without updating the head pointer.)
+ * Returns number of bytes peeked or < 0 on error.
+ */
+ssize_t VMCIQPair_Peek(struct vmci_qp *qpair,
+		       void *buf,
+		       size_t bufSize,
+		       int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !buf)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   buf, bufSize,
+					   qp_memcpy_from_queue, false,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_Peek);
+
+/**
+ * VMCIQPair_EnqueueV() - Throw data on the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer containing data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for enqueueing data into the queue.
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes enqueued or < 0 on error.
+ */
+ssize_t VMCIQPair_EnqueueV(struct vmci_qp *qpair,
+			   void *iov,
+			   size_t iovSize,
+			   int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_enqueue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->produceQSize,
+					   iov, iovSize,
+					   qp_memcpy_to_queue_iov,
+					   CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_EnqueueV);
+
+
+/**
+ * VMCIQPair_DequeueV() - Get data from the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer for the data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused).
+ *
+ * This is the client interface for dequeueing data from the queue.
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes dequeued or < 0 on error.
+ */
+ssize_t VMCIQPair_DequeueV(struct vmci_qp *qpair,
+			   void *iov,
+			   size_t iovSize,
+			   int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   iov, iovSize,
+					   qp_memcpy_from_queue_iov,
+					   true, CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_DequeueV);
+
+/**
+ * VMCIQPair_PeekV() - Peek at the data in the queue using iov.
+ * @qpair:	Pointer to the queue pair struct.
+ * @iov:	Pointer to buffer for the data
+ * @iovSize:	Length of buffer.
+ * @bufType:	Buffer type (Unused on Linux).
+ *
+ * This is the client interface for peeking into a queue.  (I.e.,
+ * copy data from the queue without updating the head pointer.)
+ * This function uses IO vectors to handle the work. Returns number
+ * of bytes peeked or < 0 on error.
+ */
+ssize_t VMCIQPair_PeekV(struct vmci_qp *qpair,
+			void *iov,
+			size_t iovSize,
+			int bufType)
+{
+	ssize_t result;
+
+	if (!qpair || !iov)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	qp_lock(qpair);
+
+	do {
+		result = qp_dequeue_locked(qpair->produceQ,
+					   qpair->consumeQ,
+					   qpair->consumeQSize,
+					   iov, iovSize,
+					   qp_memcpy_from_queue_iov,
+					   false, CAN_BLOCK(qpair->flags));
+
+		if (result == VMCI_ERROR_QUEUEPAIR_NOT_READY &&
+		    !qp_wait_for_ready_queue(qpair))
+			result = VMCI_ERROR_WOULD_BLOCK;
+
+	} while (result == VMCI_ERROR_QUEUEPAIR_NOT_READY);
+
+	qp_unlock(qpair);
+	return result;
+}
+EXPORT_SYMBOL(VMCIQPair_PeekV);
diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.h b/drivers/misc/vmw_vmci/vmci_queue_pair.h
new file mode 100644
index 0000000..b4f39e4
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.h
@@ -0,0 +1,182 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_QUEUE_PAIR_H_
+#define _VMCI_QUEUE_PAIR_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_context.h"
+
+/* Callback needed for correctly waiting on events. */
+typedef int (*VMCIEventReleaseCB) (void *clientData);
+
+/* Guest device port I/O. */
+struct PPNSet {
+	uint64_t numProducePages;
+	uint64_t numConsumePages;
+	uint32_t *producePPNs;
+	uint32_t *consumePPNs;
+	bool initialized;
+};
+
+
+/* VMCIQueuePairAllocInfo */
+struct vmci_qp_alloc_info {
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t ppnVA;		/* Start VA of queue pair PPNs. */
+	uint64_t numPPNs;
+	int32_t result;
+	uint32_t version;
+};
+
+/* VMCIQueuePairSetVAInfo */
+struct vmci_qp_set_va_info {
+	struct vmci_handle handle;
+	uint64_t va;		/* Start VA of queue pair PPNs. */
+	uint64_t numPPNs;
+	uint32_t version;
+	int32_t result;
+};
+
+/*
+ * For backwards compatibility, here is a version of the
+ * VMCIQueuePairPageFileInfo before support for host end-points was added.
+ * Note that the current version of that structure requires VMX to
+ * pass down the VA of the mapped file.  Before host support was added
+ * there was nothing of the sort.  So, when the driver sees the ioctl
+ * with a parameter that is the sizeof
+ * VMCIQueuePairPageFileInfo_NoHostQP then it can infer that the version
+ * of VMX running can't attach to host end points because it doesn't
+ * provide the VA of the mapped files.
+ *
+ * The Linux driver doesn't get an indication of the size of the
+ * structure passed down from user space.  So, to fix a long standing
+ * but unfiled bug, the _pad field has been renamed to version.
+ * Existing versions of VMX always initialize the PageFileInfo
+ * structure so that _pad, er, version is set to 0.
+ *
+ * A version value of 1 indicates that the size of the structure has
+ * been increased to include two UVAs: produceUVA and consumeUVA.
+ * These UVAs refer to the mmap()'d files backing the queue contents.
+ *
+ * In addition, if when VMX is sending down the
+ * VMCIQueuePairPageFileInfo structure it gets an error then it will
+ * try again with the _NoHostQP version of the file to see if an older
+ * VMCI kernel module is running.
+ */
+
+/* VMCIQueuePairPageFileInfo */
+struct vmci_qp_page_file_info {
+	struct vmci_handle handle;
+	uint64_t producePageFile;	/* User VA. */
+	uint64_t consumePageFile;	/* User VA. */
+	uint64_t producePageFileSize;	/* Size of the file name array. */
+	uint64_t consumePageFileSize;	/* Size of the file name array. */
+	int32_t result;
+	uint32_t version;	/* Was _pad. */
+	uint64_t produceVA;	/* User VA of the mapped file. */
+	uint64_t consumeVA;	/* User VA of the mapped file. */
+};
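
As a hedged sketch of the size-based inference described in the comment above:
the _no_host_qp structure name and the helper below are assumptions made for
illustration only; the point is simply that the older layout lacks the two UVA
fields, so its size identifies a VMX that cannot attach to host end-points.

/* Hypothetical pre-host-support layout (same fields, minus the UVAs). */
struct vmci_qp_page_file_info_no_host_qp {
	struct vmci_handle handle;
	uint64_t producePageFile;
	uint64_t consumePageFile;
	uint64_t producePageFileSize;
	uint64_t consumePageFileSize;
	int32_t result;
	uint32_t version;	/* old VMX always leaves this 0 (was _pad) */
};

/* Sketch of how an ioctl handler might tell the two layouts apart by size. */
static bool vmx_can_attach_to_host_qp(size_t ioctl_param_size)
{
	return ioctl_param_size >
	       sizeof(struct vmci_qp_page_file_info_no_host_qp);
}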
+
+/* VMCIQueuePairDetachInfo */
+struct vmci_qp_dtch_info {
+	struct vmci_handle handle;
+	int32_t result;
+	uint32_t _pad;
+};
+
+/*
+ * struct vmci_qp_page_store describes how the memory of a given queue pair
+ * is backed. When the queue pair is between the host and a guest, the
+ * page store consists of references to the guest pages. On vmkernel,
+ * this is a list of PPNs, and on hosted, it is a user VA where the
+ * queue pair is mapped into the VMX address space.
+ */
+struct vmci_qp_page_store {
+	/* Reference to pages backing the queue pair. */
+	uint64_t pages;
+	/* Length of pageList/virtual address range (in pages). */
+	uint32_t len;
+};
+
+/*
+ * This data type contains the information about a queue.
+ * There are two queues (hence, queue pairs) per transaction model between a
+ * pair of end points, A & B.  One queue is used by end point A to transmit
+ * commands and responses to B.  The other queue is used by B to transmit
+ * commands and responses.
+ *
+ * struct vmci_queue_kern_if is a per-OS defined Queue structure.  It contains
+ * either a direct pointer to the linear address of the buffer contents or a
+ * pointer to structures which help the OS locate those data pages.  See
+ * vmciKernelIf.c for each platform for its definition.
+ */
+struct vmci_queue {
+	struct vmci_queue_header *qHeader;
+	struct vmci_queue_header *savedHeader;
+	struct vmci_queue_kern_if *kernelIf;
+};
+
+/*
+ * Utility function that checks whether the fields of the page
+ * store contain valid values.
+ * Result:
+ * true if the page store is well-formed, false otherwise.
+ */
+static inline bool
+VMCI_QP_PAGESTORE_IS_WELLFORMED(struct vmci_qp_page_store *pageStore)
+{
+	return pageStore->len >= 2;
+}
+
+
+
+int vmci_qp_broker_init(void);
+void vmci_qp_broker_exit(void);
+int vmci_qp_broker_alloc(struct vmci_handle handle, uint32_t peer,
+			 uint32_t flags, uint32_t privFlags,
+			 uint64_t produceSize, uint64_t consumeSize,
+			 struct vmci_qp_page_store *pageStore,
+			 struct vmci_ctx *context);
+int vmci_qp_broker_set_page_store(struct vmci_handle handle,
+				  uint64_t produceUVA, uint64_t consumeUVA,
+				  struct vmci_ctx *context);
+int vmci_qp_broker_detach(struct vmci_handle handle,
+			  struct vmci_ctx *context);
+
+int vmci_qp_guest_endpoints_init(void);
+void vmci_qp_guest_endpoints_exit(void);
+
+int vmci_qp_alloc(struct vmci_handle *handle,
+		  struct vmci_queue **produceQ, uint64_t produceSize,
+		  struct vmci_queue **consumeQ, uint64_t consumeSize,
+		  uint32_t peer, uint32_t flags, uint32_t privFlags,
+		  bool guestEndpoint, VMCIEventReleaseCB wakeupCB,
+		  void *clientData);
+int vmci_qp_broker_map(struct vmci_handle handle,
+		       struct vmci_ctx *context, uint64_t guestMem);
+int vmci_qp_broker_unmap(struct vmci_handle handle,
+			 struct vmci_ctx *context, uint32_t gid);
+
+#endif /* _VMCI_QUEUE_PAIR_H_ */
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 09/11] Apply VMCI resource code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Tracks all used resources within the vmci code.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_resource.c |  194 +++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_resource.h |   62 +++++++++++
 2 files changed, 256 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_resource.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_resource.h

diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c
new file mode 100644
index 0000000..03d1f44
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_resource.c
@@ -0,0 +1,194 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_hash_table.h"
+#include "vmci_resource.h"
+#include "vmci_driver.h"
+
+/* 0 through VMCI_RESERVED_RESOURCE_ID_MAX are reserved. */
+static uint32_t resourceID = VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+static spinlock_t resourceIdLock;
+static struct vmci_hash_table *resourceTable;
+
+/*
+ * Initializes the VMCI Resource Access Control API. Creates a hashtable
+ * to hold all resources, and registers vectors and callbacks for
+ * hypercalls.
+ */
+int __init vmci_resource_init(void)
+{
+	spin_lock_init(&resourceIdLock);
+
+	resourceTable = vmci_hash_create(128);
+	if (resourceTable == NULL) {
+		pr_warn("Failed creating a resource hash table.");
+		return VMCI_ERROR_NO_MEM;
+	}
+
+	return VMCI_SUCCESS;
+}
+
+void vmci_resource_exit(void)
+{
+	if (resourceTable)
+		vmci_hash_destroy(resourceTable);
+}
+
+/*
+ * Returns a unique resource ID.  IDs 0 through
+ * VMCI_RESERVED_RESOURCE_ID_MAX are reserved, so allocation starts at
+ * VMCI_RESERVED_RESOURCE_ID_MAX + 1.  Returns VMCI_INVALID_ID on failure.
+ */
+uint32_t vmci_resource_get_id(uint32_t contextID)
+{
+	uint32_t oldRID = resourceID;
+	uint32_t currentRID;
+	bool foundRID = false;
+
+	/*
+	 * Generate a unique resource ID.  Keep on trying until we wrap around
+	 * in the RID space.
+	 */
+	ASSERT(oldRID > VMCI_RESERVED_RESOURCE_ID_MAX);
+
+	do {
+		struct vmci_handle handle;
+
+		spin_lock(&resourceIdLock);
+		currentRID = resourceID;
+		handle = vmci_make_handle(contextID, currentRID);
+		resourceID++;
+		if (unlikely(resourceID == VMCI_INVALID_ID)) {
+			/* Skip the reserved rids. */
+
+			resourceID = VMCI_RESERVED_RESOURCE_ID_MAX + 1;
+		}
+		spin_unlock(&resourceIdLock);
+		foundRID = !vmci_hash_exists(resourceTable, handle);
+	} while (!foundRID && resourceID != oldRID);
+
+	return (unlikely(!foundRID)) ? VMCI_INVALID_ID : currentRID;
+}
+
+int vmci_resource_add(struct vmci_resource *resource,
+		      enum vmci_resource_type resourceType,
+		      struct vmci_handle resourceHandle,
+		      VMCIResourceFreeCB containerFreeCB,
+		      void *containerObject)
+{
+	int result;
+
+	ASSERT(resource);
+
+	if (VMCI_HANDLE_EQUAL(resourceHandle, VMCI_INVALID_HANDLE)) {
+		pr_devel("Invalid argument resource (handle=0x%x:0x%x)",
+			 resourceHandle.context, resourceHandle.resource);
+		return VMCI_ERROR_INVALID_ARGS;
+	}
+
+	vmci_hash_init_entry(&resource->hashEntry, resourceHandle);
+	resource->type = resourceType;
+	resource->containerFreeCB = containerFreeCB;
+	resource->containerObject = containerObject;
+
+	/* Add resource to hashtable. */
+	result = vmci_hash_add(resourceTable, &resource->hashEntry);
+	if (result != VMCI_SUCCESS) {
+		pr_devel("Failed to add entry to hash table " \
+			 "(result=%d).", result);
+		return result;
+	}
+
+	return result;
+}
+
+void vmci_resource_remove(struct vmci_handle resourceHandle,
+			  enum vmci_resource_type resourceType)
+{
+	struct vmci_resource *resource =
+		vmci_resource_get(resourceHandle, resourceType);
+
+	if (resource == NULL)
+		return;
+
+	/* Remove resource from hashtable. */
+	vmci_hash_remove(resourceTable, &resource->hashEntry);
+
+	vmci_resource_release(resource);
+	/* resource could be freed by now. */
+}
+
+struct vmci_resource *vmci_resource_get(struct vmci_handle resourceHandle,
+					enum vmci_resource_type resourceType)
+{
+	struct vmci_resource *resource;
+	struct vmci_hash_entry *entry =
+		vmci_hash_get(resourceTable, resourceHandle);
+
+	if (entry == NULL)
+		return NULL;
+
+	resource = container_of(entry, struct vmci_resource, hashEntry);
+	if (resourceType == VMCI_RESOURCE_TYPE_ANY ||
+	    resource->type == resourceType)
+		return resource;
+
+	vmci_hash_release(resourceTable, entry);
+	return NULL;
+}
+
+/*
+ * Hold the given resource.  This will hold the hashtable entry.  This
+ * is like doing a Get() but without having to lookup the resource by
+ * handle.
+ */
+void vmci_resource_hold(struct vmci_resource *resource)
+{
+	ASSERT(resource);
+	vmci_hash_hold(resourceTable, &resource->hashEntry);
+}
+
+/*
+ * The resource's containerFreeCB will be called when the last reference
+ * is released.
+ */
+int vmci_resource_release(struct vmci_resource *resource)
+{
+	int result;
+
+	ASSERT(resource);
+
+	result = vmci_hash_release(resourceTable, &resource->hashEntry);
+	if (result == VMCI_SUCCESS_ENTRY_DEAD && resource->containerFreeCB)
+		resource->containerFreeCB(resource->containerObject);
+
+	/*
+	 * We propagate the information back to caller in case it wants to know
+	 * whether entry was freed.
+	 */
+	return result;
+}
+
+struct vmci_handle vmci_resource_handle(struct vmci_resource *resource)
+{
+	ASSERT(resource);
+	return resource->hashEntry.handle;
+}
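
A hedged sketch of the intended registration/lookup/teardown flow for this API
follows; the doorbell container type, its fields, the context ID parameter and
the header locations are made up for illustration, not taken from this patch:

#include <linux/slab.h>
#include <linux/vmw_vmci_defs.h>

#include "vmci_resource.h"

struct example_doorbell {
	struct vmci_resource resource;	/* embedded, tracked by the table */
	/* ... client state ... */
};

/* Runs once the last reference is released; receives the containerObject. */
static void example_doorbell_free(void *container)
{
	kfree(container);
}

static int example_register(uint32_t context_id)
{
	struct example_doorbell *db = kzalloc(sizeof(*db), GFP_KERNEL);
	struct vmci_handle h;
	struct vmci_resource *r;
	int err;

	if (!db)
		return VMCI_ERROR_NO_MEM;

	h = vmci_make_handle(context_id, vmci_resource_get_id(context_id));
	err = vmci_resource_add(&db->resource, VMCI_RESOURCE_TYPE_DOORBELL,
				h, example_doorbell_free, db);
	if (err != VMCI_SUCCESS) {
		kfree(db);
		return err;
	}

	/* A lookup takes a reference that must be paired with a release. */
	r = vmci_resource_get(h, VMCI_RESOURCE_TYPE_DOORBELL);
	if (r)
		vmci_resource_release(r);

	/* Removal unlinks and releases; the free CB runs on the last release. */
	vmci_resource_remove(h, VMCI_RESOURCE_TYPE_DOORBELL);
	return VMCI_SUCCESS;
}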
diff --git a/drivers/misc/vmw_vmci/vmci_resource.h b/drivers/misc/vmw_vmci/vmci_resource.h
new file mode 100644
index 0000000..81a4254
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_resource.h
@@ -0,0 +1,62 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_RESOURCE_H_
+#define _VMCI_RESOURCE_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_context.h"
+#include "vmci_hash_table.h"
+
+typedef void (*VMCIResourceFreeCB) (void *resource);
+
+enum vmci_resource_type {
+	VMCI_RESOURCE_TYPE_ANY,
+	VMCI_RESOURCE_TYPE_API,
+	VMCI_RESOURCE_TYPE_GROUP,
+	VMCI_RESOURCE_TYPE_DATAGRAM,
+	VMCI_RESOURCE_TYPE_DOORBELL,
+};
+
+struct vmci_resource {
+	struct vmci_hash_entry hashEntry;
+	enum vmci_resource_type type;
+	/* Callback to free container object when refCount is 0. */
+	VMCIResourceFreeCB containerFreeCB;
+	void *containerObject;	/* Container object reference. */
+};
+
+int vmci_resource_init(void);
+void vmci_resource_exit(void);
+uint32_t vmci_resource_get_id(uint32_t contextID);
+int vmci_resource_add(struct vmci_resource *resource,
+		      enum vmci_resource_type resourceType,
+		      struct vmci_handle resourceHandle,
+		      VMCIResourceFreeCB containerFreeCB,
+		      void *containerObject);
+void vmci_resource_remove(struct vmci_handle resourceHandle,
+			  enum vmci_resource_type resourceType);
+struct vmci_resource *vmci_resource_get(struct vmci_handle resourceHandle,
+					enum vmci_resource_type resourceType);
+void vmci_resource_hold(struct vmci_resource *resource);
+int vmci_resource_release(struct vmci_resource *resource);
+struct vmci_handle vmci_resource_handle(struct vmci_resource *resource);
+
+#endif /* _VMCI_RESOURCE_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 10/11] Apply vmci routing code
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

This code is responsible for routing between various hosts/guests as
well as routing in nested scenarios.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/vmw_vmci/vmci_route.c |  241 ++++++++++++++++++++++++++++++++++++
 drivers/misc/vmw_vmci/vmci_route.h |   34 +++++
 2 files changed, 275 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/vmci_route.c
 create mode 100644 drivers/misc/vmw_vmci/vmci_route.h

diff --git a/drivers/misc/vmw_vmci/vmci_route.c b/drivers/misc/vmw_vmci/vmci_route.c
new file mode 100644
index 0000000..b9c301d
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_route.c
@@ -0,0 +1,241 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include <linux/vmw_vmci_api.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_common_int.h"
+#include "vmci_context.h"
+#include "vmci_driver.h"
+#include "vmci_route.h"
+
+/*
+ * Make a routing decision for the given source and destination handles.
+ * This will try to determine the route using the handles and the available
+ * devices.  Will set the source context if it is invalid.
+ */
+int vmci_route(struct vmci_handle *src,
+	       const struct vmci_handle *dst,
+	       bool fromGuest,
+	       enum vmci_route *route)
+{
+	bool hasHostDevice = vmci_host_code_active();
+	bool hasGuestDevice = vmci_guest_code_active();
+
+	ASSERT(src);
+	ASSERT(dst);
+	ASSERT(route);
+
+	*route = VMCI_ROUTE_NONE;
+
+	/*
+	 * "fromGuest" is only ever set to true by
+	 * IOCTL_VMCI_DATAGRAM_SEND (or by the vmkernel equivalent),
+	 * which comes from the VMX, so we know it is coming from a
+	 * guest.
+	 *
+	 * To avoid inconsistencies, test these once.  We will test
+	 * them again when we do the actual send to ensure that we do
+	 * not touch a non-existent device.
+	 */
+
+	/* Must have a valid destination context. */
+	if (VMCI_INVALID_ID == dst->context)
+		return VMCI_ERROR_INVALID_ARGS;
+
+	/* Anywhere to hypervisor. */
+	if (VMCI_HYPERVISOR_CONTEXT_ID == dst->context) {
+
+		/*
+		 * If this message already came from a guest then we
+		 * cannot send it to the hypervisor.  It must come
+		 * from a local client.
+		 */
+		if (fromGuest)
+			return VMCI_ERROR_DST_UNREACHABLE;
+
+		/*
+		 * We must be acting as a guest in order to send to
+		 * the hypervisor.
+		 */
+		if (!hasGuestDevice)
+			return VMCI_ERROR_DEVICE_NOT_FOUND;
+
+		/* And we cannot send if the source is the host context. */
+		if (VMCI_HOST_CONTEXT_ID == src->context)
+			return VMCI_ERROR_INVALID_ARGS;
+
+		/*
+		 * If the client passed the ANON source handle then
+		 * respect it (both context and resource are invalid).
+		 * However, if they passed only an invalid context,
+		 * then they probably mean ANY, in which case we
+		 * should set the real context here before passing it
+		 * down.
+		 */
+		if (VMCI_INVALID_ID == src->context &&
+		    VMCI_INVALID_ID != src->resource)
+			src->context = VMCI_GetContextID();
+
+		/* Send from local client down to the hypervisor. */
+		*route = VMCI_ROUTE_AS_GUEST;
+		return VMCI_SUCCESS;
+	}
+
+	/* Anywhere to local client on host. */
+	if (VMCI_HOST_CONTEXT_ID == dst->context) {
+		/*
+		 * If it is not from a guest but we are acting as a
+		 * guest, then we need to send it down to the host.
+		 * Note that if we are also acting as a host then this
+		 * will prevent us from sending from local client to
+		 * local client, but we accept that restriction as a
+		 * way to remove any ambiguity from the host context.
+		 */
+		if (src->context == VMCI_HYPERVISOR_CONTEXT_ID) {
+			/*
+			 * If the hypervisor is the source, this is
+			 * host local communication. The hypervisor
+			 * may send vmci event datagrams to the host
+			 * itself, but it will never send datagrams to
+			 * an "outer host" through the guest device.
+			 */
+
+			if (hasHostDevice) {
+				*route = VMCI_ROUTE_AS_HOST;
+				return VMCI_SUCCESS;
+			} else {
+				return VMCI_ERROR_DEVICE_NOT_FOUND;
+			}
+		}
+
+		if (!fromGuest && hasGuestDevice) {
+			/* If no source context then use the current. */
+			if (VMCI_INVALID_ID == src->context)
+				src->context = VMCI_GetContextID();
+
+			/* Send it from local client down to the host. */
+			*route = VMCI_ROUTE_AS_GUEST;
+			return VMCI_SUCCESS;
+		}
+
+		/*
+		 * Otherwise we already received it from a guest and
+		 * it is destined for a local client on this host, or
+		 * it is from another local client on this host.  We
+		 * must be acting as a host to service it.
+		 */
+		if (!hasHostDevice)
+			return VMCI_ERROR_DEVICE_NOT_FOUND;
+
+		if (VMCI_INVALID_ID == src->context) {
+			/*
+			 * If it came from a guest then it must have a
+			 * valid context.  Otherwise we can use the
+			 * host context.
+			 */
+			if (fromGuest)
+				return VMCI_ERROR_INVALID_ARGS;
+
+			src->context = VMCI_HOST_CONTEXT_ID;
+		}
+
+		/* Route to local client. */
+		*route = VMCI_ROUTE_AS_HOST;
+		return VMCI_SUCCESS;
+	}
+
+	/*
+	 * If we are acting as a host then this might be destined for
+	 * a guest.
+	 */
+	if (hasHostDevice) {
+		/* It will have a context if it is meant for a guest. */
+		if (vmci_ctx_exists(dst->context)) {
+			if (VMCI_INVALID_ID == src->context) {
+				/*
+				 * If it came from a guest then it
+				 * must have a valid context.
+				 * Otherwise we can use the host
+				 * context.
+				 */
+
+				if (fromGuest)
+					return VMCI_ERROR_INVALID_ARGS;
+
+				src->context = VMCI_HOST_CONTEXT_ID;
+			} else if (VMCI_CONTEXT_IS_VM(src->context) &&
+				   src->context != dst->context) {
+				/*
+				 * VM to VM communication is not
+				 * allowed. Since we catch all
+				 * communication destined for the host
+				 * above, this must be destined for a
+				 * VM since there is a valid context.
+				 */
+
+				ASSERT(VMCI_CONTEXT_IS_VM(dst->context));
+
+				return VMCI_ERROR_DST_UNREACHABLE;
+			}
+
+			/* Pass it up to the guest. */
+			*route = VMCI_ROUTE_AS_HOST;
+			return VMCI_SUCCESS;
+		} else if (!hasGuestDevice) {
+			/*
+			 * The host is attempting to reach a CID
+			 * without an active context, and we can't
+			 * send it down, since we have no guest
+			 * device.
+			 */
+
+			return VMCI_ERROR_DST_UNREACHABLE;
+		}
+	}
+
+	/*
+	 * We must be a guest trying to send to another guest, which means
+	 * we need to send it down to the host. We do not filter out VM to
+	 * VM communication here, since we want to be able to use the guest
+	 * driver on older versions that do support VM to VM communication.
+	 */
+	if (!hasGuestDevice) {
+		/*
+		 * Ending up here means we have neither guest nor host
+		 * device. That shouldn't happen, since any VMCI
+		 * client in the kernel should have done a successful
+		 * VMCI_DeviceGet.
+		 */
+
+		ASSERT(false);
+		return VMCI_ERROR_DEVICE_NOT_FOUND;
+	}
+
+	/* If no source context then use the current context. */
+	if (VMCI_INVALID_ID == src->context)
+		src->context = VMCI_GetContextID();
+
+	/*
+	 * Send it from local client down to the host, which will
+	 * route it to the other guest for us.
+	 */
+	*route = VMCI_ROUTE_AS_GUEST;
+	return VMCI_SUCCESS;
+}
diff --git a/drivers/misc/vmw_vmci/vmci_route.h b/drivers/misc/vmw_vmci/vmci_route.h
new file mode 100644
index 0000000..5a0f312
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_route.h
@@ -0,0 +1,34 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_ROUTE_H_
+#define _VMCI_ROUTE_H_
+
+#include <linux/vmw_vmci_defs.h>
+
+enum vmci_route {
+	VMCI_ROUTE_NONE,
+	VMCI_ROUTE_AS_HOST,
+	VMCI_ROUTE_AS_GUEST,
+};
+
+int vmci_route(struct vmci_handle *src, const struct vmci_handle *dst,
+	       bool fromGuest, enum vmci_route *route);
+
+#endif /* _VMCI_ROUTE_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread
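
The routing decision above feeds the datagram send path. As a minimal sketch (not part of the patch; vmci_dg_dispatch_as_host() and vmci_dg_dispatch_as_guest() are hypothetical helpers named only to show the flow), a caller might use vmci_route() like this:

static int example_send_datagram(struct vmci_dg *dg, bool from_guest)
{
	enum vmci_route route;
	int result;

	result = vmci_route(&dg->src, &dg->dst, from_guest, &route);
	if (result < VMCI_SUCCESS)
		return result;

	switch (route) {
	case VMCI_ROUTE_AS_HOST:
		/* Deliver to a context on this host (hypothetical helper). */
		return vmci_dg_dispatch_as_host(dg);
	case VMCI_ROUTE_AS_GUEST:
		/* Hand down to the VMCI PCI device (hypothetical helper). */
		return vmci_dg_dispatch_as_guest(dg);
	default:
		return VMCI_ERROR_DST_UNREACHABLE;
	}
}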

* [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, cschamp, gregkh, Andrew Stiegmann (stieg)

Adds all the necessary files to enable building of the VMCI module
with the Linux Makefiles and Kconfig systems. Also adds the header
files used for building modules against the driver.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/Kconfig                    |    1 +
 drivers/misc/Makefile                   |    1 +
 drivers/misc/vmw_vmci/Kconfig           |   16 +
 drivers/misc/vmw_vmci/Makefile          |   43 ++
 drivers/misc/vmw_vmci/vmci_common_int.h |   58 ++
 include/linux/vmw_vmci_api.h            |   89 +++
 include/linux/vmw_vmci_defs.h           |  921 +++++++++++++++++++++++++++++++
 7 files changed, 1129 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/Kconfig
 create mode 100644 drivers/misc/vmw_vmci/Makefile
 create mode 100644 drivers/misc/vmw_vmci/vmci_common_int.h
 create mode 100644 include/linux/vmw_vmci_api.h
 create mode 100644 include/linux/vmw_vmci_defs.h

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 2661f6e..fe38c7a 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -517,4 +517,5 @@ source "drivers/misc/lis3lv02d/Kconfig"
 source "drivers/misc/carma/Kconfig"
 source "drivers/misc/altera-stapl/Kconfig"
 source "drivers/misc/mei/Kconfig"
+source "drivers/misc/vmw_vmci/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 456972f..af9e413 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -51,3 +51,4 @@ obj-y				+= carma/
 obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
 obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
 obj-$(CONFIG_INTEL_MEI)		+= mei/
+obj-y				+= vmw_vmci/
diff --git a/drivers/misc/vmw_vmci/Kconfig b/drivers/misc/vmw_vmci/Kconfig
new file mode 100644
index 0000000..55015e7
--- /dev/null
+++ b/drivers/misc/vmw_vmci/Kconfig
@@ -0,0 +1,16 @@
+#
+# VMware VMCI device
+#
+
+config VMWARE_VMCI
+	tristate "VMware VMCI Driver"
+	depends on X86
+	help
+	  This is VMware's Virtual Machine Communication Interface.  It enables
+	  high-speed communication between host and guest in a virtual
+	  environment via the VMCI virtual device.
+
+	  If unsure, say N.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called vmw_vmci.
diff --git a/drivers/misc/vmw_vmci/Makefile b/drivers/misc/vmw_vmci/Makefile
new file mode 100644
index 0000000..19755fb
--- /dev/null
+++ b/drivers/misc/vmw_vmci/Makefile
@@ -0,0 +1,43 @@
+################################################################################
+#
+# Linux driver for VMware's VMCI device.
+#
+# Copyright (C) 2007-2012, VMware, Inc. All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; version 2 of the License and no later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+# NON INFRINGEMENT.  See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# The full GNU General Public License is included in this distribution in
+# the file called "COPYING".
+#
+# Maintained by: Andrew Stiegmann <pv-drivers@vmware.com>
+#
+################################################################################
+
+#
+# Makefile for the VMware VMCI
+#
+
+obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci.o
+
+vmw_vmci-objs += vmci_context.o
+vmw_vmci-objs += vmci_datagram.o
+vmw_vmci-objs += vmci_doorbell.o
+vmw_vmci-objs += vmci_driver.o
+vmw_vmci-objs += vmci_event.o
+vmw_vmci-objs += vmci_handle_array.o
+vmw_vmci-objs += vmci_hash_table.o
+vmw_vmci-objs += vmci_queue_pair.o
+vmw_vmci-objs += vmci_resource.o
+vmw_vmci-objs += vmci_route.o
diff --git a/drivers/misc/vmw_vmci/vmci_common_int.h b/drivers/misc/vmw_vmci/vmci_common_int.h
new file mode 100644
index 0000000..6e82610
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_common_int.h
@@ -0,0 +1,58 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_COMMONINT_H_
+#define _VMCI_COMMONINT_H_
+
+#include <linux/printk.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_handle_array.h"
+
+#define ASSERT(cond) BUG_ON(!(cond))
+
+#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
+#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
+
+#define PCI_VENDOR_ID_VMWARE	0x15AD
+#define PCI_DEVICE_ID_VMWARE_VMCI	0x0740
+#define VMCI_DRIVER_VERSION_STRING	"9.5.5.0-k"
+#define MODULE_NAME "vmw_vmci"
+
+/* Print magic... whee! */
+#ifdef pr_fmt
+#undef pr_fmt
+#define pr_fmt(fmt) MODULE_NAME ": " fmt
+#endif
+
+/*
+ * Utility function that checks whether two entities are allowed
+ * to interact. If one of them is restricted, the other one must
+ * be trusted.
+ */
+static inline bool vmci_deny_interaction(uint32_t partOne,
+					 uint32_t partTwo)
+{
+	return ((partOne & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+		!(partTwo & VMCI_PRIVILEGE_FLAG_TRUSTED)) ||
+	       ((partTwo & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+		!(partOne & VMCI_PRIVILEGE_FLAG_TRUSTED));
+}
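+
+/*
+ * For illustration (not part of the original patch): a restricted
+ * endpoint may only interact with a trusted peer, so
+ *
+ *   vmci_deny_interaction(VMCI_PRIVILEGE_FLAG_RESTRICTED,
+ *                         VMCI_NO_PRIVILEGE_FLAGS)          -> true
+ *   vmci_deny_interaction(VMCI_PRIVILEGE_FLAG_RESTRICTED,
+ *                         VMCI_PRIVILEGE_FLAG_TRUSTED)      -> false
+ *   vmci_deny_interaction(VMCI_NO_PRIVILEGE_FLAGS,
+ *                         VMCI_NO_PRIVILEGE_FLAGS)          -> false
+ */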
+
+#endif				/* _VMCI_COMMONINT_H_ */
diff --git a/include/linux/vmw_vmci_api.h b/include/linux/vmw_vmci_api.h
new file mode 100644
index 0000000..71a4668
--- /dev/null
+++ b/include/linux/vmw_vmci_api.h
@@ -0,0 +1,89 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef __VMW_VMCI_API_H__
+#define __VMW_VMCI_API_H__
+
+#include <linux/vmw_vmci_defs.h>
+
+#undef  VMCI_KERNEL_API_VERSION
+#define VMCI_KERNEL_API_VERSION_2 2
+#define VMCI_KERNEL_API_VERSION   VMCI_KERNEL_API_VERSION_2
+
+typedef void (VMCI_DeviceShutdownFn) (void *deviceRegistration, void *userData);
+
+bool VMCI_DeviceGet(uint32_t *apiVersion,
+		    VMCI_DeviceShutdownFn *deviceShutdownCB,
+		    void *userData, void **deviceRegistration);
+void VMCI_DeviceRelease(void *deviceRegistration);
+int VMCIDatagram_CreateHnd(uint32_t resourceID, uint32_t flags,
+			   VMCIDatagramRecvCB recvCB, void *clientData,
+			   struct vmci_handle *outHandle);
+int VMCIDatagram_CreateHndPriv(uint32_t resourceID, uint32_t flags,
+			       uint32_t privFlags,
+			       VMCIDatagramRecvCB recvCB, void *clientData,
+			       struct vmci_handle *outHandle);
+int VMCIDatagram_DestroyHnd(struct vmci_handle handle);
+int VMCIDatagram_Send(struct vmci_dg *msg);
+int VMCIDoorbell_Create(struct vmci_handle *handle, uint32_t flags,
+			uint32_t privFlags,
+			VMCICallback notifyCB, void *clientData);
+int VMCIDoorbell_Destroy(struct vmci_handle handle);
+int VMCIDoorbell_Notify(struct vmci_handle handle, uint32_t privFlags);
+uint32_t VMCI_GetContextID(void);
+uint32_t VMCI_Version(void);
+int VMCI_ContextID2HostVmID(uint32_t contextID, void *hostVmID,
+			    size_t hostVmIDLen);
+int VMCI_IsContextOwner(uint32_t contextID, void *hostUser);
+
+int VMCIEvent_Subscribe(uint32_t event, uint32_t flags,
+			VMCI_EventCB callback, void *callbackData,
+			uint32_t *subID);
+int VMCIEvent_Unsubscribe(uint32_t subID);
+uint32_t VMCIContext_GetPrivFlags(uint32_t contextID);
+int VMCIQPair_Alloc(struct vmci_qp **qpair,
+		    struct vmci_handle *handle,
+		    uint64_t produceQSize,
+		    uint64_t consumeQSize,
+		    uint32_t peer, uint32_t flags, uint32_t privFlags);
+int VMCIQPair_Detach(struct vmci_qp **qpair);
+int VMCIQPair_GetProduceIndexes(const struct vmci_qp *qpair,
+				uint64_t *producerTail,
+				uint64_t *consumerHead);
+int VMCIQPair_GetConsumeIndexes(const struct vmci_qp *qpair,
+				uint64_t *consumerTail,
+				uint64_t *producerHead);
+int64_t VMCIQPair_ProduceFreeSpace(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ProduceBufReady(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ConsumeFreeSpace(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ConsumeBufReady(const struct vmci_qp *qpair);
+ssize_t VMCIQPair_Enqueue(struct vmci_qp *qpair,
+			  const void *buf, size_t bufSize, int mode);
+ssize_t VMCIQPair_Dequeue(struct vmci_qp *qpair,
+			  void *buf, size_t bufSize, int mode);
+ssize_t VMCIQPair_Peek(struct vmci_qp *qpair, void *buf, size_t bufSize,
+		       int mode);
+ssize_t VMCIQPair_EnqueueV(struct vmci_qp *qpair,
+			   void *iov, size_t iovSize, int mode);
+ssize_t VMCIQPair_DequeueV(struct vmci_qp *qpair,
+			   void *iov, size_t iovSize, int mode);
+ssize_t VMCIQPair_PeekV(struct vmci_qp *qpair, void *iov, size_t iovSize,
+			int mode);
+
+#endif /* !__VMW_VMCI_API_H__ */
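
As a rough sketch of how an in-kernel client might use this API (not part of the patch; it relies only on the prototypes above, and the use of VMCI_INVALID_ID to request a dynamically assigned datagram resource ID is an assumption):

#include <linux/printk.h>
#include <linux/vmw_vmci_api.h>
#include <linux/vmw_vmci_defs.h>

static void *example_dev_reg;
static struct vmci_handle example_handle;

static int example_recv_cb(void *client_data, struct vmci_dg *msg)
{
	pr_info("datagram from context %u, %llu payload bytes\n",
		msg->src.context, (unsigned long long)msg->payloadSize);
	return 0;
}

static void example_shutdown_cb(void *device_registration, void *user_data)
{
	/* The VMCI device is going away; stop issuing VMCI calls. */
}

static int example_register(void)
{
	uint32_t api_version = VMCI_KERNEL_API_VERSION;
	int result;

	if (!VMCI_DeviceGet(&api_version, example_shutdown_cb, NULL,
			    &example_dev_reg))
		return VMCI_ERROR_DEVICE_NOT_FOUND;

	/* Assumed: an invalid resource ID asks for a dynamic one. */
	result = VMCIDatagram_CreateHnd(VMCI_INVALID_ID, VMCI_FLAG_DG_NONE,
					example_recv_cb, NULL, &example_handle);
	if (result < VMCI_SUCCESS) {
		VMCI_DeviceRelease(example_dev_reg);
		return result;
	}

	pr_info("listening as context %u, resource %u\n",
		VMCI_GetContextID(), example_handle.resource);
	return VMCI_SUCCESS;
}

static void example_unregister(void)
{
	VMCIDatagram_DestroyHnd(example_handle);
	VMCI_DeviceRelease(example_dev_reg);
}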
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
new file mode 100644
index 0000000..d71d5e0
--- /dev/null
+++ b/include/linux/vmw_vmci_defs.h
@@ -0,0 +1,921 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMW_VMCI_DEF_H_
+#define _VMW_VMCI_DEF_H_
+
+#include <linux/atomic.h>
+
+/* Register offsets. */
+#define VMCI_STATUS_ADDR      0x00
+#define VMCI_CONTROL_ADDR     0x04
+#define VMCI_ICR_ADDR	      0x08
+#define VMCI_IMR_ADDR         0x0c
+#define VMCI_DATA_OUT_ADDR    0x10
+#define VMCI_DATA_IN_ADDR     0x14
+#define VMCI_CAPS_ADDR        0x18
+#define VMCI_RESULT_LOW_ADDR  0x1c
+#define VMCI_RESULT_HIGH_ADDR 0x20
+
+/* Max number of devices. */
+#define VMCI_MAX_DEVICES 1
+
+/* Status register bits. */
+#define VMCI_STATUS_INT_ON     0x1
+
+/* Control register bits. */
+#define VMCI_CONTROL_RESET        0x1
+#define VMCI_CONTROL_INT_ENABLE   0x2
+#define VMCI_CONTROL_INT_DISABLE  0x4
+
+/* Capabilities register bits. */
+#define VMCI_CAPS_HYPERCALL     0x1
+#define VMCI_CAPS_GUESTCALL     0x2
+#define VMCI_CAPS_DATAGRAM      0x4
+#define VMCI_CAPS_NOTIFICATIONS 0x8
+
+/* Interrupt Cause register bits. */
+#define VMCI_ICR_DATAGRAM      0x1
+#define VMCI_ICR_NOTIFICATION  0x2
+
+/* Interrupt Mask register bits. */
+#define VMCI_IMR_DATAGRAM      0x1
+#define VMCI_IMR_NOTIFICATION  0x2
+
+/* Interrupt type. */
+enum {
+	VMCI_INTR_TYPE_INTX = 0,
+	VMCI_INTR_TYPE_MSI = 1,
+	VMCI_INTR_TYPE_MSIX = 2,
+};
+
+/* Maximum MSI/MSI-X interrupt vectors in the device. */
+#define VMCI_MAX_INTRS 2
+
+/*
+ * Supported interrupt vectors.  There is one for each ICR value above,
+ * but here they indicate the position in the vector array/message ID.
+ */
+enum {
+	VMCI_INTR_DATAGRAM = 0,
+	VMCI_INTR_NOTIFICATION = 1,
+};
+
+/*
+ * A single VMCI device has an upper limit of 128MB on the amount of
+ * memory that can be used for queue pairs.
+ */
+#define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
+
+/*
+ * Queues with pre-mapped data pages must be small, so that we don't pin
+ * too much kernel memory (especially on vmkernel).  We limit a queuepair to
+ * 32 KB, or 16 KB per queue for symmetrical pairs.
+ */
+#define VMCI_MAX_PINNED_QP_MEMORY (32 * 1024)
+
+/*
+ * We have a fixed set of resource IDs available in the VMX.
+ * This allows us to have a very simple implementation since we statically
+ * know how many will create datagram handles. If a new caller arrives and
+ * we have run out of slots we can manually increment the maximum size of
+ * available resource IDs.
+ *
+ * VMCI reserved hypervisor datagram resource IDs.
+ */
+enum {
+	VMCI_RESOURCES_QUERY = 0,
+	VMCI_GET_CONTEXT_ID = 1,
+	VMCI_SET_NOTIFY_BITMAP = 2,
+	VMCI_DOORBELL_LINK = 3,
+	VMCI_DOORBELL_UNLINK = 4,
+	VMCI_DOORBELL_NOTIFY = 5,
+/*
+ * VMCI_DATAGRAM_REQUEST_MAP and VMCI_DATAGRAM_REMOVE_MAP are
+ * obsoleted by the removal of VM to VM communication.
+ */
+	VMCI_DATAGRAM_REQUEST_MAP = 6,
+	VMCI_DATAGRAM_REMOVE_MAP = 7,
+	VMCI_EVENT_SUBSCRIBE = 8,
+	VMCI_EVENT_UNSUBSCRIBE = 9,
+	VMCI_QUEUEPAIR_ALLOC = 10,
+	VMCI_QUEUEPAIR_DETACH = 11,
+
+/*
+ * VMCI_VSOCK_VMX_LOOKUP was assigned to 12 for Fusion 3.0/3.1,
+ * WS 7.0/7.1 and ESX 4.1
+ */
+	VMCI_HGFS_TRANSPORT = 13,
+	VMCI_UNITY_PBRPC_REGISTER = 14,
+	VMCI_RESOURCE_MAX = 15,
+};
+
+/**
+ * struct vmci_handle - Ownership information structure
+ * @context:	The VMX context ID.
+ * @resource:	The resource ID (used for locating in resource hash).
+ *
+ * The vmci_handle structure is used to track resources used within
+ * vmw_vmci.
+ */
+struct vmci_handle {
+	uint32_t context;
+	uint32_t resource;
+};
+
+#define VMCI_HANDLE_EQUAL(_h1, _h2) ((_h1).context == (_h2).context &&	\
+				     (_h1).resource == (_h2).resource)
+
+#define VMCI_INVALID_ID ~0
+static const struct vmci_handle VMCI_INVALID_HANDLE = { VMCI_INVALID_ID,
+							VMCI_INVALID_ID
+};
+
+#define VMCI_HANDLE_INVALID(_handle)				\
+	VMCI_HANDLE_EQUAL((_handle), VMCI_INVALID_HANDLE)
+
+/*
+ * The below defines can be used to send anonymous requests.
+ * This also indicates that no response is expected.
+ */
+#define VMCI_ANON_SRC_CONTEXT_ID   VMCI_INVALID_ID
+#define VMCI_ANON_SRC_RESOURCE_ID  VMCI_INVALID_ID
+#define VMCI_ANON_SRC_HANDLE       vmci_make_handle(VMCI_ANON_SRC_CONTEXT_ID, \
+						    VMCI_ANON_SRC_RESOURCE_ID)
+
+/* The lowest 16 context ids are reserved for internal use. */
+#define VMCI_RESERVED_CID_LIMIT ((uint32_t) 16)
+
+/*
+ * Hypervisor context id, used for calling into hypervisor
+ * supplied services from the VM.
+ */
+#define VMCI_HYPERVISOR_CONTEXT_ID 0
+
+/*
+ * Well-known context id, a logical context that contains a set of
+ * well-known services. This context ID is now obsolete.
+ */
+#define VMCI_WELL_KNOWN_CONTEXT_ID 1
+
+/*
+ * Context ID used by host endpoints.
+ */
+#define VMCI_HOST_CONTEXT_ID  2
+
+#define VMCI_CONTEXT_IS_VM(_cid) (VMCI_INVALID_ID != (_cid) &&		\
+				  (_cid) > VMCI_HOST_CONTEXT_ID)
+
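+/*
+ * For illustration (not part of the original patch): with the IDs
+ * above, VMCI_CONTEXT_IS_VM(0), VMCI_CONTEXT_IS_VM(1) and
+ * VMCI_CONTEXT_IS_VM(2) are all false (hypervisor, well-known and
+ * host contexts), while any other valid context ID, e.g.
+ * VMCI_CONTEXT_IS_VM(3), is true.
+ */
+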
+/*
+ * The VMCI_CONTEXT_RESOURCE_ID is used together with vmci_make_handle to make
+ * handles that refer to a specific context.
+ */
+#define VMCI_CONTEXT_RESOURCE_ID 0
+
+/*
+ * VMCI error codes.
+ */
+enum {
+	VMCI_SUCCESS_QUEUEPAIR_ATTACH	=  5,
+	VMCI_SUCCESS_QUEUEPAIR_CREATE	=  4,
+	VMCI_SUCCESS_LAST_DETACH	=  3,
+	VMCI_SUCCESS_ACCESS_GRANTED	=  2,
+	VMCI_SUCCESS_ENTRY_DEAD	=  1,
+	VMCI_SUCCESS			=  0,
+	VMCI_ERROR_INVALID_RESOURCE	= (-1),
+	VMCI_ERROR_INVALID_ARGS	= (-2),
+	VMCI_ERROR_NO_MEM		= (-3),
+	VMCI_ERROR_DATAGRAM_FAILED	= (-4),
+	VMCI_ERROR_MORE_DATA		= (-5),
+	VMCI_ERROR_NO_MORE_DATAGRAMS	= (-6),
+	VMCI_ERROR_NO_ACCESS		= (-7),
+	VMCI_ERROR_NO_HANDLE		= (-8),
+	VMCI_ERROR_DUPLICATE_ENTRY	= (-9),
+	VMCI_ERROR_DST_UNREACHABLE	= (-10),
+	VMCI_ERROR_PAYLOAD_TOO_LARGE	= (-11),
+	VMCI_ERROR_INVALID_PRIV	= (-12),
+	VMCI_ERROR_GENERIC		= (-13),
+	VMCI_ERROR_PAGE_ALREADY_SHARED	= (-14),
+	VMCI_ERROR_CANNOT_SHARE_PAGE	= (-15),
+	VMCI_ERROR_CANNOT_UNSHARE_PAGE	= (-16),
+	VMCI_ERROR_NO_PROCESS		= (-17),
+	VMCI_ERROR_NO_DATAGRAM	= (-18),
+	VMCI_ERROR_NO_RESOURCES	= (-19),
+	VMCI_ERROR_UNAVAILABLE	= (-20),
+	VMCI_ERROR_NOT_FOUND		= (-21),
+	VMCI_ERROR_ALREADY_EXISTS	= (-22),
+	VMCI_ERROR_NOT_PAGE_ALIGNED	= (-23),
+	VMCI_ERROR_INVALID_SIZE	= (-24),
+	VMCI_ERROR_REGION_ALREADY_SHARED = (-25),
+	VMCI_ERROR_TIMEOUT	= (-26),
+	VMCI_ERROR_DATAGRAM_INCOMPLETE	= (-27),
+	VMCI_ERROR_INCORRECT_IRQL	= (-28),
+	VMCI_ERROR_EVENT_UNKNOWN	= (-29),
+	VMCI_ERROR_OBSOLETE	= (-30),
+	VMCI_ERROR_QUEUEPAIR_MISMATCH	= (-31),
+	VMCI_ERROR_QUEUEPAIR_NOTSET	= (-32),
+	VMCI_ERROR_QUEUEPAIR_NOTOWNER	= (-33),
+	VMCI_ERROR_QUEUEPAIR_NOTATTACHED	= (-34),
+	VMCI_ERROR_QUEUEPAIR_NOSPACE	= (-35),
+	VMCI_ERROR_QUEUEPAIR_NODATA	= (-36),
+	VMCI_ERROR_BUSMEM_INVALIDATION	= (-37),
+	VMCI_ERROR_MODULE_NOT_LOADED	= (-38),
+	VMCI_ERROR_DEVICE_NOT_FOUND	= (-39),
+	VMCI_ERROR_QUEUEPAIR_NOT_READY	= (-40),
+	VMCI_ERROR_WOULD_BLOCK	= (-41),
+
+	/* VMCI clients should return error code within this range */
+	VMCI_ERROR_CLIENT_MIN		= (-500),
+	VMCI_ERROR_CLIENT_MAX	= (-550),
+
+	/* Internal error codes. */
+	VMCI_SHAREDMEM_ERROR_BAD_CONTEXT	= (-1000),
+};
+
+/* VMCI reserved events. */
+enum {
+	/* Only applicable to guest endpoints */
+	VMCI_EVENT_CTX_ID_UPDATE  = 0,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_CTX_REMOVED    = 1,
+
+	/* Only applicable to guest endpoints */
+	VMCI_EVENT_QP_RESUMED	  = 2,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_QP_PEER_ATTACH = 3,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_QP_PEER_DETACH = 4,
+
+	/*
+	 * Applicable to VMX and vmk.  On vmk,
+	 * this event has the Context payload type.
+	 */
+	VMCI_EVENT_MEM_ACCESS_ON  = 5,
+
+	/*
+	 * Applicable to VMX and vmk.  Same as
+	 * above for the payload type.
+	 */
+	VMCI_EVENT_MEM_ACCESS_OFF = 6,
+	VMCI_EVENT_MAX = 7,
+};
+
+/*
+ * Of the above events, a few are reserved for use in the VMX, and
+ * other endpoints (guest and host kernel) should not use them. For
+ * the rest of the events, we allow both host and guest endpoints to
+ * subscribe to them, to maintain the same API for host and guest
+ * endpoints.
+ */
+#define VMCI_EVENT_VALID_VMX(_event) ((_event) == VMCI_EVENT_MEM_ACCESS_ON || \
+				      (_event) == VMCI_EVENT_MEM_ACCESS_OFF)
+
+#define VMCI_EVENT_VALID(_event) ((_event) < VMCI_EVENT_MAX &&		\
+				  !VMCI_EVENT_VALID_VMX(_event))
+
+/* Reserved guest datagram resource ids. */
+#define VMCI_EVENT_HANDLER 0
+
+/*
+ * VMCI coarse-grained privileges (per context or host
+ * process/endpoint). An entity with the restricted flag is only
+ * allowed to interact with the hypervisor and trusted entities.
+ */
+enum {
+	VMCI_NO_PRIVILEGE_FLAGS = 0,
+	VMCI_PRIVILEGE_FLAG_RESTRICTED = 1,
+	VMCI_PRIVILEGE_FLAG_TRUSTED = 2,
+	VMCI_PRIVILEGE_ALL_FLAGS = (VMCI_PRIVILEGE_FLAG_RESTRICTED |
+				    VMCI_PRIVILEGE_FLAG_TRUSTED),
+	VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS = VMCI_NO_PRIVILEGE_FLAGS,
+	VMCI_LEAST_PRIVILEGE_FLAGS = VMCI_PRIVILEGE_FLAG_RESTRICTED,
+	VMCI_MAX_PRIVILEGE_FLAGS = VMCI_PRIVILEGE_FLAG_TRUSTED,
+};
+
+/* 0 through VMCI_RESERVED_RESOURCE_ID_MAX are reserved. */
+#define VMCI_RESERVED_RESOURCE_ID_MAX 1023
+
+/*
+ * Driver version.
+ *
+ * Increment major version when you make an incompatible change.
+ * Compatibility goes both ways (old driver with new executable
+ * as well as new driver with old executable).
+ */
+
+/* Never change VMCI_VERSION_SHIFT_WIDTH */
+#define VMCI_VERSION_SHIFT_WIDTH 16
+#define VMCI_MAKE_VERSION(_major, _minor)				\
+	((_major) << VMCI_VERSION_SHIFT_WIDTH | (uint16_t) (_minor))
+
+#define VMCI_VERSION_MAJOR(v)  ((uint32_t) (v) >> VMCI_VERSION_SHIFT_WIDTH)
+#define VMCI_VERSION_MINOR(v)  ((uint16_t) (v))
+
+/*
+ * VMCI_VERSION is always the current version.  Subsequently listed
+ * versions are ways of detecting previous versions of the connecting
+ * application (i.e., VMX).
+ *
+ * VMCI_VERSION_NOVMVM: This version removed support for VM to VM
+ * communication.
+ *
+ * VMCI_VERSION_NOTIFY: This version introduced doorbell notification
+ * support.
+ *
+ * VMCI_VERSION_HOSTQP: This version introduced host end point support
+ * for hosted products.
+ *
+ * VMCI_VERSION_PREHOSTQP: This is the version prior to the adoption of
+ * support for host end-points.
+ *
+ * VMCI_VERSION_PREVERS2: This fictional version number is intended to
+ * represent the version of a VMX which doesn't call into the driver
+ * with ioctl VERSION2 and thus doesn't establish its version with the
+ * driver.
+ */
+
+#define VMCI_VERSION                VMCI_VERSION_NOVMVM
+#define VMCI_VERSION_NOVMVM         VMCI_MAKE_VERSION(11, 0)
+#define VMCI_VERSION_NOTIFY         VMCI_MAKE_VERSION(10, 0)
+#define VMCI_VERSION_HOSTQP         VMCI_MAKE_VERSION(9, 0)
+#define VMCI_VERSION_PREHOSTQP      VMCI_MAKE_VERSION(8, 0)
+#define VMCI_VERSION_PREVERS2       VMCI_MAKE_VERSION(1, 0)
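+
+/*
+ * For illustration (not part of the original patch):
+ * VMCI_MAKE_VERSION(11, 0) == (11 << 16) | 0 == 0x000b0000, so
+ * VMCI_VERSION_MAJOR(VMCI_VERSION_NOVMVM) == 11 and
+ * VMCI_VERSION_MINOR(VMCI_VERSION_NOVMVM) == 0.
+ */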
+
+/*
+ * Linux defines _IO* macros, but the core kernel code ignores the encoded
+ * ioctl value. It is up to individual drivers to decode the value (for
+ * example to look at the size of a structure to determine which version
+ * of a specific command should be used) or not (which is what we
+ * currently do, so right now the ioctl value for a given command is the
+ * command itself).
+ *
+ * Hence, we just define the IOCTL_VMCI_foo values directly, with no
+ * intermediate IOCTLCMD_ representation.
+ */
+#  define IOCTLCMD(_cmd) IOCTL_VMCI_ ## _cmd
+
+enum {
+	/*
+	 * We need to bracket the range of values used for ioctls,
+	 * because x86_64 Linux forces us to explicitly register ioctl
+	 * handlers by value for handling 32 bit ioctl syscalls.
+	 * Hence FIRST and LAST.  Pick something for FIRST that
+	 * doesn't collide with vmmon (2001+).
+	 */
+	IOCTLCMD(FIRST) = 1951,
+	IOCTLCMD(VERSION) = IOCTLCMD(FIRST),
+
+	/* BEGIN VMCI */
+	IOCTLCMD(INIT_CONTEXT),
+
+	/*
+	 * The following two were used for process and datagram
+	 * process creation.  They are not used anymore and reserved
+	 * for future use.  They will fail if issued.
+	 */
+	IOCTLCMD(RESERVED1),
+	IOCTLCMD(RESERVED2),
+
+	/*
+	 * The following used to be for shared memory. It is now
+	 * unused and is reserved for future use. It will fail if
+	 * issued.
+	 */
+	IOCTLCMD(RESERVED3),
+
+	/*
+	 * The following three also used to be for shared memory.  An
+	 * old WS6 user-mode client might try to use them with the new
+	 * driver, but since we ensure that only contexts created by
+	 * VMX'en of the appropriate version (VMCI_VERSION_NOTIFY or
+	 * VMCI_VERSION_NEWQP) or higher use these ioctls, everything
+	 * is fine.
+	 */
+	IOCTLCMD(QUEUEPAIR_SETVA),
+	IOCTLCMD(NOTIFY_RESOURCE),
+	IOCTLCMD(NOTIFICATIONS_RECEIVE),
+	IOCTLCMD(VERSION2),
+	IOCTLCMD(QUEUEPAIR_ALLOC),
+	IOCTLCMD(QUEUEPAIR_SETPAGEFILE),
+	IOCTLCMD(QUEUEPAIR_DETACH),
+	IOCTLCMD(DATAGRAM_SEND),
+	IOCTLCMD(DATAGRAM_RECEIVE),
+	IOCTLCMD(DATAGRAM_REQUEST_MAP),
+	IOCTLCMD(DATAGRAM_REMOVE_MAP),
+	IOCTLCMD(CTX_ADD_NOTIFICATION),
+	IOCTLCMD(CTX_REMOVE_NOTIFICATION),
+	IOCTLCMD(CTX_GET_CPT_STATE),
+	IOCTLCMD(CTX_SET_CPT_STATE),
+	IOCTLCMD(GET_CONTEXT_ID),
+	IOCTLCMD(LAST),
+	/* END VMCI */
+
+	/*
+	 * VMCI Socket IOCTLS are defined next and go from
+	 * IOCTLCMD(LAST) (1972) to 1990.  VMware reserves a range of
+	 * 4 ioctls for VMCI Sockets to grow.  We cannot reserve many
+	 * ioctls here since we are close to overlapping with vmmon
+	 * ioctls (2001+).  Define a meta-ioctl if running out of this
+	 * binary space.
+	 */
+	IOCTLCMD(SOCKETS_LAST) = 1994,	/* 1994 on Linux. */
+
+	/*
+	 * The VSockets ioctls occupy the block above.  We define a
+	 * new range of VMCI ioctls to maintain binary compatibility
+	 * between the user land and the kernel driver.  Careful,
+	 * vmmon ioctls start from 2001, so this means we can add only
+	 * 4 new VMCI ioctls.  Define a meta-ioctl if running out of
+	 * this binary space.
+	 */
+	IOCTLCMD(FIRST2),
+	IOCTLCMD(SET_NOTIFY) = IOCTLCMD(FIRST2),	/* 1995 on Linux. */
+	IOCTLCMD(LAST2),
+};
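+
+/*
+ * For illustration (not part of the original patch), the enum above
+ * numbers the commands sequentially from IOCTL_VMCI_FIRST (1951):
+ * e.g. IOCTL_VMCI_DATAGRAM_SEND == 1963 and IOCTL_VMCI_LAST == 1972,
+ * matching the 1972 noted in the VMCI Sockets comment above.
+ */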
+
+/* Clean up helper macros */
+#undef IOCTLCMD
+
+/*
+ * struct vmci_queue_header - VMCI Queue Header information.
+ *
+ * A Queue cannot stand by itself as designed.  Each Queue's header
+ * contains a pointer into itself (the producerTail) and into its peer
+ * (consumerHead).  The reason for the separation is one of
+ * accessibility: Each end-point can modify two things: where the next
+ * location to enqueue is within its produceQ (producerTail); and
+ * where the next dequeue location is in its consumeQ (consumerHead).
+ *
+ * An end-point cannot modify the pointers of its peer (guest to
+ * guest; NOTE that in the host both queue headers are mapped r/w).
+ * But, each end-point needs read access to both Queue header
+ * structures in order to determine how much space is used (or left)
+ * in the Queue.  This is because for an end-point to know how full
+ * its produceQ is, it needs to use the consumerHead that points into
+ * the produceQ but -that- consumerHead is in the Queue header for
+ * that end-point's consumeQ.
+ *
+ * Thoroughly confused?  Sorry.
+ *
+ * producerTail: the point to enqueue new entrants.  When you approach
+ * a line in a store, for example, you walk up to the tail.
+ *
+ * consumerHead: the point in the queue from which the next element is
+ * dequeued.  In other words, who is next in line is he who is at the
+ * head of the line.
+ *
+ * Also, producerTail points to an empty byte in the Queue, whereas
+ * consumerHead points to a valid byte of data (unless producerTail ==
+ * consumerHead in which case consumerHead does not point to a valid
+ * byte of data).
+ *
+ * For a queue of buffer 'size' bytes, the tail and head pointers will be in
+ * the range [0, size-1].
+ *
+ * If produceQHeader->producerTail == consumeQHeader->consumerHead
+ * then the produceQ is empty.
+ */
+struct vmci_queue_header {
+	/* All fields are 64bit and aligned. */
+	struct vmci_handle handle;	/* Identifier. */
+	atomic64_t producerTail;	/* Offset in this queue. */
+	atomic64_t consumerHead;	/* Offset in peer queue. */
+};
+
+/**
+ * struct vmci_dg - Base struct for vmci datagrams.
+ * @dst:	A vmci_handle that tracks the destination of the datagram.
+ * @src:	A vmci_handle that tracks the source of the datagram.
+ * @payloadSize:	The size of the payload.
+ *
+ * vmci_dg structs are used when sending vmci datagrams.  They include
+ * the necessary source and destination information to properly route
+ * the information along with the size of the package.
+ */
+struct vmci_dg {
+	struct vmci_handle dst;
+	struct vmci_handle src;
+	uint64_t payloadSize;
+};
+
+/*
+ * VMCI_FLAG_WELLKNOWN_DG_HND is for creating a well-known handle instead of
+ * a per-context handle.  VMCI_FLAG_DG_DELAYED_CB is for deferring datagram
+ * delivery, so that the datagram callback is invoked in a delayed context
+ * (not interrupt context).
+ */
+#define VMCI_FLAG_DG_NONE          0
+#define VMCI_FLAG_WELLKNOWN_DG_HND 0x1
+#define VMCI_FLAG_ANYCID_DG_HND    0x2
+#define VMCI_FLAG_DG_DELAYED_CB    0x4
+
+/* Event callback should fire in a delayed context (not interrupt context.) */
+#define VMCI_FLAG_EVENT_NONE       0
+#define VMCI_FLAG_EVENT_DELAYED_CB 0x1
+
+/*
+ * Maximum supported size of a VMCI datagram for routable datagrams.
+ * Datagrams going to the hypervisor are allowed to be larger.
+ */
+#define VMCI_MAX_DG_SIZE (17 * 4096)
+#define VMCI_MAX_DG_PAYLOAD_SIZE (VMCI_MAX_DG_SIZE - sizeof(struct vmci_dg))
+#define VMCI_DG_PAYLOAD(_dg) (void *)((char *)(_dg) + sizeof(struct vmci_dg))
+#define VMCI_DG_HEADERSIZE sizeof(struct vmci_dg)
+#define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payloadSize)
+#define VMCI_DG_SIZE_ALIGNED(_dg) ((VMCI_DG_SIZE(_dg) + 7) & (~((size_t) 0x7)))
+#define VMCI_MAX_DATAGRAM_QUEUE_SIZE (VMCI_MAX_DG_SIZE * 2)
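+
+/*
+ * For illustration (not part of the original patch): with a 24-byte
+ * struct vmci_dg (two 8-byte handles plus a 64-bit payload size),
+ * VMCI_MAX_DG_SIZE is 17 * 4096 = 69632 bytes, leaving
+ * VMCI_MAX_DG_PAYLOAD_SIZE = 69608 bytes, and VMCI_DG_SIZE_ALIGNED()
+ * rounds the total size up to the next multiple of 8.
+ */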
+
+/* Flags for VMCI QueuePair API. */
+enum {
+	/* Fail alloc if QP not created by peer. */
+	VMCI_QPFLAG_ATTACH_ONLY = 1 << 0,
+
+	/* Only allow attaches from local context. */
+	VMCI_QPFLAG_LOCAL = 1 << 1,
+
+	/* Host won't block when guest is quiesced. */
+	VMCI_QPFLAG_NONBLOCK = 1 << 2,
+
+	/* Pin data pages in ESX.  Used with NONBLOCK */
+	VMCI_QPFLAG_PINNED = 1 << 3,
+
+	/* Update the following flag when adding new flags. */
+	VMCI_QP_ALL_FLAGS = (VMCI_QPFLAG_ATTACH_ONLY | VMCI_QPFLAG_LOCAL |
+			     VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED),
+
+	/* Convenience flags */
+	VMCI_QP_ASYMM = (VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED),
+	VMCI_QP_ASYMM_PEER = (VMCI_QPFLAG_ATTACH_ONLY | VMCI_QP_ASYMM),
+};
+
+/*
+ * We allow at least 1024 more event datagrams from the hypervisor past the
+ * normally allowed datagrams pending for a given context.  We define this
+ * limit on event datagrams from the hypervisor to guard against a DoS attack
+ * from a malicious VM which could repeatedly attach to and detach from a queue
+ * pair, causing events to be queued at the destination VM.  However, the rate
+ * at which such events can be generated is small since it requires a VM exit
+ * and handling of queue pair attach/detach call at the hypervisor.  Event
+ * datagrams may be queued up at the destination VM if it has interrupts
+ * disabled or if it is not draining events for some other reason.  1024
+ * datagrams is a grossly conservative estimate of the time for which
+ * interrupts may be disabled in the destination VM, but at the same time does
+ * not exacerbate the memory pressure problem on the host by much (size of each
+ * event datagram is small).
+ */
+#define VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE				\
+	(VMCI_MAX_DATAGRAM_QUEUE_SIZE +					\
+	 1024 * (sizeof(struct vmci_dg) + sizeof(struct vmci_event_data_max)))
+
+/*
+ * Struct used for querying, via VMCI_RESOURCES_QUERY, the availability of
+ * hypervisor resources.  Struct size is 32 bytes. All fields in struct are
+ * aligned to their natural alignment.
+ */
+struct vmci_resource_query_hdr {
+	struct vmci_dg hdr;
+	uint32_t numResources;
+	uint32_t _padding;
+};
+
+/*
+ * Convenience struct for negotiating vectors. Must match layout of
+ * struct vmci_resource_query_hdr minus the struct vmci_dg header.
+ */
+struct vmci_resource_query_msg {
+	uint32_t numResources;
+	uint32_t _padding;
+	uint32_t resources[1];
+};
+
+/*
+ * The maximum number of resources that can be queried using
+ * VMCI_RESOURCE_QUERY is 31, as the result is encoded in the lower 31
+ * bits of a positive return value. Negative values are reserved for
+ * errors.
+ */
+#define VMCI_RESOURCE_QUERY_MAX_NUM 31
+
+/* Maximum size for the VMCI_RESOURCE_QUERY request. */
+#define VMCI_RESOURCE_QUERY_MAX_SIZE				\
+	(sizeof(struct vmci_resource_query_hdr) +		\
+	 sizeof(uint32_t) * VMCI_RESOURCE_QUERY_MAX_NUM)
+
+/*
+ * Struct used for setting the notification bitmap.  All fields in
+ * struct are aligned to their natural alignment.
+ */
+struct vmci_notify_bm_set_msg {
+	struct vmci_dg hdr;
+	uint32_t bitmapPPN;
+	uint32_t _pad;
+};
+
+/*
+ * Struct used for linking a doorbell handle with an index in the
+ * notify bitmap. All fields in struct are aligned to their natural
+ * alignment.
+ */
+struct vmci_doorbell_link_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+	uint64_t notifyIdx;
+};
+
+/*
+ * Struct used for unlinking a doorbell handle from an index in the
+ * notify bitmap. All fields in struct are aligned to their natural
+ * alignment.
+ */
+struct vmci_doorbell_unlink_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/*
+ * Struct used for generating a notification on a doorbell handle. All
+ * fields in struct are aligned to their natural alignment.
+ */
+struct vmci_doorbell_ntfy_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/*
+ * This struct is used to contain data for events.  Size of this struct is a
+ * multiple of 8 bytes, and all fields are aligned to their natural alignment.
+ */
+struct vmci_event_data {
+	uint32_t event;		/* 4 bytes. */
+	uint32_t _pad;
+	/* Event payload is put here. */
+};
+
+
+/*
+ * Define the different VMCI_EVENT payload data types here.  All structs must
+ * be a multiple of 8 bytes, and fields must be aligned to their natural
+ * alignment.
+ */
+struct vmci_event_payld_ctx {
+	uint32_t contextID;	/* 4 bytes. */
+	uint32_t _pad;
+};
+
+struct vmci_event_payld_qp {
+	struct vmci_handle handle;	/* QueuePair handle. */
+	uint32_t peerId;	/* Context id of attaching/detaching VM. */
+	uint32_t _pad;
+};
+
+/*
+ * We define the following struct to get the size of the maximum event
+ * data the hypervisor may send to the guest.  If adding a new event
+ * payload type above, add it to the following struct too (inside the
+ * union).
+ */
+struct vmci_event_data_max {
+	struct vmci_event_data eventData;
+	union {
+		struct vmci_event_payld_ctx contextPayload;
+		struct vmci_event_payld_qp qpPayload;
+	} evDataPayload;
+};
+
+/*
+ * Struct used for VMCI_EVENT_SUBSCRIBE/UNSUBSCRIBE and
+ * VMCI_EVENT_HANDLER messages.  Struct size is 32 bytes.  All fields
+ * in struct are aligned to their natural alignment.
+ */
+struct vmci_event_msg {
+	struct vmci_dg hdr;
+
+	/* Has event type and payload. */
+	struct vmci_event_data eventData;
+
+	/* Payload gets put here. */
+};
+
+/*
+ * Structs used for QueuePair alloc and detach messages.  We align fields of
+ * these structs to 64bit boundaries.
+ */
+struct vmci_qp_alloc_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t numPPNs;
+
+	/* List of PPNs placed here. */
+};
+
+struct vmci_qp_detach_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/* VMCI Doorbell API. */
+#define VMCI_FLAG_DELAYED_CB 0x01
+
+typedef void (*VMCICallback) (void *clientData);
+
+/**
+ * struct vmci_qp - A vmw_vmci queue pair handle.
+ *
+ * This structure is used as a handle to a queue pair created by
+ * VMCI.  It is intentionally left opaque to clients.
+ */
+struct vmci_qp;
+
+/* Callback needed for correctly waiting on events. */
+typedef int (*VMCIDatagramRecvCB) (void *clientData,
+				   struct vmci_dg *msg);
+
+/* VMCI Event API. */
+typedef void (*VMCI_EventCB) (uint32_t subID, struct vmci_event_data *ed,
+			      void *clientData);
+
+/*
+ * We use the following inline function to access the payload data
+ * associated with a struct vmci_event_data.
+ */
+static inline void *vmci_event_data_payload(struct vmci_event_data *evData)
+{
+	return (void *)((char *)evData + sizeof *evData);
+}
+
+/*
+ * Helper to add a given offset to a head or tail pointer. Wraps the
+ * value of the pointer around the max size of the queue.
+ */
+static inline void vmci_qp_add_pointer(atomic64_t *var,
+				       size_t add,
+				       uint64_t size)
+{
+	uint64_t newVal = atomic64_read(var);
+
+	if (newVal >= size - add)
+		newVal -= size;
+
+	newVal += add;
+
+	atomic64_set(var, newVal);
+}
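+
+/*
+ * For illustration (not part of the original patch): in a 1024-byte
+ * queue, adding 8 bytes at offset 1020 wraps the pointer to offset 4
+ * (1020 >= 1024 - 8, so 1024 is subtracted before the addition).
+ */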
+
+/*
+ * Helper routine to get the Producer Tail from the supplied queue.
+ */
+static inline uint64_t
+vmci_q_header_producer_tail(const struct vmci_queue_header *qHeader)
+{
+	struct vmci_queue_header *qh = (struct vmci_queue_header *)qHeader;
+	return atomic64_read(&qh->producerTail);
+}
+
+/*
+ * Helper routine to get the Consumer Head from the supplied queue.
+ */
+static inline uint64_t
+vmci_q_header_consumer_head(const struct vmci_queue_header *qHeader)
+{
+	struct vmci_queue_header *qh = (struct vmci_queue_header *)qHeader;
+	return atomic64_read(&qh->consumerHead);
+}
+
+/*
+ * Helper routine to increment the Producer Tail.  Fundamentally,
+ * vmci_qp_add_pointer() is used to manipulate the tail itself.
+ */
+static inline void
+vmci_q_header_add_producer_tail(struct vmci_queue_header *qHeader,
+				size_t add,
+				uint64_t queueSize)
+{
+	vmci_qp_add_pointer(&qHeader->producerTail, add, queueSize);
+}
+
+/*
+ * Helper routine to increment the Consumer Head.  Fundamentally,
+ * vmci_qp_add_pointer() is used to manipulate the head itself.
+ */
+static inline void
+vmci_q_header_add_consumer_head(struct vmci_queue_header *qHeader,
+				size_t add,
+				uint64_t queueSize)
+{
+	vmci_qp_add_pointer(&qHeader->consumerHead, add, queueSize);
+}
+
+/*
+ * Helper routine for getting the head and the tail pointer for a queue.
+ * Both queue headers are needed to get both pointers for one queue.
+ */
+static inline void
+vmci_q_header_get_pointers(const struct vmci_queue_header *produceQHeader,
+			   const struct vmci_queue_header *consumeQHeader,
+			   uint64_t *producerTail,
+			   uint64_t *consumerHead)
+{
+	if (producerTail)
+		*producerTail = vmci_q_header_producer_tail(produceQHeader);
+
+	if (consumerHead)
+		*consumerHead = vmci_q_header_consumer_head(consumeQHeader);
+}
+
+static inline void vmci_q_header_init(struct vmci_queue_header *qHeader,
+				      const struct vmci_handle handle)
+{
+	qHeader->handle = handle;
+	atomic64_set(&qHeader->producerTail, 0);
+	atomic64_set(&qHeader->consumerHead, 0);
+}
+
+/*
+ * Finds available free space in a produce queue to enqueue more
+ * data or reports an error if queue pair corruption is detected.
+ */
+static inline int64_t
+vmci_q_header_free_space(const struct vmci_queue_header *produceQHeader,
+			 const struct vmci_queue_header *consumeQHeader,
+			 const uint64_t produceQSize)
+{
+	uint64_t tail;
+	uint64_t head;
+	uint64_t freeSpace;
+
+	tail = vmci_q_header_producer_tail(produceQHeader);
+	head = vmci_q_header_consumer_head(consumeQHeader);
+
+	if (tail >= produceQSize || head >= produceQSize)
+		return VMCI_ERROR_INVALID_SIZE;
+
+	/*
+	 * Deduct 1 to avoid tail becoming equal to head which causes
+	 * ambiguity. If head and tail are equal it means that the
+	 * queue is empty.
+	 */
+	if (tail >= head)
+		freeSpace = produceQSize - (tail - head) - 1;
+	else
+		freeSpace = head - tail - 1;
+
+	return freeSpace;
+}
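+
+/*
+ * For illustration (not part of the original patch): with
+ * produceQSize = 1024, producerTail = 100 and consumerHead = 20,
+ * 80 bytes are in flight and the free space is 1024 - 80 - 1 = 943;
+ * when tail == head the queue is empty and the free space is 1023.
+ */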
+
+/*
+ * vmci_q_header_free_space() does all the heavy lifting of
+ * determining the number of free bytes in a Queue.  This routine
+ * then subtracts that size from the full size of the Queue so
+ * the caller knows how many bytes are ready to be dequeued.
+ * Results:
+ * On success, available data size in bytes (up to MAX_INT64).
+ * On failure, appropriate error code.
+ */
+static inline int64_t
+vmci_q_header_buf_ready(const struct vmci_queue_header *consumeQHeader,
+			const struct vmci_queue_header *produceQHeader,
+			const uint64_t consumeQSize)
+{
+	int64_t freeSpace;
+
+	freeSpace = vmci_q_header_free_space(consumeQHeader,
+					     produceQHeader, consumeQSize);
+	if (freeSpace < VMCI_SUCCESS)
+		return freeSpace;
+
+	return consumeQSize - freeSpace - 1;
+}
+
+static inline struct vmci_handle vmci_make_handle(uint32_t cid, uint32_t rid)
+{
+	struct vmci_handle h;
+
+	h.context = cid;
+	h.resource = rid;
+
+	return h;
+}
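+
+/*
+ * For illustration (not part of the original patch):
+ * vmci_make_handle(cid, VMCI_CONTEXT_RESOURCE_ID) builds the handle
+ * that refers to context cid itself, while
+ * vmci_make_handle(VMCI_INVALID_ID, VMCI_INVALID_ID) yields a handle
+ * for which VMCI_HANDLE_INVALID() is true.
+ */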
+
+#endif /* _VMW_VMCI_DEF_H_ */
-- 
1.7.0.4


^ permalink raw reply related	[flat|nested] 72+ messages in thread

* [vmw_vmci 11/11] Apply the header code to make VMCI build
@ 2012-07-26 23:39   ` Andrew Stiegmann (stieg)
  0 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann (stieg) @ 2012-07-26 23:39 UTC (permalink / raw)
  To: linux-kernel, virtualization
  Cc: pv-drivers, vm-crosstalk, Andrew Stiegmann (stieg), cschamp, gregkh

Adds all the necessary files to enable building of the VMCI module
with the Linux Makefiles and Kconfig systems. Also adds the header
files used for building modules against the driver.

Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
---
 drivers/misc/Kconfig                    |    1 +
 drivers/misc/Makefile                   |    1 +
 drivers/misc/vmw_vmci/Kconfig           |   16 +
 drivers/misc/vmw_vmci/Makefile          |   43 ++
 drivers/misc/vmw_vmci/vmci_common_int.h |   58 ++
 include/linux/vmw_vmci_api.h            |   89 +++
 include/linux/vmw_vmci_defs.h           |  921 +++++++++++++++++++++++++++++++
 7 files changed, 1129 insertions(+), 0 deletions(-)
 create mode 100644 drivers/misc/vmw_vmci/Kconfig
 create mode 100644 drivers/misc/vmw_vmci/Makefile
 create mode 100644 drivers/misc/vmw_vmci/vmci_common_int.h
 create mode 100644 include/linux/vmw_vmci_api.h
 create mode 100644 include/linux/vmw_vmci_defs.h

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 2661f6e..fe38c7a 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -517,4 +517,5 @@ source "drivers/misc/lis3lv02d/Kconfig"
 source "drivers/misc/carma/Kconfig"
 source "drivers/misc/altera-stapl/Kconfig"
 source "drivers/misc/mei/Kconfig"
+source "drivers/misc/vmw_vmci/Kconfig"
 endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 456972f..af9e413 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -51,3 +51,4 @@ obj-y				+= carma/
 obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
 obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
 obj-$(CONFIG_INTEL_MEI)		+= mei/
+obj-y				+= vmw_vmci/
diff --git a/drivers/misc/vmw_vmci/Kconfig b/drivers/misc/vmw_vmci/Kconfig
new file mode 100644
index 0000000..55015e7
--- /dev/null
+++ b/drivers/misc/vmw_vmci/Kconfig
@@ -0,0 +1,16 @@
+#
+# VMware VMCI device
+#
+
+config VMWARE_VMCI
+	tristate "VMware VMCI Driver"
+	depends on X86
+	help
+	  This is VMware's Virtual Machine Communication Interface.  It enables
+	  high-speed communication between host and guest in a virtual
+	  environment via the VMCI virtual device.
+
+	  If unsure, say N.
+
+	  To compile this driver as a module, choose M here: the
+	  module will be called vmw_vmci.
diff --git a/drivers/misc/vmw_vmci/Makefile b/drivers/misc/vmw_vmci/Makefile
new file mode 100644
index 0000000..19755fb
--- /dev/null
+++ b/drivers/misc/vmw_vmci/Makefile
@@ -0,0 +1,43 @@
+################################################################################
+#
+# Linux driver for VMware's VMCI device.
+#
+# Copyright (C) 2007-2012, VMware, Inc. All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License as published by the
+# Free Software Foundation; version 2 of the License and no later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
+# NON INFRINGEMENT.  See the GNU General Public License for more
+# details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# The full GNU General Public License is included in this distribution in
+# the file called "COPYING".
+#
+# Maintained by: Andrew Stiegmann <pv-drivers@vmware.com>
+#
+################################################################################
+
+#
+# Makefile for the VMware VMCI
+#
+
+obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci.o
+
+vmw_vmci-objs += vmci_context.o
+vmw_vmci-objs += vmci_datagram.o
+vmw_vmci-objs += vmci_doorbell.o
+vmw_vmci-objs += vmci_driver.o
+vmw_vmci-objs += vmci_event.o
+vmw_vmci-objs += vmci_handle_array.o
+vmw_vmci-objs += vmci_hash_table.o
+vmw_vmci-objs += vmci_queue_pair.o
+vmw_vmci-objs += vmci_resource.o
+vmw_vmci-objs += vmci_route.o
diff --git a/drivers/misc/vmw_vmci/vmci_common_int.h b/drivers/misc/vmw_vmci/vmci_common_int.h
new file mode 100644
index 0000000..6e82610
--- /dev/null
+++ b/drivers/misc/vmw_vmci/vmci_common_int.h
@@ -0,0 +1,58 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMCI_COMMONINT_H_
+#define _VMCI_COMMONINT_H_
+
+#include <linux/printk.h>
+#include <linux/vmw_vmci_defs.h>
+
+#include "vmci_handle_array.h"
+
+#define ASSERT(cond) BUG_ON(!(cond))
+
+#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
+#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
+
+#define PCI_VENDOR_ID_VMWARE	0x15AD
+#define PCI_DEVICE_ID_VMWARE_VMCI	0x0740
+#define VMCI_DRIVER_VERSION_STRING	"9.5.5.0-k"
+#define MODULE_NAME "vmw_vmci"
+
+/* Print magic... whee! */
+#ifdef pr_fmt
+#undef pr_fmt
+#define pr_fmt(fmt) MODULE_NAME ": " fmt
+#endif
+
+/*
+ * Utility function that checks whether two entities are allowed
+ * to interact. If one of them is restricted, the other one must
+ * be trusted.
+ */
+static inline bool vmci_deny_interaction(uint32_t partOne,
+					 uint32_t partTwo)
+{
+	return ((partOne & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+		!(partTwo & VMCI_PRIVILEGE_FLAG_TRUSTED)) ||
+	       ((partTwo & VMCI_PRIVILEGE_FLAG_RESTRICTED) &&
+		!(partOne & VMCI_PRIVILEGE_FLAG_TRUSTED));
+}
+
+#endif				/* _VMCI_COMMONINT_H_ */
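
To make the check concrete, a small sketch (illustrative only, not part of
the patch; it just exercises the privilege flag definitions from
vmw_vmci_defs.h):

/* Illustrative only: how the check falls out for two flag pairings. */
static inline bool vmci_example_privilege_checks(void)
{
	/* Restricted endpoint vs. trusted endpoint: interaction allowed. */
	bool a = vmci_deny_interaction(VMCI_PRIVILEGE_FLAG_RESTRICTED,
				       VMCI_PRIVILEGE_FLAG_TRUSTED);

	/* Restricted endpoint vs. unprivileged endpoint: interaction denied. */
	bool b = vmci_deny_interaction(VMCI_PRIVILEGE_FLAG_RESTRICTED,
				       VMCI_NO_PRIVILEGE_FLAGS);

	return !a && b;		/* true with the definitions above */
}
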
diff --git a/include/linux/vmw_vmci_api.h b/include/linux/vmw_vmci_api.h
new file mode 100644
index 0000000..71a4668
--- /dev/null
+++ b/include/linux/vmw_vmci_api.h
@@ -0,0 +1,89 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef __VMW_VMCI_API_H__
+#define __VMW_VMCI_API_H__
+
+#include <linux/vmw_vmci_defs.h>
+
+#undef  VMCI_KERNEL_API_VERSION
+#define VMCI_KERNEL_API_VERSION_2 2
+#define VMCI_KERNEL_API_VERSION   VMCI_KERNEL_API_VERSION_2
+
+typedef void (VMCI_DeviceShutdownFn) (void *deviceRegistration, void *userData);
+
+bool VMCI_DeviceGet(uint32_t *apiVersion,
+		    VMCI_DeviceShutdownFn *deviceShutdownCB,
+		    void *userData, void **deviceRegistration);
+void VMCI_DeviceRelease(void *deviceRegistration);
+int VMCIDatagram_CreateHnd(uint32_t resourceID, uint32_t flags,
+			   VMCIDatagramRecvCB recvCB, void *clientData,
+			   struct vmci_handle *outHandle);
+int VMCIDatagram_CreateHndPriv(uint32_t resourceID, uint32_t flags,
+			       uint32_t privFlags,
+			       VMCIDatagramRecvCB recvCB, void *clientData,
+			       struct vmci_handle *outHandle);
+int VMCIDatagram_DestroyHnd(struct vmci_handle handle);
+int VMCIDatagram_Send(struct vmci_dg *msg);
+int VMCIDoorbell_Create(struct vmci_handle *handle, uint32_t flags,
+			uint32_t privFlags,
+			VMCICallback notifyCB, void *clientData);
+int VMCIDoorbell_Destroy(struct vmci_handle handle);
+int VMCIDoorbell_Notify(struct vmci_handle handle, uint32_t privFlags);
+uint32_t VMCI_GetContextID(void);
+uint32_t VMCI_Version(void);
+int VMCI_ContextID2HostVmID(uint32_t contextID, void *hostVmID,
+			    size_t hostVmIDLen);
+int VMCI_IsContextOwner(uint32_t contextID, void *hostUser);
+
+int VMCIEvent_Subscribe(uint32_t event, uint32_t flags,
+			VMCI_EventCB callback, void *callbackData,
+			uint32_t *subID);
+int VMCIEvent_Unsubscribe(uint32_t subID);
+uint32_t VMCIContext_GetPrivFlags(uint32_t contextID);
+int VMCIQPair_Alloc(struct vmci_qp **qpair,
+		    struct vmci_handle *handle,
+		    uint64_t produceQSize,
+		    uint64_t consumeQSize,
+		    uint32_t peer, uint32_t flags, uint32_t privFlags);
+int VMCIQPair_Detach(struct vmci_qp **qpair);
+int VMCIQPair_GetProduceIndexes(const struct vmci_qp *qpair,
+				uint64_t *producerTail,
+				uint64_t *consumerHead);
+int VMCIQPair_GetConsumeIndexes(const struct vmci_qp *qpair,
+				uint64_t *consumerTail,
+				uint64_t *producerHead);
+int64_t VMCIQPair_ProduceFreeSpace(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ProduceBufReady(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ConsumeFreeSpace(const struct vmci_qp *qpair);
+int64_t VMCIQPair_ConsumeBufReady(const struct vmci_qp *qpair);
+ssize_t VMCIQPair_Enqueue(struct vmci_qp *qpair,
+			  const void *buf, size_t bufSize, int mode);
+ssize_t VMCIQPair_Dequeue(struct vmci_qp *qpair,
+			  void *buf, size_t bufSize, int mode);
+ssize_t VMCIQPair_Peek(struct vmci_qp *qpair, void *buf, size_t bufSize,
+		       int mode);
+ssize_t VMCIQPair_EnqueueV(struct vmci_qp *qpair,
+			   void *iov, size_t iovSize, int mode);
+ssize_t VMCIQPair_DequeueV(struct vmci_qp *qpair,
+			   void *iov, size_t iovSize, int mode);
+ssize_t VMCIQPair_PeekV(struct vmci_qp *qpair, void *iov, size_t iovSize,
+			int mode);
+
+#endif /* !__VMW_VMCI_API_H__ */
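
A rough sketch of how a kernel client might drive the queue pair API above
(illustrative only and not part of this patch; the peer context ID, the queue
sizes and the trailing mode argument of the enqueue/dequeue calls are
assumptions made for the example):

static int example_qpair_roundtrip(uint32_t peer_cid)
{
	struct vmci_qp *qpair;
	struct vmci_handle handle = VMCI_INVALID_HANDLE;
	char out[] = "ping";
	char in[sizeof(out)];
	int result;

	/* Symmetric 16 KB produce/consume queues towards peer_cid. */
	result = VMCIQPair_Alloc(&qpair, &handle, 16 * 1024, 16 * 1024,
				 peer_cid, 0 /* flags */,
				 VMCI_NO_PRIVILEGE_FLAGS);
	if (result < VMCI_SUCCESS)
		return result;

	/* Write into our produce queue... */
	VMCIQPair_Enqueue(qpair, out, sizeof(out), 0);

	/* ...and, once the peer has produced data, read from our consume queue. */
	VMCIQPair_Dequeue(qpair, in, sizeof(in), 0);

	return VMCIQPair_Detach(&qpair);
}
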
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
new file mode 100644
index 0000000..d71d5e0
--- /dev/null
+++ b/include/linux/vmw_vmci_defs.h
@@ -0,0 +1,921 @@
+/*
+ * VMware VMCI Driver
+ *
+ * Copyright (C) 2012 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation version 2 and no later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ * for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#ifndef _VMW_VMCI_DEF_H_
+#define _VMW_VMCI_DEF_H_
+
+#include <linux/atomic.h>
+
+/* Register offsets. */
+#define VMCI_STATUS_ADDR      0x00
+#define VMCI_CONTROL_ADDR     0x04
+#define VMCI_ICR_ADDR	      0x08
+#define VMCI_IMR_ADDR         0x0c
+#define VMCI_DATA_OUT_ADDR    0x10
+#define VMCI_DATA_IN_ADDR     0x14
+#define VMCI_CAPS_ADDR        0x18
+#define VMCI_RESULT_LOW_ADDR  0x1c
+#define VMCI_RESULT_HIGH_ADDR 0x20
+
+/* Max number of devices. */
+#define VMCI_MAX_DEVICES 1
+
+/* Status register bits. */
+#define VMCI_STATUS_INT_ON     0x1
+
+/* Control register bits. */
+#define VMCI_CONTROL_RESET        0x1
+#define VMCI_CONTROL_INT_ENABLE   0x2
+#define VMCI_CONTROL_INT_DISABLE  0x4
+
+/* Capabilities register bits. */
+#define VMCI_CAPS_HYPERCALL     0x1
+#define VMCI_CAPS_GUESTCALL     0x2
+#define VMCI_CAPS_DATAGRAM      0x4
+#define VMCI_CAPS_NOTIFICATIONS 0x8
+
+/* Interrupt Cause register bits. */
+#define VMCI_ICR_DATAGRAM      0x1
+#define VMCI_ICR_NOTIFICATION  0x2
+
+/* Interrupt Mask register bits. */
+#define VMCI_IMR_DATAGRAM      0x1
+#define VMCI_IMR_NOTIFICATION  0x2
+
+/* Interrupt type. */
+enum {
+	VMCI_INTR_TYPE_INTX = 0,
+	VMCI_INTR_TYPE_MSI = 1,
+	VMCI_INTR_TYPE_MSIX = 2,
+};
+
+/* Maximum MSI/MSI-X interrupt vectors in the device. */
+#define VMCI_MAX_INTRS 2
+
+/*
+ * Supported interrupt vectors.  There is one for each ICR value above,
+ * but here they indicate the position in the vector array/message ID.
+ */
+enum {
+	VMCI_INTR_DATAGRAM = 0,
+	VMCI_INTR_NOTIFICATION = 1,
+};
+
+/*
+ * A single VMCI device has an upper limit of 128MB on the amount of
+ * memory that can be used for queue pairs.
+ */
+#define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
+
+/*
+ * Queues with pre-mapped data pages must be small, so that we don't pin
+ * too much kernel memory (especially on vmkernel).  We limit a queuepair to
+ * 32 KB, or 16 KB per queue for symmetrical pairs.
+ */
+#define VMCI_MAX_PINNED_QP_MEMORY (32 * 1024)
+
+/*
+ * We have a fixed set of resource IDs available in the VMX.
+ * This allows us to have a very simple implementation since we statically
+ * know how many will create datagram handles. If a new caller arrives and
+ * we have run out of slots we can manually increment the maximum size of
+ * available resource IDs.
+ *
+ * VMCI reserved hypervisor datagram resource IDs.
+ */
+enum {
+	VMCI_RESOURCES_QUERY = 0,
+	VMCI_GET_CONTEXT_ID = 1,
+	VMCI_SET_NOTIFY_BITMAP = 2,
+	VMCI_DOORBELL_LINK = 3,
+	VMCI_DOORBELL_UNLINK = 4,
+	VMCI_DOORBELL_NOTIFY = 5,
+/*
+ * VMCI_DATAGRAM_REQUEST_MAP and VMCI_DATAGRAM_REMOVE_MAP are
+ * obsoleted by the removal of VM to VM communication.
+ */
+	VMCI_DATAGRAM_REQUEST_MAP = 6,
+	VMCI_DATAGRAM_REMOVE_MAP = 7,
+	VMCI_EVENT_SUBSCRIBE = 8,
+	VMCI_EVENT_UNSUBSCRIBE = 9,
+	VMCI_QUEUEPAIR_ALLOC = 10,
+	VMCI_QUEUEPAIR_DETACH = 11,
+
+/*
+ * VMCI_VSOCK_VMX_LOOKUP was assigned to 12 for Fusion 3.0/3.1,
+ * WS 7.0/7.1 and ESX 4.1
+ */
+	VMCI_HGFS_TRANSPORT = 13,
+	VMCI_UNITY_PBRPC_REGISTER = 14,
+	VMCI_RESOURCE_MAX = 15,
+};
+
+/**
+ * struct vmci_handle - Ownership information structure
+ * @context:	The VMX context ID.
+ * @resource:	The resource ID (used for locating in resource hash).
+ *
+ * The vmci_handle structure is used to track resources used within
+ * vmw_vmci.
+ */
+struct vmci_handle {
+	uint32_t context;
+	uint32_t resource;
+};
+
+#define VMCI_HANDLE_EQUAL(_h1, _h2) ((_h1).context == (_h2).context &&	\
+				     (_h1).resource == (_h2).resource)
+
+#define VMCI_INVALID_ID ~0
+static const struct vmci_handle VMCI_INVALID_HANDLE = { VMCI_INVALID_ID,
+							VMCI_INVALID_ID
+};
+
+#define VMCI_HANDLE_INVALID(_handle)				\
+	VMCI_HANDLE_EQUAL((_handle), VMCI_INVALID_HANDLE)
+
+/*
+ * The below defines can be used to send anonymous requests.
+ * This also indicates that no response is expected.
+ */
+#define VMCI_ANON_SRC_CONTEXT_ID   VMCI_INVALID_ID
+#define VMCI_ANON_SRC_RESOURCE_ID  VMCI_INVALID_ID
+#define VMCI_ANON_SRC_HANDLE       vmci_make_handle(VMCI_ANON_SRC_CONTEXT_ID, \
+						    VMCI_ANON_SRC_RESOURCE_ID)
+
+/* The lowest 16 context ids are reserved for internal use. */
+#define VMCI_RESERVED_CID_LIMIT ((uint32_t) 16)
+
+/*
+ * Hypervisor context id, used for calling into hypervisor
+ * supplied services from the VM.
+ */
+#define VMCI_HYPERVISOR_CONTEXT_ID 0
+
+/*
+ * Well-known context id, a logical context that contains a set of
+ * well-known services. This context ID is now obsolete.
+ */
+#define VMCI_WELL_KNOWN_CONTEXT_ID 1
+
+/*
+ * Context ID used by host endpoints.
+ */
+#define VMCI_HOST_CONTEXT_ID  2
+
+#define VMCI_CONTEXT_IS_VM(_cid) (VMCI_INVALID_ID != (_cid) &&		\
+				  (_cid) > VMCI_HOST_CONTEXT_ID)
+
+/*
+ * The VMCI_CONTEXT_RESOURCE_ID is used together with vmci_make_handle to make
+ * handles that refer to a specific context.
+ */
+#define VMCI_CONTEXT_RESOURCE_ID 0
+
+/*
+ * VMCI error codes.
+ */
+enum {
+	VMCI_SUCCESS_QUEUEPAIR_ATTACH	=  5,
+	VMCI_SUCCESS_QUEUEPAIR_CREATE	=  4,
+	VMCI_SUCCESS_LAST_DETACH	=  3,
+	VMCI_SUCCESS_ACCESS_GRANTED	=  2,
+	VMCI_SUCCESS_ENTRY_DEAD	=  1,
+	VMCI_SUCCESS			=  0,
+	VMCI_ERROR_INVALID_RESOURCE	= (-1),
+	VMCI_ERROR_INVALID_ARGS	= (-2),
+	VMCI_ERROR_NO_MEM		= (-3),
+	VMCI_ERROR_DATAGRAM_FAILED	= (-4),
+	VMCI_ERROR_MORE_DATA		= (-5),
+	VMCI_ERROR_NO_MORE_DATAGRAMS	= (-6),
+	VMCI_ERROR_NO_ACCESS		= (-7),
+	VMCI_ERROR_NO_HANDLE		= (-8),
+	VMCI_ERROR_DUPLICATE_ENTRY	= (-9),
+	VMCI_ERROR_DST_UNREACHABLE	= (-10),
+	VMCI_ERROR_PAYLOAD_TOO_LARGE	= (-11),
+	VMCI_ERROR_INVALID_PRIV	= (-12),
+	VMCI_ERROR_GENERIC		= (-13),
+	VMCI_ERROR_PAGE_ALREADY_SHARED	= (-14),
+	VMCI_ERROR_CANNOT_SHARE_PAGE	= (-15),
+	VMCI_ERROR_CANNOT_UNSHARE_PAGE	= (-16),
+	VMCI_ERROR_NO_PROCESS		= (-17),
+	VMCI_ERROR_NO_DATAGRAM	= (-18),
+	VMCI_ERROR_NO_RESOURCES	= (-19),
+	VMCI_ERROR_UNAVAILABLE	= (-20),
+	VMCI_ERROR_NOT_FOUND		= (-21),
+	VMCI_ERROR_ALREADY_EXISTS	= (-22),
+	VMCI_ERROR_NOT_PAGE_ALIGNED	= (-23),
+	VMCI_ERROR_INVALID_SIZE	= (-24),
+	VMCI_ERROR_REGION_ALREADY_SHARED = (-25),
+	VMCI_ERROR_TIMEOUT	= (-26),
+	VMCI_ERROR_DATAGRAM_INCOMPLETE	= (-27),
+	VMCI_ERROR_INCORRECT_IRQL	= (-28),
+	VMCI_ERROR_EVENT_UNKNOWN	= (-29),
+	VMCI_ERROR_OBSOLETE	= (-30),
+	VMCI_ERROR_QUEUEPAIR_MISMATCH	= (-31),
+	VMCI_ERROR_QUEUEPAIR_NOTSET	= (-32),
+	VMCI_ERROR_QUEUEPAIR_NOTOWNER	= (-33),
+	VMCI_ERROR_QUEUEPAIR_NOTATTACHED	= (-34),
+	VMCI_ERROR_QUEUEPAIR_NOSPACE	= (-35),
+	VMCI_ERROR_QUEUEPAIR_NODATA	= (-36),
+	VMCI_ERROR_BUSMEM_INVALIDATION	= (-37),
+	VMCI_ERROR_MODULE_NOT_LOADED	= (-38),
+	VMCI_ERROR_DEVICE_NOT_FOUND	= (-39),
+	VMCI_ERROR_QUEUEPAIR_NOT_READY	= (-40),
+	VMCI_ERROR_WOULD_BLOCK	= (-41),
+
+	/* VMCI clients should return error codes within this range */
+	VMCI_ERROR_CLIENT_MIN		= (-500),
+	VMCI_ERROR_CLIENT_MAX	= (-550),
+
+	/* Internal error codes. */
+	VMCI_SHAREDMEM_ERROR_BAD_CONTEXT	= (-1000),
+};
+
+/* VMCI reserved events. */
+enum {
+	/* Only applicable to guest endpoints */
+	VMCI_EVENT_CTX_ID_UPDATE  = 0,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_CTX_REMOVED    = 1,
+
+	/* Only applicable to guest endpoints */
+	VMCI_EVENT_QP_RESUMED	  = 2,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_QP_PEER_ATTACH = 3,
+
+	/* Applicable to guest and host */
+	VMCI_EVENT_QP_PEER_DETACH = 4,
+
+	/*
+	 * Applicable to VMX and vmk.  On vmk,
+	 * this event has the Context payload type.
+	 */
+	VMCI_EVENT_MEM_ACCESS_ON  = 5,
+
+	/*
+	 * Applicable to VMX and vmk.  Same as
+	 * above for the payload type.
+	 */
+	VMCI_EVENT_MEM_ACCESS_OFF = 6,
+	VMCI_EVENT_MAX = 7,
+};
+
+/*
+ * Of the above events, a few are reserved for use in the VMX, and
+ * other endpoints (guest and host kernel) should not use them. For
+ * the rest of the events, we allow both host and guest endpoints to
+ * subscribe to them, to maintain the same API for host and guest
+ * endpoints.
+ */
+#define VMCI_EVENT_VALID_VMX(_event) ((_event) == VMCI_EVENT_MEM_ACCESS_ON || \
+				      (_event) == VMCI_EVENT_MEM_ACCESS_OFF)
+
+#define VMCI_EVENT_VALID(_event) ((_event) < VMCI_EVENT_MAX &&		\
+				  !VMCI_EVENT_VALID_VMX(_event))
+
+/* Reserved guest datagram resource ids. */
+#define VMCI_EVENT_HANDLER 0
+
+/*
+ * VMCI coarse-grained privileges (per context or host
+ * process/endpoint). An entity with the restricted flag is only
+ * allowed to interact with the hypervisor and trusted entities.
+ */
+enum {
+	VMCI_NO_PRIVILEGE_FLAGS = 0,
+	VMCI_PRIVILEGE_FLAG_RESTRICTED = 1,
+	VMCI_PRIVILEGE_FLAG_TRUSTED = 2,
+	VMCI_PRIVILEGE_ALL_FLAGS = (VMCI_PRIVILEGE_FLAG_RESTRICTED |
+				    VMCI_PRIVILEGE_FLAG_TRUSTED),
+	VMCI_DEFAULT_PROC_PRIVILEGE_FLAGS = VMCI_NO_PRIVILEGE_FLAGS,
+	VMCI_LEAST_PRIVILEGE_FLAGS = VMCI_PRIVILEGE_FLAG_RESTRICTED,
+	VMCI_MAX_PRIVILEGE_FLAGS = VMCI_PRIVILEGE_FLAG_TRUSTED,
+};
+
+/* 0 through VMCI_RESERVED_RESOURCE_ID_MAX are reserved. */
+#define VMCI_RESERVED_RESOURCE_ID_MAX 1023
+
+/*
+ * Driver version.
+ *
+ * Increment major version when you make an incompatible change.
+ * Compatibility goes both ways (old driver with new executable
+ * as well as new driver with old executable).
+ */
+
+/* Never change VMCI_VERSION_SHIFT_WIDTH */
+#define VMCI_VERSION_SHIFT_WIDTH 16
+#define VMCI_MAKE_VERSION(_major, _minor)				\
+	((_major) << VMCI_VERSION_SHIFT_WIDTH | (uint16_t) (_minor))
+
+#define VMCI_VERSION_MAJOR(v)  ((uint32_t) (v) >> VMCI_VERSION_SHIFT_WIDTH)
+#define VMCI_VERSION_MINOR(v)  ((uint16_t) (v))
+
+/*
+ * VMCI_VERSION is always the current version.  Subsequently listed
+ * versions are ways of detecting previous versions of the connecting
+ * application (i.e., VMX).
+ *
+ * VMCI_VERSION_NOVMVM: This version removed support for VM to VM
+ * communication.
+ *
+ * VMCI_VERSION_NOTIFY: This version introduced doorbell notification
+ * support.
+ *
+ * VMCI_VERSION_HOSTQP: This version introduced host end point support
+ * for hosted products.
+ *
+ * VMCI_VERSION_PREHOSTQP: This is the version prior to the adoption of
+ * support for host end-points.
+ *
+ * VMCI_VERSION_PREVERS2: This fictional version number is intended to
+ * represent the version of a VMX which doesn't call into the driver
+ * with ioctl VERSION2 and thus doesn't establish its version with the
+ * driver.
+ */
+
+#define VMCI_VERSION                VMCI_VERSION_NOVMVM
+#define VMCI_VERSION_NOVMVM         VMCI_MAKE_VERSION(11, 0)
+#define VMCI_VERSION_NOTIFY         VMCI_MAKE_VERSION(10, 0)
+#define VMCI_VERSION_HOSTQP         VMCI_MAKE_VERSION(9, 0)
+#define VMCI_VERSION_PREHOSTQP      VMCI_MAKE_VERSION(8, 0)
+#define VMCI_VERSION_PREVERS2       VMCI_MAKE_VERSION(1, 0)
+
+/*
+ * Linux defines _IO* macros, but the core kernel code ignores the encoded
+ * ioctl value. It is up to individual drivers to decode the value (for
+ * example to look at the size of a structure to determine which version
+ * of a specific command should be used) or not (which is what we
+ * currently do, so right now the ioctl value for a given command is the
+ * command itself).
+ *
+ * Hence, we just define the IOCTL_VMCI_foo values directly, with no
+ * intermediate IOCTLCMD_ representation.
+ */
+#  define IOCTLCMD(_cmd) IOCTL_VMCI_ ## _cmd
+
+enum {
+	/*
+	 * We need to bracket the range of values used for ioctls,
+	 * because x86_64 Linux forces us to explicitly register ioctl
+	 * handlers by value for handling 32 bit ioctl syscalls.
+	 * Hence FIRST and LAST.  Pick something for FIRST that
+	 * doesn't collide with vmmon (2001+).
+	 */
+	IOCTLCMD(FIRST) = 1951,
+	IOCTLCMD(VERSION) = IOCTLCMD(FIRST),
+
+	/* BEGIN VMCI */
+	IOCTLCMD(INIT_CONTEXT),
+
+	/*
+	 * The following two were used for process and datagram
+	 * process creation.  They are not used anymore and reserved
+	 * for future use.  They will fail if issued.
+	 */
+	IOCTLCMD(RESERVED1),
+	IOCTLCMD(RESERVED2),
+
+	/*
+	 * The following used to be for shared memory. It is now
+	 * unused and is reserved for future use. It will fail if
+	 * issued.
+	 */
+	IOCTLCMD(RESERVED3),
+
+	/*
+	 * The following three also used to be for shared
+	 * memory. An old WS6 user-mode client might try to use them
+	 * with the new driver, but since we ensure that only contexts
+	 * created by VMX'en of the appropriate version
+	 * (VMCI_VERSION_NOTIFY or VMCI_VERSION_NEWQP) or higher use
+	 * these ioctls, everything is fine.
+	 */
+	IOCTLCMD(QUEUEPAIR_SETVA),
+	IOCTLCMD(NOTIFY_RESOURCE),
+	IOCTLCMD(NOTIFICATIONS_RECEIVE),
+	IOCTLCMD(VERSION2),
+	IOCTLCMD(QUEUEPAIR_ALLOC),
+	IOCTLCMD(QUEUEPAIR_SETPAGEFILE),
+	IOCTLCMD(QUEUEPAIR_DETACH),
+	IOCTLCMD(DATAGRAM_SEND),
+	IOCTLCMD(DATAGRAM_RECEIVE),
+	IOCTLCMD(DATAGRAM_REQUEST_MAP),
+	IOCTLCMD(DATAGRAM_REMOVE_MAP),
+	IOCTLCMD(CTX_ADD_NOTIFICATION),
+	IOCTLCMD(CTX_REMOVE_NOTIFICATION),
+	IOCTLCMD(CTX_GET_CPT_STATE),
+	IOCTLCMD(CTX_SET_CPT_STATE),
+	IOCTLCMD(GET_CONTEXT_ID),
+	IOCTLCMD(LAST),
+	/* END VMCI */
+
+	/*
+	 * VMCI Socket IOCTLS are defined next and go from
+	 * IOCTLCMD(LAST) (1972) to 1990.  VMware reserves a range of
+	 * 4 ioctls for VMCI Sockets to grow.  We cannot reserve many
+	 * ioctls here since we are close to overlapping with vmmon
+	 * ioctls (2001+).  Define a meta-ioctl if running out of this
+	 * binary space.
+	 */
+	IOCTLCMD(SOCKETS_LAST) = 1994,	/* 1994 on Linux. */
+
+	/*
+	 * The VSockets ioctls occupy the block above.  We define a
+	 * new range of VMCI ioctls to maintain binary compatibility
+	 * between the user land and the kernel driver.  Careful,
+	 * vmmon ioctls start from 2001, so this means we can add only
+	 * 4 new VMCI ioctls.  Define a meta-ioctl if running out of
+	 * this binary space.
+	 */
+	IOCTLCMD(FIRST2),
+	IOCTLCMD(SET_NOTIFY) = IOCTLCMD(FIRST2),	/* 1995 on Linux. */
+	IOCTLCMD(LAST2),
+};
+
+/* Clean up helper macros */
+#undef IOCTLCMD
+
+/*
+ * struct vmci_queue_header - VMCI Queue Header information.
+ *
+ * A Queue cannot stand by itself as designed.  Each Queue's header
+ * contains a pointer into itself (the producerTail) and into its peer
+ * (consumerHead).  The reason for the separation is one of
+ * accessibility: Each end-point can modify two things: where the next
+ * location to enqueue is within its produceQ (producerTail); and
+ * where the next dequeue location is in its consumeQ (consumerHead).
+ *
+ * An end-point cannot modify the pointers of its peer (guest to
+ * guest; NOTE that in the host both queue headers are mapped r/w).
+ * But, each end-point needs read access to both Queue header
+ * structures in order to determine how much space is used (or left)
+ * in the Queue.  This is because for an end-point to know how full
+ * its produceQ is, it needs to use the consumerHead that points into
+ * the produceQ but -that- consumerHead is in the Queue header for
+ * that end-point's consumeQ.
+ *
+ * Thoroughly confused?  Sorry.
+ *
+ * producerTail: the point to enqueue new entrants.  When you approach
+ * a line in a store, for example, you walk up to the tail.
+ *
+ * consumerHead: the point in the queue from which the next element is
+ * dequeued.  In other words, the next in line is whoever is at the
+ * head of the line.
+ *
+ * Also, producerTail points to an empty byte in the Queue, whereas
+ * consumerHead points to a valid byte of data (unless producerTail ==
+ * consumerHead in which case consumerHead does not point to a valid
+ * byte of data).
+ *
+ * For a queue of buffer 'size' bytes, the tail and head pointers will be in
+ * the range [0, size-1].
+ *
+ * If produceQHeader->producerTail == consumeQHeader->consumerHead
+ * then the produceQ is empty.
+ */
+struct vmci_queue_header {
+	/* All fields are 64bit and aligned. */
+	struct vmci_handle handle;	/* Identifier. */
+	atomic64_t producerTail;	/* Offset in this queue. */
+	atomic64_t consumerHead;	/* Offset in peer queue. */
+};
+
+/**
+ * struct vmci_dg - Base struct for vmci datagrams.
+ * @dst:	A vmci_handle that tracks the destination of the datagram.
+ * @src:	A vmci_handle that tracks the source of the datagram.
+ * @payloadSize:	The size of the payload.
+ *
+ * vmci_dg structs are used when sending vmci datagrams.  They include
+ * the necessary source and destination information to properly route
+ * the information along with the size of the package.
+ */
+struct vmci_dg {
+	struct vmci_handle dst;
+	struct vmci_handle src;
+	uint64_t payloadSize;
+};
+
+/*
+ * VMCI_FLAG_WELLKNOWN_DG_HND creates a well-known handle instead of a
+ * per context handle.  VMCI_FLAG_DG_DELAYED_CB defers datagram delivery,
+ * so that the datagram callback is invoked in a delayed context (not
+ * interrupt context).
+ */
+#define VMCI_FLAG_DG_NONE          0
+#define VMCI_FLAG_WELLKNOWN_DG_HND 0x1
+#define VMCI_FLAG_ANYCID_DG_HND    0x2
+#define VMCI_FLAG_DG_DELAYED_CB    0x4
+
+/* Event callback should fire in a delayed context (not interrupt context.) */
+#define VMCI_FLAG_EVENT_NONE       0
+#define VMCI_FLAG_EVENT_DELAYED_CB 0x1
+
+/*
+ * Maximum supported size of a VMCI datagram for routable datagrams.
+ * Datagrams going to the hypervisor are allowed to be larger.
+ */
+#define VMCI_MAX_DG_SIZE (17 * 4096)
+#define VMCI_MAX_DG_PAYLOAD_SIZE (VMCI_MAX_DG_SIZE - sizeof(struct vmci_dg))
+#define VMCI_DG_PAYLOAD(_dg) (void *)((char *)(_dg) + sizeof(struct vmci_dg))
+#define VMCI_DG_HEADERSIZE sizeof(struct vmci_dg)
+#define VMCI_DG_SIZE(_dg) (VMCI_DG_HEADERSIZE + (size_t)(_dg)->payloadSize)
+#define VMCI_DG_SIZE_ALIGNED(_dg) ((VMCI_DG_SIZE(_dg) + 7) & (~((size_t) 0x7)))
+#define VMCI_MAX_DATAGRAM_QUEUE_SIZE (VMCI_MAX_DG_SIZE * 2)
+
+/* Flags for VMCI QueuePair API. */
+enum {
+	/* Fail alloc if QP not created by peer. */
+	VMCI_QPFLAG_ATTACH_ONLY = 1 << 0,
+
+	/* Only allow attaches from local context. */
+	VMCI_QPFLAG_LOCAL = 1 << 1,
+
+	/* Host won't block when guest is quiesced. */
+	VMCI_QPFLAG_NONBLOCK = 1 << 2,
+
+	/* Pin data pages in ESX.  Used with NONBLOCK */
+	VMCI_QPFLAG_PINNED = 1 << 3,
+
+	/* Update the following flag when adding new flags. */
+	VMCI_QP_ALL_FLAGS = (VMCI_QPFLAG_ATTACH_ONLY | VMCI_QPFLAG_LOCAL |
+			     VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED),
+
+	/* Convenience flags */
+	VMCI_QP_ASYMM = (VMCI_QPFLAG_NONBLOCK | VMCI_QPFLAG_PINNED),
+	VMCI_QP_ASYMM_PEER = (VMCI_QPFLAG_ATTACH_ONLY | VMCI_QP_ASYMM),
+};
+
+/*
+ * We allow at least 1024 more event datagrams from the hypervisor past the
+ * normally allowed datagrams pending for a given context.  We define this
+ * limit on event datagrams from the hypervisor to guard against DoS attack
+ * from a malicious VM which could repeatedly attach to and detach from a queue
+ * pair, causing events to be queued at the destination VM.  However, the rate
+ * at which such events can be generated is small since it requires a VM exit
+ * and handling of queue pair attach/detach call at the hypervisor.  Event
+ * datagrams may be queued up at the destination VM if it has interrupts
+ * disabled or if it is not draining events for some other reason.  1024
+ * datagrams is a grossly conservative estimate of the time for which
+ * interrupts may be disabled in the destination VM, but at the same time does
+ * not exacerbate the memory pressure problem on the host by much (size of each
+ * event datagram is small).
+ */
+#define VMCI_MAX_DATAGRAM_AND_EVENT_QUEUE_SIZE				\
+	(VMCI_MAX_DATAGRAM_QUEUE_SIZE +					\
+	 1024 * (sizeof(struct vmci_dg) + sizeof(struct vmci_event_data_max)))
+
+/*
+ * Struct used for querying, via VMCI_RESOURCES_QUERY, the availability of
+ * hypervisor resources.  Struct size is 32 bytes. All fields in struct are
+ * aligned to their natural alignment.
+ */
+struct vmci_resource_query_hdr {
+	struct vmci_dg hdr;
+	uint32_t numResources;
+	uint32_t _padding;
+};
+
+/*
+ * Convenience struct for negotiating vectors. Must match layout of
+ * struct vmci_resource_query_hdr minus the struct vmci_dg header.
+ */
+struct vmci_resource_query_msg {
+	uint32_t numResources;
+	uint32_t _padding;
+	uint32_t resources[1];
+};
+
+/*
+ * The maximum number of resources that can be queried using
+ * VMCI_RESOURCE_QUERY is 31, as the result is encoded in the lower 31
+ * bits of a positive return value. Negative values are reserved for
+ * errors.
+ */
+#define VMCI_RESOURCE_QUERY_MAX_NUM 31
+
+/* Maximum size for the VMCI_RESOURCE_QUERY request. */
+#define VMCI_RESOURCE_QUERY_MAX_SIZE				\
+	(sizeof(struct vmci_resource_query_hdr) +		\
+	 sizeof(uint32_t) * VMCI_RESOURCE_QUERY_MAX_NUM)
+
+/*
+ * Struct used for setting the notification bitmap.  All fields in
+ * struct are aligned to their natural alignment.
+ */
+struct vmci_notify_bm_set_msg {
+	struct vmci_dg hdr;
+	uint32_t bitmapPPN;
+	uint32_t _pad;
+};
+
+/*
+ * Struct used for linking a doorbell handle with an index in the
+ * notify bitmap. All fields in struct are aligned to their natural
+ * alignment.
+ */
+struct vmci_doorbell_link_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+	uint64_t notifyIdx;
+};
+
+/*
+ * Struct used for unlinking a doorbell handle from an index in the
+ * notify bitmap. All fields in struct are aligned to their natural
+ * alignment.
+ */
+struct vmci_doorbell_unlink_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/*
+ * Struct used for generating a notification on a doorbell handle. All
+ * fields in struct are aligned to their natural alignment.
+ */
+struct vmci_doorbell_ntfy_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/*
+ * This struct is used to contain data for events.  Size of this struct is a
+ * multiple of 8 bytes, and all fields are aligned to their natural alignment.
+ */
+struct vmci_event_data {
+	uint32_t event;		/* 4 bytes. */
+	uint32_t _pad;
+	/* Event payload is put here. */
+};
+
+
+/*
+ * Define the different VMCI_EVENT payload data types here.  All structs must
+ * be a multiple of 8 bytes, and fields must be aligned to their natural
+ * alignment.
+ */
+struct vmci_event_payld_ctx {
+	uint32_t contextID;	/* 4 bytes. */
+	uint32_t _pad;
+};
+
+struct vmci_event_payld_qp {
+	struct vmci_handle handle;	/* QueuePair handle. */
+	uint32_t peerId;	/* Context id of attaching/detaching VM. */
+	uint32_t _pad;
+};
+
+/*
+ * We define the following struct to get the size of the maximum event
+ * data the hypervisor may send to the guest.  If adding a new event
+ * payload type above, add it to the following struct too (inside the
+ * union).
+ */
+struct vmci_event_data_max {
+	struct vmci_event_data eventData;
+	union {
+		struct vmci_event_payld_ctx contextPayload;
+		struct vmci_event_payld_qp qpPayload;
+	} evDataPayload;
+};
+
+/*
+ * Struct used for VMCI_EVENT_SUBSCRIBE/UNSUBSCRIBE and
+ * VMCI_EVENT_HANDLER messages.  Struct size is 32 bytes.  All fields
+ * in struct are aligned to their natural alignment.
+ */
+struct vmci_event_msg {
+	struct vmci_dg hdr;
+
+	/* Has event type and payload. */
+	struct vmci_event_data eventData;
+
+	/* Payload gets put here. */
+};
+
+/*
+ * Structs used for QueuePair alloc and detach messages.  We align fields of
+ * these structs to 64bit boundaries.
+ */
+struct vmci_qp_alloc_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+	uint32_t peer;
+	uint32_t flags;
+	uint64_t produceSize;
+	uint64_t consumeSize;
+	uint64_t numPPNs;
+
+	/* List of PPNs placed here. */
+};
+
+struct vmci_qp_detach_msg {
+	struct vmci_dg hdr;
+	struct vmci_handle handle;
+};
+
+/* VMCI Doorbell API. */
+#define VMCI_FLAG_DELAYED_CB 0x01
+
+typedef void (*VMCICallback) (void *clientData);
+
+/**
+ * struct vmci_qp - A vmw_vmci queue pair handle.
+ *
+ * This structure is used as a handle to a queue pair created by
+ * VMCI.  It is intentionally left opaque to clients.
+ */
+struct vmci_qp;
+
+/* Callback needed for correctly waiting on events. */
+typedef int (*VMCIDatagramRecvCB) (void *clientData,
+				   struct vmci_dg *msg);
+
+/* VMCI Event API. */
+typedef void (*VMCI_EventCB) (uint32_t subID, struct vmci_event_data *ed,
+			      void *clientData);
+
+/*
+ * We use the following inline function to access the payload data
+ * associated with an event data.
+ */
+static inline void *vmci_event_data_payload(struct vmci_event_data *evData)
+{
+	return (void *)((char *)evData + sizeof *evData);
+}
+
+/*
+ * Helper to add a given offset to a head or tail pointer. Wraps the
+ * value of the pointer around the max size of the queue.
+ */
+static inline void vmci_qp_add_pointer(atomic64_t *var,
+				       size_t add,
+				       uint64_t size)
+{
+	uint64_t newVal = atomic64_read(var);
+
+	if (newVal >= size - add)
+		newVal -= size;
+
+	newVal += add;
+
+	atomic64_set(var, newVal);
+}
+
+/*
+ * Helper routine to get the Producer Tail from the supplied queue.
+ */
+static inline uint64_t
+vmci_q_header_producer_tail(const struct vmci_queue_header *qHeader)
+{
+	struct vmci_queue_header *qh = (struct vmci_queue_header *)qHeader;
+	return atomic64_read(&qh->producerTail);
+}
+
+/*
+ * Helper routine to get the Consumer Head from the supplied queue.
+ */
+static inline uint64_t
+vmci_q_header_consumer_head(const struct vmci_queue_header *qHeader)
+{
+	struct vmci_queue_header *qh = (struct vmci_queue_header *)qHeader;
+	return atomic64_read(&qh->consumerHead);
+}
+
+/*
+ * Helper routine to increment the Producer Tail.  Fundamentally,
+ * vmci_qp_add_pointer() is used to manipulate the tail itself.
+ */
+static inline void
+vmci_q_header_add_producer_tail(struct vmci_queue_header *qHeader,
+				size_t add,
+				uint64_t queueSize)
+{
+	vmci_qp_add_pointer(&qHeader->producerTail, add, queueSize);
+}
+
+/*
+ * Helper routine to increment the Consumer Head.  Fundamentally,
+ * vmci_qp_add_pointer() is used to manipulate the head itself.
+ */
+static inline void
+vmci_q_header_add_consumer_head(struct vmci_queue_header *qHeader,
+				size_t add,
+				uint64_t queueSize)
+{
+	vmci_qp_add_pointer(&qHeader->consumerHead, add, queueSize);
+}
+
+/*
+ * Helper routine for getting the head and the tail pointer for a queue.
+ * Both queue headers are needed to get both pointers for one queue.
+ */
+static inline void
+vmci_q_header_get_pointers(const struct vmci_queue_header *produceQHeader,
+			   const struct vmci_queue_header *consumeQHeader,
+			   uint64_t *producerTail,
+			   uint64_t *consumerHead)
+{
+	if (producerTail)
+		*producerTail = vmci_q_header_producer_tail(produceQHeader);
+
+	if (consumerHead)
+		*consumerHead = vmci_q_header_consumer_head(consumeQHeader);
+}
+
+static inline void vmci_q_header_init(struct vmci_queue_header *qHeader,
+				      const struct vmci_handle handle)
+{
+	qHeader->handle = handle;
+	atomic64_set(&qHeader->producerTail, 0);
+	atomic64_set(&qHeader->consumerHead, 0);
+}
+
+/*
+ * Finds available free space in a produce queue to enqueue more
+ * data or reports an error if queue pair corruption is detected.
+ */
+static inline int64_t
+vmci_q_header_free_space(const struct vmci_queue_header *produceQHeader,
+			 const struct vmci_queue_header *consumeQHeader,
+			 const uint64_t produceQSize)
+{
+	uint64_t tail;
+	uint64_t head;
+	uint64_t freeSpace;
+
+	tail = vmci_q_header_producer_tail(produceQHeader);
+	head = vmci_q_header_consumer_head(consumeQHeader);
+
+	if (tail >= produceQSize || head >= produceQSize)
+		return VMCI_ERROR_INVALID_SIZE;
+
+	/*
+	 * Deduct 1 to avoid tail becoming equal to head which causes
+	 * ambiguity. If head and tail are equal it means that the
+	 * queue is empty.
+	 */
+	if (tail >= head)
+		freeSpace = produceQSize - (tail - head) - 1;
+	else
+		freeSpace = head - tail - 1;
+
+	return freeSpace;
+}
+
+/*
+ * vmci_q_header_free_space() does all the heavy lifting of
+ * determining the number of free bytes in a Queue.  This routine
+ * then subtracts that size from the full size of the Queue so
+ * the caller knows how many bytes are ready to be dequeued.
+ * Results:
+ * On success, available data size in bytes (up to MAX_INT64).
+ * On failure, appropriate error code.
+ */
+static inline int64_t
+vmci_q_header_buf_ready(const struct vmci_queue_header *consumeQHeader,
+			const struct vmci_queue_header *produceQHeader,
+			const uint64_t consumeQSize)
+{
+	int64_t freeSpace;
+
+	freeSpace = vmci_q_header_free_space(consumeQHeader,
+					     produceQHeader, consumeQSize);
+	if (freeSpace < VMCI_SUCCESS)
+		return freeSpace;
+
+	return consumeQSize - freeSpace - 1;
+}
+
+static inline struct vmci_handle vmci_make_handle(uint32_t cid, uint32_t rid)
+{
+	struct vmci_handle h;
+
+	h.context = cid;
+	h.resource = rid;
+
+	return h;
+}
+
+#endif /* _VMW_VMCI_DEF_H_ */
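
A quick sanity check of the queue header arithmetic above, as a worked
example (values picked purely for illustration; this is not part of the
patch):

static void vmci_example_queue_accounting(void)
{
	struct vmci_queue_header produce_q, consume_q;
	int64_t free_bytes, ready_bytes;

	vmci_q_header_init(&produce_q, VMCI_INVALID_HANDLE);
	vmci_q_header_init(&consume_q, VMCI_INVALID_HANDLE);

	/* The producer enqueues 60 bytes into a 4096 byte produce queue. */
	vmci_q_header_add_producer_tail(&produce_q, 60, 4096);

	/* Producer's view: 4096 - 60 - 1 == 4035 bytes still free. */
	free_bytes = vmci_q_header_free_space(&produce_q, &consume_q, 4096);

	/* Consumer's view: 4096 - 4035 - 1 == 60 bytes ready to dequeue. */
	ready_bytes = vmci_q_header_buf_ready(&produce_q, &consume_q, 4096);

	(void) free_bytes;	/* quiet unused warnings in this sketch */
	(void) ready_bytes;
}
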
-- 
1.7.0.4

^ permalink raw reply related	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 00/11] VMCI for Linux
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:47   ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-26 23:47 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 04:39:29PM -0700, Andrew Stiegmann (stieg) wrote:
> In an effort to improve the out-of-the-box experience with Linux
> kernels for VMware users, VMware is working on readying the Virtual
> Machine Communication Interface (vmw_vmci) and VMCI Sockets
> (vmw_vsock) kernel modules for inclusion in the Linux kernel. The
> purpose of this post is to acquire feedback on the vmw_vmci kernel
> module. The vmw_vsock kernel module will be presented in a later post.

Ugh, you do realize this is the middle of the merge window when we are
all busy doing other things than code review of new stuff, right?  It's
going to be a few weeks before I can look at this, sorry.

good luck,

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 01/11] Apply VMCI context code
  2012-07-26 23:39   ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:48     ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-26 23:48 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 04:39:30PM -0700, Andrew Stiegmann (stieg) wrote:
> Context code maintains state for vmci and allows the driver
> to communicate with multiple VMs.
> 
> Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>

One tiny nit:

> diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
> new file mode 100644
> index 0000000..46faf10
> --- /dev/null
> +++ b/drivers/misc/vmw_vmci/vmci_context.c
> @@ -0,0 +1,1269 @@
> +/*
> + * VMware VMCI Driver
> + *
> + * Copyright (C) 2012 VMware, Inc. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License as published by the
> + * Free Software Foundation version 2 and no later version.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
> + * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
> + * for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA

Are you prepared to track the office movements of the FSF for the next
40 years to keep this up to date?  If not, please don't include it, it's
not needed at all.

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 07/11] Apply VMCI hash table
  2012-07-26 23:39   ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:49     ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-26 23:49 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 04:39:36PM -0700, Andrew Stiegmann (stieg) wrote:
> Implements a hash table for VMCI's use.

What's wrong with the in-kernel hash table(s)?


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-26 23:39   ` Andrew Stiegmann (stieg)
@ 2012-07-26 23:56     ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-26 23:56 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 04:39:40PM -0700, Andrew Stiegmann (stieg) wrote:
> +#define ASSERT(cond) BUG_ON(!(cond))

Don't do that, you just crashed someone's box and now they have no way
to recover it and tell you that you broke it.
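
A softer sketch, if an assertion macro is wanted at all, would be along
the lines of:

	#define ASSERT(cond) WARN_ON(!(cond))	/* warn and keep running instead of BUG */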

> +#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
> +#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
> +
> +#define PCI_VENDOR_ID_VMWARE	0x15AD

What's wrong with the one in pci_ids.h?

> +#define PCI_DEVICE_ID_VMWARE_VMCI	0x0740
> +#define VMCI_DRIVER_VERSION_STRING	"9.5.5.0-k"

Do you really need this?

> +#define MODULE_NAME "vmw_vmci"

The kernel provides this for you already, don't duplicate it.

> +
> +/* Print magic... whee! */
> +#ifdef pr_fmt
> +#undef pr_fmt

No need for these 2 lines

> +#define pr_fmt(fmt) MODULE_NAME ": " fmt
> +#endif

Or this one.

> +/*
> + * Linux defines _IO* macros, but the core kernel code ignores the encoded
> + * ioctl value. It is up to individual drivers to decode the value (for
> + * example to look at the size of a structure to determine which version
> + * of a specific command should be used) or not (which is what we
> + * currently do, so right now the ioctl value for a given command is the
> + * command itself).
> + *
> + * Hence, we just define the IOCTL_VMCI_foo values directly, with no
> + * intermediate IOCTLCMD_ representation.
> + */
> +#  define IOCTLCMD(_cmd) IOCTL_VMCI_ ## _cmd

Are you sure about this comment?


> +
> +enum {
> +	/*
> +	 * We need to bracket the range of values used for ioctls,
> +	 * because x86_64 Linux forces us to explicitly register ioctl
> +	 * handlers by value for handling 32 bit ioctl syscalls.
> +	 * Hence FIRST and LAST.  Pick something for FIRST that
> +	 * doesn't collide with vmmon (2001+).
> +	 */
> +	IOCTLCMD(FIRST) = 1951,
> +	IOCTLCMD(VERSION) = IOCTLCMD(FIRST),
> +
> +	/* BEGIN VMCI */
> +	IOCTLCMD(INIT_CONTEXT),
> +
> +	/*
> +	 * The following two were used for process and datagram
> +	 * process creation.  They are not used anymore and reserved
> +	 * for future use.  They will fail if issued.
> +	 */
> +	IOCTLCMD(RESERVED1),
> +	IOCTLCMD(RESERVED2),
> +
> +	/*
> +	 * The following used to be for shared memory. It is now
> +	 * unused and is reserved for future use. It will fail if
> +	 * issued.
> +	 */
> +	IOCTLCMD(RESERVED3),
> +
> +	/*
> +	 * The following three also used to be for shared
> +	 * memory. An old WS6 user-mode client might try to use them
> +	 * with the new driver, but since we ensure that only contexts
> +	 * created by VMX'en of the appropriate version
> +	 * (VMCI_VERSION_NOTIFY or VMCI_VERSION_NEWQP) or higher use
> +	 * these ioctls, everything is fine.
> +	 */
> +	IOCTLCMD(QUEUEPAIR_SETVA),
> +	IOCTLCMD(NOTIFY_RESOURCE),
> +	IOCTLCMD(NOTIFICATIONS_RECEIVE),
> +	IOCTLCMD(VERSION2),
> +	IOCTLCMD(QUEUEPAIR_ALLOC),
> +	IOCTLCMD(QUEUEPAIR_SETPAGEFILE),
> +	IOCTLCMD(QUEUEPAIR_DETACH),
> +	IOCTLCMD(DATAGRAM_SEND),
> +	IOCTLCMD(DATAGRAM_RECEIVE),
> +	IOCTLCMD(DATAGRAM_REQUEST_MAP),
> +	IOCTLCMD(DATAGRAM_REMOVE_MAP),
> +	IOCTLCMD(CTX_ADD_NOTIFICATION),
> +	IOCTLCMD(CTX_REMOVE_NOTIFICATION),
> +	IOCTLCMD(CTX_GET_CPT_STATE),
> +	IOCTLCMD(CTX_SET_CPT_STATE),
> +	IOCTLCMD(GET_CONTEXT_ID),
> +	IOCTLCMD(LAST),
> +	/* END VMCI */
> +
> +	/*
> +	 * VMCI Socket IOCTLS are defined next and go from
> +	 * IOCTLCMD(LAST) (1972) to 1990.  VMware reserves a range of
> +	 * 4 ioctls for VMCI Sockets to grow.  We cannot reserve many
> +	 * ioctls here since we are close to overlapping with vmmon
> +	 * ioctls (2001+).  Define a meta-ioctl if running out of this
> +	 * binary space.
> +	 */
> +	IOCTLCMD(SOCKETS_LAST) = 1994,	/* 1994 on Linux. */
> +
> +	/*
> +	 * The VSockets ioctls occupy the block above.  We define a
> +	 * new range of VMCI ioctls to maintain binary compatibility
> +	 * between the user land and the kernel driver.  Careful,
> +	 * vmmon ioctls start from 2001, so this means we can add only
> +	 * 4 new VMCI ioctls.  Define a meta-ioctl if running out of
> +	 * this binary space.
> +	 */
> +	IOCTLCMD(FIRST2),
> +	IOCTLCMD(SET_NOTIFY) = IOCTLCMD(FIRST2),	/* 1995 on Linux. */
> +	IOCTLCMD(LAST2),
> +};

That's a lot of ioctls.  Why not just create a new system call, or many
system calls, instead?

> +/*
> + * This struct is used to contain data for events.  Size of this struct is a
> + * multiple of 8 bytes, and all fields are aligned to their natural alignment.
> + */
> +struct vmci_event_data {
> +	uint32_t event;		/* 4 bytes. */
> +	uint32_t _pad;
> +	/* Event payload is put here. */
> +};

Why not put an empty array so you can get to the data easier instead of
having to do looney inline functions like this:

> +/*
> + * We use the following inline function to access the payload data
> + * associated with an event data.
> + */
> +static inline void *vmci_event_data_payload(struct vmci_event_data *evData)
> +{
> +	return (void *)((char *)evData + sizeof *evData);
> +}
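
For example, roughly (a sketch only; the member name below is made up):

	struct vmci_event_data {
		uint32_t event;		/* 4 bytes. */
		uint32_t _pad;
		uint8_t payload[];	/* flexible array member ("empty array") */
	};

and then callers can reference ed->payload directly.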

Same goes for other structures that you do this same thing.

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 01/11] Apply VMCI context code
  2012-07-26 23:48     ` Greg KH
@ 2012-07-27  0:01       ` Andrew Stiegmann
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann @ 2012-07-27  0:01 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

Removed.  Thanks.

----- Original Message -----
> From: "Greg KH" <gregkh@linuxfoundation.org>
> To: "Andrew Stiegmann (stieg)" <astiegmann@vmware.com>
> Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, pv-drivers@vmware.com,
> vm-crosstalk@vmware.com, cschamp@vmware.com
> Sent: Thursday, July 26, 2012 4:48:50 PM
> Subject: Re: [vmw_vmci 01/11] Apply VMCI context code
> 
> On Thu, Jul 26, 2012 at 04:39:30PM -0700, Andrew Stiegmann (stieg)
> wrote:
> > Context code maintains state for vmci and allows the driver
> > to communicate with multiple VMs.
> > 
> > Signed-off-by: Andrew Stiegmann (stieg) <astiegmann@vmware.com>
> 
> One tiny nit:
> 
> > diff --git a/drivers/misc/vmw_vmci/vmci_context.c
> > b/drivers/misc/vmw_vmci/vmci_context.c
> > new file mode 100644
> > index 0000000..46faf10
> > --- /dev/null
> > +++ b/drivers/misc/vmw_vmci/vmci_context.c
> > @@ -0,0 +1,1269 @@
> > +/*
> > + * VMware VMCI Driver
> > + *
> > + * Copyright (C) 2012 VMware, Inc. All rights reserved.
> > + *
> > + * This program is free software; you can redistribute it and/or
> > modify it
> > + * under the terms of the GNU General Public License as published
> > by the
> > + * Free Software Foundation version 2 and no later version.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > but
> > + * WITHOUT ANY WARRANTY; without even the implied warranty of
> > MERCHANTABILITY
> > + * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General
> > Public License
> > + * for more details.
> > + *
> > + * You should have received a copy of the GNU General Public
> > License along
> > + * with this program; if not, write to the Free Software
> > Foundation, Inc.,
> > + * 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
> 
> Are you prepared to track the office movements of the FSF for the
> next
> 40 years to keep this up to date?  If not, please don't include it,
> it's
> not needed at all.
> 
> greg k-h
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 07/11] Apply VMCI hash table
  2012-07-26 23:49     ` Greg KH
@ 2012-07-27  0:01       ` Andrew Stiegmann
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann @ 2012-07-27  0:01 UTC (permalink / raw)
  To: Greg KH; +Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

Must have missed it.  Will look into it.

----- Original Message -----
> From: "Greg KH" <gregkh@linuxfoundation.org>
> To: "Andrew Stiegmann (stieg)" <astiegmann@vmware.com>
> Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, pv-drivers@vmware.com,
> vm-crosstalk@vmware.com, cschamp@vmware.com
> Sent: Thursday, July 26, 2012 4:49:54 PM
> Subject: Re: [vmw_vmci 07/11] Apply VMCI hash table
> 
> On Thu, Jul 26, 2012 at 04:39:36PM -0700, Andrew Stiegmann (stieg)
> wrote:
> > Implements a hash table for VMCI's use.
> 
> What's wrong with the in-kernel hash table(s)?
> 
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 00/11] VMCI for Linux
  2012-07-26 23:39 ` Andrew Stiegmann (stieg)
                   ` (13 preceding siblings ...)
  (?)
@ 2012-07-27  1:06 ` Josh Boyer
  2012-07-27  1:46     ` Greg KH
  -1 siblings, 1 reply; 72+ messages in thread
From: Josh Boyer @ 2012-07-27  1:06 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh

On Thu, Jul 26, 2012 at 7:39 PM, Andrew Stiegmann (stieg)
<astiegmann@vmware.com> wrote:
>  drivers/misc/Kconfig                      |    1 +
>  drivers/misc/Makefile                     |    1 +
>  drivers/misc/vmw_vmci/Kconfig             |   16 +

Is there a reason this isn't going into staging first?  The Hyper-V
drivers went through staging and that actually seemed to work fairly
well.

josh

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 00/11] VMCI for Linux
  2012-07-27  1:06 ` Josh Boyer
@ 2012-07-27  1:46     ` Greg KH
  0 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-27  1:46 UTC (permalink / raw)
  To: Josh Boyer
  Cc: Andrew Stiegmann (stieg),
	linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 09:06:25PM -0400, Josh Boyer wrote:
> On Thu, Jul 26, 2012 at 7:39 PM, Andrew Stiegmann (stieg)
> <astiegmann@vmware.com> wrote:
> >  drivers/misc/Kconfig                      |    1 +
> >  drivers/misc/Makefile                     |    1 +
> >  drivers/misc/vmw_vmci/Kconfig             |   16 +
> 
> Is there a reason this isn't going into staging first?  The Hyper-V
> drivers went through staging and that actually seemed to work fairly
> well.

Is there some reason you feel this should be in the staging tree now?
Why?

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-26 23:39   ` Andrew Stiegmann (stieg)
@ 2012-07-27  9:53     ` Alan Cox
  -1 siblings, 0 replies; 72+ messages in thread
From: Alan Cox @ 2012-07-27  9:53 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh

> +enum {
> +	VMCI_SUCCESS_QUEUEPAIR_ATTACH	=  5,
> +	VMCI_SUCCESS_QUEUEPAIR_CREATE	=  4,
> +	VMCI_SUCCESS_LAST_DETACH	=  3,
> +	VMCI_SUCCESS_ACCESS_GRANTED	=  2,
> +	VMCI_SUCCESS_ENTRY_DEAD	=  1,

We've got a nicer collection of Linux error codes than you, and it would
make the driver enormously more readable on the Linux side if it started
using Linux error codes at as low a level as possible.
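
(For illustration only: following this suggestion would mean translating the
VMCI status codes into standard errno values at the lowest practical layer.
A minimal sketch of such a helper, assuming <linux/errno.h> and the status
codes quoted in this patch -- the function name and the exact mappings are
assumptions, not taken from the posted series:

static int vmci_status_to_errno(int vmci_status)
{
	switch (vmci_status) {
	case VMCI_SUCCESS:
		return 0;
	case VMCI_ERROR_INVALID_ARGS:
		return -EINVAL;		/* assumed mapping */
	case VMCI_ERROR_NO_MEM:
		return -ENOMEM;		/* assumed mapping */
	case VMCI_ERROR_NO_ACCESS:
		return -EACCES;		/* assumed mapping */
	case VMCI_ERROR_DST_UNREACHABLE:
		return -EHOSTUNREACH;	/* assumed mapping */
	case VMCI_ERROR_PAYLOAD_TOO_LARGE:
		return -EMSGSIZE;	/* assumed mapping */
	default:
		return -EINVAL;
	}
}

Callers in the Linux-specific glue could then return plain errno values to
the rest of the kernel while the shared core keeps its own codes.)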


> +	VMCI_SUCCESS			=  0,
> +	VMCI_ERROR_INVALID_RESOURCE	= (-1),
> +	VMCI_ERROR_INVALID_ARGS	= (-2),
> +	VMCI_ERROR_NO_MEM		= (-3),
> +	VMCI_ERROR_DATAGRAM_FAILED	= (-4),
> +	VMCI_ERROR_MORE_DATA		= (-5),
> +	VMCI_ERROR_NO_MORE_DATAGRAMS	= (-6),
> +	VMCI_ERROR_NO_ACCESS		= (-7),
> +	VMCI_ERROR_NO_HANDLE		= (-8),
> +	VMCI_ERROR_DUPLICATE_ENTRY	= (-9),
> +	VMCI_ERROR_DST_UNREACHABLE	= (-10),
> +	VMCI_ERROR_PAYLOAD_TOO_LARGE	= (-11),
> +	VMCI_ERROR_INVALID_PRIV	= (-12),

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-26 23:39   ` Andrew Stiegmann (stieg)
                     ` (3 preceding siblings ...)
  (?)
@ 2012-07-27 10:34   ` Sam Ravnborg
  2012-07-27 17:20       ` Andrew Stiegmann
                       ` (2 more replies)
  -1 siblings, 3 replies; 72+ messages in thread
From: Sam Ravnborg @ 2012-07-27 10:34 UTC (permalink / raw)
  To: Andrew Stiegmann (stieg)
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh

Hi Andrew.

A few things noted in the following..

> 
> diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
> index 2661f6e..fe38c7a 100644
> --- a/drivers/misc/Kconfig
> +++ b/drivers/misc/Kconfig
> @@ -517,4 +517,5 @@ source "drivers/misc/lis3lv02d/Kconfig"
>  source "drivers/misc/carma/Kconfig"
>  source "drivers/misc/altera-stapl/Kconfig"
>  source "drivers/misc/mei/Kconfig"
> +source "drivers/misc/vmw_vmci/Kconfig"
>  endmenu
> diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
> index 456972f..af9e413 100644
> --- a/drivers/misc/Makefile
> +++ b/drivers/misc/Makefile
> @@ -51,3 +51,4 @@ obj-y				+= carma/
>  obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
>  obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
>  obj-$(CONFIG_INTEL_MEI)		+= mei/
> +obj-y				+= vmw_vmci/

Please use obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci/

like we do in the other cases. This prevents us from visiting the directory
when this feature is not enabled.

> +++ b/drivers/misc/vmw_vmci/Makefile
> @@ -0,0 +1,43 @@
> +################################################################################
> +#
> +# Linux driver for VMware's VMCI device.
> +#
> +# Copyright (C) 2007-2012, VMware, Inc. All Rights Reserved.
> +#
> +# This program is free software; you can redistribute it and/or modify it
> +# under the terms of the GNU General Public License as published by the
> +# Free Software Foundation; version 2 of the License and no later version.
> +#
> +# This program is distributed in the hope that it will be useful, but
> +# WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
> +# NON INFRINGEMENT.  See the GNU General Public License for more
> +# details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
> +#
> +# The full GNU General Public License is included in this distribution in
> +# the file called "COPYING".
> +#
> +# Maintained by: Andrew Stiegmann <pv-drivers@vmware.com>
> +#
> +################################################################################
Lots of boilerplate noise for such a simple file...

> +
> +#
> +# Makefile for the VMware VMCI
> +#
> +
> +obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci.o
> +
> +vmw_vmci-objs += vmci_context.o
> +vmw_vmci-objs += vmci_datagram.o
> +vmw_vmci-objs += vmci_doorbell.o
> +vmw_vmci-objs += vmci_driver.o
> +vmw_vmci-objs += vmci_event.o
> +vmw_vmci-objs += vmci_handle_array.o
> +vmw_vmci-objs += vmci_hash_table.o
> +vmw_vmci-objs += vmci_queue_pair.o
> +vmw_vmci-objs += vmci_resource.o
> +vmw_vmci-objs += vmci_route.o

please use:
vmw_vmci-y += vmci_context.o
vmw_vmci-y += vmci_datagram.o
vmw_vmci-y += vmci_doorbell.o

This is recommended these days and allows you to enable/disable
single files later using a config option.



> diff --git a/drivers/misc/vmw_vmci/vmci_common_int.h b/drivers/misc/vmw_vmci/vmci_common_int.h
> +
> +#ifndef _VMCI_COMMONINT_H_
> +#define _VMCI_COMMONINT_H_
> +
> +#include <linux/printk.h>
> +#include <linux/vmw_vmci_defs.h>

Use reverse christmas tree ordering here:
longer include lines first, and sort alphabetically when
lines are of the same length.
This likely applies in many places.

> +#include "vmci_handle_array.h"
> +
> +#define ASSERT(cond) BUG_ON(!(cond))
> +
> +#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
> +#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)

Looks like poor obfuscation.
Use a static inline function if you need a helper for this.
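
(A rough sketch of the static inline form, reusing the flag names from the
quoted header; the helper names themselves are just one possible choice:

static inline bool vmci_can_block(u32 flags)
{
	/* Blocking is allowed unless the non-blocking flag is set. */
	return !(flags & VMCI_QPFLAG_NONBLOCK);
}

static inline bool vmci_qp_pinned(u32 flags)
{
	/* True when the queue pair pages are pinned. */
	return !!(flags & VMCI_QPFLAG_PINNED);
}

Unlike the macros, these only accept an unsigned 32-bit flags value, so a
stray pointer or struct argument is caught at compile time.)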

> +
> +/*
> + * Utility function that checks whether two entities are allowed
> + * to interact. If one of them is restricted, the other one must
> + * be trusted.
> + */
> +static inline bool vmci_deny_interaction(uint32_t partOne,
> +					 uint32_t partTwo)

The kernel types are u32 not uint32_t - these types belong in user-space.
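
(Applied to the quoted helper this is just a type swap; the lower-cased
parameter names are an extra assumption beyond the literal comment:

static inline bool vmci_deny_interaction(u32 part_one, u32 part_two)

with the body left unchanged.)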

> +++ b/include/linux/vmw_vmci_api.h
> +
> +#ifndef __VMW_VMCI_API_H__
> +#define __VMW_VMCI_API_H__
> +
> +#include <linux/vmw_vmci_defs.h>
> +
> +#undef  VMCI_KERNEL_API_VERSION
> +#define VMCI_KERNEL_API_VERSION_2 2
> +#define VMCI_KERNEL_API_VERSION   VMCI_KERNEL_API_VERSION_2
> +
> +typedef void (VMCI_DeviceShutdownFn) (void *deviceRegistration, void *userData);
> +
> +bool VMCI_DeviceGet(uint32_t *apiVersion,
> +		    VMCI_DeviceShutdownFn *deviceShutdownCB,
> +		    void *userData, void **deviceRegistration);

The kernel style is to use lower_case for everything.
So this would become:

    vmci_device_get()

This is obviously a very general comment and applies everywhere.
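
(For the quoted declarations that might end up roughly as below; the
lower-cased typedef and parameter names are assumptions, only
vmci_device_get() itself is spelled out above:

typedef void (vmci_device_shutdown_fn)(void *device_registration,
				       void *user_data);

bool vmci_device_get(u32 *api_version,
		     vmci_device_shutdown_fn *shutdown_cb,
		     void *user_data, void **device_registration);
)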

	Sam

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 10:34   ` Sam Ravnborg
@ 2012-07-27 17:20       ` Andrew Stiegmann
  2012-08-02 19:50     ` Jan Engelhardt
  2012-08-02 19:50     ` Jan Engelhardt
  2 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann @ 2012-07-27 17:20 UTC (permalink / raw)
  To: Sam Ravnborg
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh

Hi Sam,

----- Original Message -----
> From: "Sam Ravnborg" <sam@ravnborg.org>
> To: "Andrew Stiegmann (stieg)" <astiegmann@vmware.com>
> Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, pv-drivers@vmware.com,
> vm-crosstalk@vmware.com, cschamp@vmware.com, gregkh@linuxfoundation.org
> Sent: Friday, July 27, 2012 3:34:55 AM
> Subject: Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
> 
> Hi Andrew.
> 
> A few things noted in the following..
> 
> > 
> > diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
> > index 2661f6e..fe38c7a 100644
> > --- a/drivers/misc/Kconfig
> > +++ b/drivers/misc/Kconfig
> > @@ -517,4 +517,5 @@ source "drivers/misc/lis3lv02d/Kconfig"
> >  source "drivers/misc/carma/Kconfig"
> >  source "drivers/misc/altera-stapl/Kconfig"
> >  source "drivers/misc/mei/Kconfig"
> > +source "drivers/misc/vmw_vmci/Kconfig"
> >  endmenu
> > diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
> > index 456972f..af9e413 100644
> > --- a/drivers/misc/Makefile
> > +++ b/drivers/misc/Makefile
> > @@ -51,3 +51,4 @@ obj-y				+= carma/
> >  obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
> >  obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
> >  obj-$(CONFIG_INTEL_MEI)		+= mei/
> > +obj-y				+= vmw_vmci/
> 
> Please use obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci/
> 
> like we do in the other cases. This prevents us from visiting the
> directory
> when this feature is not enabled.

Ok.

> > +++ b/drivers/misc/vmw_vmci/Makefile
> > @@ -0,0 +1,43 @@
> > +################################################################################
> > +#
> > +# Linux driver for VMware's VMCI device.
> > +#
> > +# Copyright (C) 2007-2012, VMware, Inc. All Rights Reserved.
> > +#
> > +# This program is free software; you can redistribute it and/or
> > modify it
> > +# under the terms of the GNU General Public License as published
> > by the
> > +# Free Software Foundation; version 2 of the License and no later
> > version.
> > +#
> > +# This program is distributed in the hope that it will be useful,
> > but
> > +# WITHOUT ANY WARRANTY; without even the implied warranty of
> > +# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE
> > or
> > +# NON INFRINGEMENT.  See the GNU General Public License for more
> > +# details.
> > +#
> > +# You should have received a copy of the GNU General Public
> > License
> > +# along with this program; if not, write to the Free Software
> > +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
> > 02110-1301 USA.
> > +#
> > +# The full GNU General Public License is included in this
> > distribution in
> > +# the file called "COPYING".
> > +#
> > +# Maintained by: Andrew Stiegmann <pv-drivers@vmware.com>
> > +#
> > +################################################################################
> Lots of boilerplate noise for such a simple file...

I removed the section containing the FSF address, and the section below it as well, per Greg KH's request.

> > +
> > +#
> > +# Makefile for the VMware VMCI
> > +#
> > +
> > +obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci.o
> > +
> > +vmw_vmci-objs += vmci_context.o
> > +vmw_vmci-objs += vmci_datagram.o
> > +vmw_vmci-objs += vmci_doorbell.o
> > +vmw_vmci-objs += vmci_driver.o
> > +vmw_vmci-objs += vmci_event.o
> > +vmw_vmci-objs += vmci_handle_array.o
> > +vmw_vmci-objs += vmci_hash_table.o
> > +vmw_vmci-objs += vmci_queue_pair.o
> > +vmw_vmci-objs += vmci_resource.o
> > +vmw_vmci-objs += vmci_route.o
> 
> please use:
> vmw_vmci-y += vmci_context.o
> vmw_vmci-y += vmci_datagram.o
> vmw_vmci-y += vmci_doorbell.o
> 
> This is recommended these days and allows you to enable/disable
> single files later using a config option.

Ok.
 
> > diff --git a/drivers/misc/vmw_vmci/vmci_common_int.h
> > b/drivers/misc/vmw_vmci/vmci_common_int.h
> > +
> > +#ifndef _VMCI_COMMONINT_H_
> > +#define _VMCI_COMMONINT_H_
> > +
> > +#include <linux/printk.h>
> > +#include <linux/vmw_vmci_defs.h>
> 
> Use reverse christmas tree ordering here:
> longer include lines first, and sort alphabetically when
> lines are of the same length.
> This likely applies in many places.
> 
> > +#include "vmci_handle_array.h"
> > +
> > +#define ASSERT(cond) BUG_ON(!(cond))
> > +
> > +#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
> > +#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
> 
> Looks like poor obfuscation.
> Use a static inline function if you need a helper for this.

These definitions are intended more as a helper to make reading the code easier.  IMHO it's a lot easier to read

if (CAN_BLOCK(flags))

compared to 

if (!(flags & VMCI_QPFLAG_NONBLOCK))

Wouldn't you agree?  I'm not sure something this simple warrants a static inline function but I don't see any harm in converting it over to that.
 
> > +
> > +/*
> > + * Utility function that checks whether two entities are allowed
> > + * to interact. If one of them is restricted, the other one must
> > + * be trusted.
> > + */
> > +static inline bool vmci_deny_interaction(uint32_t partOne,
> > +					 uint32_t partTwo)
> 
> The kernel types are u32 not uint32_t - these types belong in
> user-space.

Ok.

> > +++ b/include/linux/vmw_vmci_api.h
> > +
> > +#ifndef __VMW_VMCI_API_H__
> > +#define __VMW_VMCI_API_H__
> > +
> > +#include <linux/vmw_vmci_defs.h>
> > +
> > +#undef  VMCI_KERNEL_API_VERSION
> > +#define VMCI_KERNEL_API_VERSION_2 2
> > +#define VMCI_KERNEL_API_VERSION   VMCI_KERNEL_API_VERSION_2
> > +
> > +typedef void (VMCI_DeviceShutdownFn) (void *deviceRegistration,
> > void *userData);
> > +
> > +bool VMCI_DeviceGet(uint32_t *apiVersion,
> > +		    VMCI_DeviceShutdownFn *deviceShutdownCB,
> > +		    void *userData, void **deviceRegistration);
> 
> The kernel style is to use lower_case for everything.
> So this would become:
> 
>     vmci_device_get()
> 
> This is obviously a very general comment and applies everywhere.

I wish I could lower case these symbols, but VMCI has already existed outside the mainline Linux tree for some time now and changing these exported symbols would mean that other drivers that depend on VMCI (vSock, vmhgfs) would need to change as well.  One thought that did come to mind was exporting both VMCI_Device_Get and vmci_device_get, but that would likely just confuse people.  So, in short, I have made function names lower case where possible, but exported symbols could not be changed.

> 	Sam
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [Pv-drivers] [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27  9:53     ` Alan Cox
@ 2012-07-27 18:04       ` Dmitry Torokhov
  -1 siblings, 0 replies; 72+ messages in thread
From: Dmitry Torokhov @ 2012-07-27 18:04 UTC (permalink / raw)
  To: Alan Cox
  Cc: Andrew Stiegmann (stieg),
	pv-drivers, gregkh, linux-kernel, virtualization, vm-crosstalk

Hi Alan,

On Fri, Jul 27, 2012 at 10:53:57AM +0100, Alan Cox wrote:
> > +enum {
> > +	VMCI_SUCCESS_QUEUEPAIR_ATTACH	=  5,
> > +	VMCI_SUCCESS_QUEUEPAIR_CREATE	=  4,
> > +	VMCI_SUCCESS_LAST_DETACH	=  3,
> > +	VMCI_SUCCESS_ACCESS_GRANTED	=  2,
> > +	VMCI_SUCCESS_ENTRY_DEAD	=  1,
> 
> > We've got a nicer collection of Linux error codes than you, and it would
> > make the driver enormously more readable on the Linux side if it started
> > using Linux error codes at as low a level as possible.

If VMCI were only used on Linux we'd definitely do that; however, the
VMCI core is shared among several operating systems (much like ACPI is)
and we'd like to limit divergences between them while conforming to the
kernel coding style as much as possible.

We'll make sure that we will not leak VMCI-specific errors to the
standard kernel APIs.

Thanks,
Dmitry

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 17:20       ` Andrew Stiegmann
@ 2012-07-27 18:16         ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-27 18:16 UTC (permalink / raw)
  To: Andrew Stiegmann
  Cc: Sam Ravnborg, linux-kernel, virtualization, pv-drivers,
	vm-crosstalk, cschamp

On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > The kernel style is to use lower_case for everything.
> > So this would become:
> > 
> >     vmci_device_get()
> > 
> > This is obviously a very general comment and applies everywhere.
> 
> I wish I could lower case these symbols but VMCI has already existed
> outside the mainline Linux tree for some time now and changing these
> exported symbols would mean that other drivers that depend on VMCI
> (vSock, vmhgfs) would need to change as well.   One thought that did
> come to mind was exporting both VMCI_Device_Get and vmci_device_get
> but that would likely just confuse people.  So in short I have made
> function names lower case where possible, but exported symbols could
> not be changed.

Not true at all.  You want those drivers to be merged as well, right?
So they will need to have their functions changed, and their code as
well.

Just wait until we get to the "change your functionality around"
requests, those will require those drivers to change.  Right now we are
at the "silly and obvious things you did wrong" stage of the review
process :)

So please fix these, and also, post these drivers as well, so we can see
how they interact with the core code.

Actually, if you are going to need lots of refactoring for these
drivers, and the core, I would recommend putting this all in the staging
tree, to allow that to happen over time.  That would ensure that your
users keep having working systems, and let you modify the interfaces
better and easier, than having to keep it all out-of-tree.

What do you think?

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 18:16         ` Greg KH
@ 2012-07-27 18:39           ` Andrew Stiegmann
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann @ 2012-07-27 18:39 UTC (permalink / raw)
  To: Greg KH
  Cc: Sam Ravnborg, linux-kernel, virtualization, pv-drivers,
	vm-crosstalk, cschamp



----- Original Message -----
> From: "Greg KH" <gregkh@linuxfoundation.org>
> To: "Andrew Stiegmann" <astiegmann@vmware.com>
> Cc: "Sam Ravnborg" <sam@ravnborg.org>, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
> pv-drivers@vmware.com, vm-crosstalk@vmware.com, cschamp@vmware.com
> Sent: Friday, July 27, 2012 11:16:39 AM
> Subject: Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
> 
> On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > > The kernel style is to use lower_case for everything.
> > > So this would become:
> > > 
> > >     vmci_device_get()
> > > 
> > > This is obviously a very general comment and applies everywhere.
> > 
> > I wish I could lower case these symbols but VMCI has already
> > existed
> > outside the mainline Linux tree for some time now and changing
> > these
> > exported symbols would mean that other drivers that depend on VMCI
> > (vSock, vmhgfs) would need to change as well.   One thought that
> > did
> > come to mind was exporting both VMCI_Device_Get and vmci_device_get
> > but that would likely just confuse people.  So in short I have made
> > function names lower case where possible, but exported symbols
> > could
> > not be changed.
> 
> Not true at all.  You want those drivers to be merged as well, right?
> So they will need to have their functions changed, and their code as
> well.

As previously mentioned, VMware is working on upstreaming our vSock driver (one of a few drivers that use vmw_vmci).  However, there are no plans to upstream the other drivers that depend on vmw_vmci.  Because of this, these symbols cannot change.

> Just wait until we get to the "change your functionality around"
> requests, those will require those drivers to change.  Right now we
> are
> at the "silly and obvious things you did wrong" stage of the review
> process :)
>
> So please fix these, and also, post these drivers as well, so we can
> see
> how they interact with the core code.
> 
> Actually, if you are going to need lots of refactoring for these
> drivers, and the core, I would recommend putting this all in the
> staging
> tree, to allow that to happen over time.  That would ensure that your
> users keep having working systems, and let you modify the interfaces
> better and easier, than having to keep it all out-of-tree.
> 
> What do you think?

We will discuss this internally and let you know.
 
> greg k-h
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 18:39           ` Andrew Stiegmann
@ 2012-07-27 18:52             ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-27 18:52 UTC (permalink / raw)
  To: Andrew Stiegmann
  Cc: Sam Ravnborg, linux-kernel, virtualization, pv-drivers,
	vm-crosstalk, cschamp

On Fri, Jul 27, 2012 at 11:39:23AM -0700, Andrew Stiegmann wrote:
> 
> 
> ----- Original Message -----
> > From: "Greg KH" <gregkh@linuxfoundation.org>
> > To: "Andrew Stiegmann" <astiegmann@vmware.com>
> > Cc: "Sam Ravnborg" <sam@ravnborg.org>, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
> > pv-drivers@vmware.com, vm-crosstalk@vmware.com, cschamp@vmware.com
> > Sent: Friday, July 27, 2012 11:16:39 AM
> > Subject: Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
> > 
> > On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > > > The kernel style is to use lower_case for everything.
> > > > So this would become:
> > > > 
> > > >     vmci_device_get()
> > > > 
> > > > This is obviously a very general comment and applies everywhere.
> > > 
> > > I wish I could lower case these symbols but VMCI has already
> > > existed
> > > outside the mainline Linux tree for some time now and changing
> > > these
> > > exported symbols would mean that other drivers that depend on VMCI
> > > (vSock, vmhgfs) would need to change as well.   One thought that
> > > did
> > > come to mind was exporting both VMCI_Device_Get and vmci_device_get
> > > but that would likely just confuse people.  So in short I have made
> > > function names lower case where possible, but exported symbols
> > > could
> > > not be changed.
> > 
> > Not true at all.  You want those drivers to be merged as well, right?
> > So they will need to have their functions changed, and their code as
> > well.
> 
> As previously mentioned VMware is working on upstreaming our vSock
> driver (one of a few drivers that uses vmw_vmci).

Great.

> However there are no plans to upstream the other drivers that depend
> on vmw_vmci.

Why not?  That seems quite short-sighted.

> Because of this these symbols can not change.

Then I would argue that we can not accept this code at all, because it
will change over time, both symbol names, and functionality (see my
previous comment about how that is going to have to change.)

sorry,

greg k-h

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 17:20       ` Andrew Stiegmann
@ 2012-07-27 19:53         ` Sam Ravnborg
  -1 siblings, 0 replies; 72+ messages in thread
From: Sam Ravnborg @ 2012-07-27 19:53 UTC (permalink / raw)
  To: Andrew Stiegmann
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh

> > > +
> > > +#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
> > > +#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
> > 
> > Looks like poor obfuscation.
> > Use a static inline function if you need a helper for this.
> 
> These definitions are intended more as a helper to make reading the code easier.  IMHO it's a lot easier to read
> 
> if (CAN_BLOCK(flags))
> 
> compared to 
> 
> if (!(flags & VMCI_QPFLAG_NONBLOCK))
> 
> Wouldn't you agree?  I'm not sure something this simple warrants a static inline
> function but I don't see any harm in converting it over to that.

I would put it the other way around. I cannot see that such simple stuff warrants a #define.
A static inline is (almost) always preferable to hiding code in a macro.

For one, you get better type-checks.
And the semantics are also much simpler. With a macro you can do so many silly things.

	Sam

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 19:53         ` Sam Ravnborg
@ 2012-07-27 20:07           ` Andrew Stiegmann
  -1 siblings, 0 replies; 72+ messages in thread
From: Andrew Stiegmann @ 2012-07-27 20:07 UTC (permalink / raw)
  To: Sam Ravnborg
  Cc: linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp, gregkh



----- Original Message -----
> From: "Sam Ravnborg" <sam@ravnborg.org>
> To: "Andrew Stiegmann" <astiegmann@vmware.com>
> Cc: linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, pv-drivers@vmware.com,
> vm-crosstalk@vmware.com, cschamp@vmware.com, gregkh@linuxfoundation.org
> Sent: Friday, July 27, 2012 12:53:20 PM
> Subject: Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
> 
> > > > +
> > > > +#define CAN_BLOCK(_f) (!((_f) & VMCI_QPFLAG_NONBLOCK))
> > > > +#define QP_PINNED(_f) ((_f) & VMCI_QPFLAG_PINNED)
> > > 
> > > Looks like poor obfuscation.
> > > Use a static inline function if you need a helper for this.
> > 
> > These definitions are intended more as a helper to make reading the
> > code easier.  IMHO it's a lot easier to read
> > 
> > if (CAN_BLOCK(flags))
> > 
> > compared to
> > 
> > if (!(flags & VMCI_QPFLAG_NONBLOCK))
> > 
> > Wouldn't you agree?  I'm not sure something this simple warrants a
> > static inline
> > function but I don't see any harm in converting it over to that.
> 
> I would put it the other way around. I cannot see that such simple
> stuff warrants a #define.
> A static inline is (almost) always preferable to hiding code in a
> macro.
> 
> For one, you get better type-checks.
> And the semantics are also much simpler. With a macro you can do so many
> silly things.

Fair enough.  I'll make them into static inline functions.

> 	Sam
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [Pv-drivers] [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 18:16         ` Greg KH
@ 2012-07-27 20:29           ` Dmitry Torokhov
  -1 siblings, 0 replies; 72+ messages in thread
From: Dmitry Torokhov @ 2012-07-27 20:29 UTC (permalink / raw)
  To: Greg KH
  Cc: Andrew Stiegmann, pv-drivers, linux-kernel, virtualization,
	vm-crosstalk, Sam Ravnborg

On Fri, Jul 27, 2012 at 11:16:39AM -0700, Greg KH wrote:
> On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > > The kernel style is to use lower_case for everything.
> > > So this would become:
> > > 
> > >     vmci_device_get()
> > > 
> > > This is obviously a very general comment and applies everywhere.
> > 
> > I wish I could lower case these symbols but VMCI has already existed
> > outside the mainline Linux tree for some time now and changing these
> > exported symbols would mean that other drivers that depend on VMCI
> > (vSock, vmhgfs) would need to change as well.   One thought that did
> > come to mind was exporting both VMCI_Device_Get and vmci_device_get
> > but that would likely just confuse people.  So in short I have made
> > function names lower case where possible, but exported symbols could
> > not be changed.
> 
> Not true at all.  You want those drivers to be merged as well, right?
> So they will need to have their functions changed, and their code as
> well.
> 
> Just wait until we get to the "change your functionality around"
> requests, those will require those drivers to change.  Right now we are
> at the "silly and obvious things you did wrong" stage of the review
> process :)
> 
> So please fix these, and also, post these drivers as well, so we can see
> how they interact with the core code.
> 
> Actually, if you are going to need lots of refactoring for these
> drivers, and the core, I would recommend putting this all in the staging
> tree, to allow that to happen over time.  That would ensure that your
> users keep having working systems, and let you modify the interfaces
> better and easier, than having to keep it all out-of-tree.
> 
> What do you think?

Actually I think that we'd prefer to keep this in a patch-based form, at
least for now, because the majority of our users get these drivers with
VMware Tools and will continue doing so until distributions start
enabling VMCI in their kernels, which they probably won't do until VMCI
moves from staging. We'd also have to constantly adjust the drivers that
we are not upstreaming at this time so that they keep working with the
rapidly changing version of VMCI in staging, which would just add work
for us.

So we'd like to get more feedback and have a chance to address issues
and then decide whether staying in staging makes sense or not.

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [Pv-drivers] [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 20:29           ` Dmitry Torokhov
@ 2012-07-28 19:55             ` Greg KH
  -1 siblings, 0 replies; 72+ messages in thread
From: Greg KH @ 2012-07-28 19:55 UTC (permalink / raw)
  To: Dmitry Torokhov
  Cc: Andrew Stiegmann, pv-drivers, linux-kernel, virtualization,
	vm-crosstalk, Sam Ravnborg

On Fri, Jul 27, 2012 at 01:29:27PM -0700, Dmitry Torokhov wrote:
> On Fri, Jul 27, 2012 at 11:16:39AM -0700, Greg KH wrote:
> > On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > > > The kernel style is to use lower_case for everything.
> > > > So this would become:
> > > > 
> > > >     vmci_device_get()
> > > > 
> > > > This is obviously a very general comment and applies everywhere.
> > > 
> > > I wish I could lower case these symbols but VMCI has already existed
> > > outside the mainline Linux tree for some time now and changing these
> > > exported symbols would mean that other drivers that depend on VMCI
> > > (vSock, vmhgfs) would need to change as well.   One thought that did
> > > come to mind was exporting both VMCI_Device_Get and vmci_device_get
> > > but that would likely just confuse people.  So in short I have made
> > > function names lower case where possible, but exported symbols could
> > > not be changed.
> > 
> > Not true at all.  You want those drivers to be merged as well, right?
> > So they will need to have their functions changed, and their code as
> > well.
> > 
> > Just wait until we get to the "change your functionality around"
> > requests, those will require those drivers to change.  Right now we are
> > at the "silly and obvious things you did wrong" stage of the review
> > process :)
> > 
> > So please fix these, and also, post these drivers as well, so we can see
> > how they interact with the core code.
> > 
> > Actually, if you are going to need lots of refactoring for these
> > drivers, and the core, I would recommend putting this all in the staging
> > tree, to allow that to happen over time.  That would ensure that your
> > users keep having working systems, and let you modify the interfaces
> > better and easier, than having to keep it all out-of-tree.
> > 
> > What do you think?
> 
> Actually I think that we'd prefer to keep this in a patch-based form, at
> least for now, because the majority of our users get these drivers with
> VMware Tools and will continue doing so until distributions start
> enabling VMCI in their kernels, which they probably won't do until VMCI
> moves from staging. We'd also have to constantly adjust the drivers that
> we are not upstreaming at this time so that they keep working with the
> rapidly changing version of VMCI in staging, which would just add work
> for us.

That wouldn't be an issue if you just include all of the drivers in the
tree at the same time, right?

Just like what the hyper-v developers did.

greg

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [Pv-drivers] [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-28 19:55             ` Greg KH
@ 2012-07-28 21:10               ` Dmitry Torokhov
  -1 siblings, 0 replies; 72+ messages in thread
From: Dmitry Torokhov @ 2012-07-28 21:10 UTC (permalink / raw)
  To: Greg KH
  Cc: Andrew Stiegmann, pv-drivers, linux-kernel, virtualization,
	vm-crosstalk, Sam Ravnborg

On Sat, Jul 28, 2012 at 12:55:35PM -0700, Greg KH wrote:
> On Fri, Jul 27, 2012 at 01:29:27PM -0700, Dmitry Torokhov wrote:
> > On Fri, Jul 27, 2012 at 11:16:39AM -0700, Greg KH wrote:
> > > On Fri, Jul 27, 2012 at 10:20:43AM -0700, Andrew Stiegmann wrote:
> > > > > The kernel style is to use lower_case for everything.
> > > > > So this would become:
> > > > > 
> > > > >     vmci_device_get()
> > > > > 
> > > > > This is obviously a very general comment and applies everywhere.
> > > > 
> > > > I wish I could lower case these symbols but VMCI has already existed
> > > > outside the mainline Linux tree for some time now and changing these
> > > > exported symbols would mean that other drivers that depend on VMCI
> > > > (vSock, vmhgfs) would need to change as well.   One thought that did
> > > > come to mind was exporting both VMCI_Device_Get and vmci_device_get
> > > > but that would likely just confuse people.  So in short I have made
> > > > function names lower case where possible, but exported symbols could
> > > > not be changed.
> > > 
> > > Not true at all.  You want those drivers to be merged as well, right?
> > > So they will need to have their functions changed, and their code as
> > > well.
> > > 
> > > Just wait until we get to the "change your functionality around"
> > > requests, those will require those drivers to change.  Right now we are
> > > at the "silly and obvious things you did wrong" stage of the review
> > > process :)
> > > 
> > > So please fix these, and also, post these drivers as well, so we can see
> > > how they interact with the core code.
> > > 
> > > Actually, if you are going to need lots of refactoring for these
> > > drivers, and the core, I would recommend putting this all in the staging
> > > tree, to allow that to happen over time.  That would ensure that your
> > > users keep having working systems, and let you modify the interfaces
> > > better and easier, than having to keep it all out-of-tree.
> > > 
> > > What do you think?
> > 
> > Actually I think that we'd prefer to keep this in a patch-based form, at
> > least for now, because the majority of our users get these drivers with
> > VMware Tools and will continue doing so until distributions start
> > enabling VMCI in their kernels, which they probably won't do until VMCI
> > moves from staging. We'd also have to constantly adjust the drivers that
> > we are not upstreaming at this time so that they keep working with the
> > rapidly changing version of VMCI in staging, which would just add work
> > for us.
> 
> That wouldn't be an issue if you just include all of the drivers in the
> tree at the same time, right?

Maybe it wouldn't; however, at this time we have not scheduled any
resources for upstreaming the vmhgfs driver. We do, however, seek feedback
on the vmci driver (and later vsock), for which we did schedule resources.

Thanks.

-- 
Dmitry

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 00/11] VMCI for Linux
  2012-07-27  1:46     ` Greg KH
  (?)
@ 2012-07-31 12:48     ` Josh Boyer
  -1 siblings, 0 replies; 72+ messages in thread
From: Josh Boyer @ 2012-07-31 12:48 UTC (permalink / raw)
  To: Greg KH
  Cc: Andrew Stiegmann (stieg),
	linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp

On Thu, Jul 26, 2012 at 9:46 PM, Greg KH <gregkh@linuxfoundation.org> wrote:
> On Thu, Jul 26, 2012 at 09:06:25PM -0400, Josh Boyer wrote:
>> On Thu, Jul 26, 2012 at 7:39 PM, Andrew Stiegmann (stieg)
>> <astiegmann@vmware.com> wrote:
>> >  drivers/misc/Kconfig                      |    1 +
>> >  drivers/misc/Makefile                     |    1 +
>> >  drivers/misc/vmw_vmci/Kconfig             |   16 +
>>
>> Is there a reason this isn't going into staging first?  The Hyper-V
>> drivers went through staging and that actually seemed to work fairly
>> well.
>
> Is there some reason you feel this should be in the staging tree now?
> Why?

Apologies for the delayed reply.  Was on vacation.

Mostly because this is only one of several drivers, one that the
other drivers depend on, and I don't see those posted at all.  I'm
guessing we'll want changes that will cause those unposted drivers to
break.  It just seems to make more sense to work on the API in staging
than to slam it into drivers/misc/.

josh

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-07-27 10:34   ` Sam Ravnborg
  2012-07-27 17:20       ` Andrew Stiegmann
  2012-08-02 19:50     ` Jan Engelhardt
@ 2012-08-02 19:50     ` Jan Engelhardt
  2012-08-02 20:22         ` Sam Ravnborg
  2 siblings, 1 reply; 72+ messages in thread
From: Jan Engelhardt @ 2012-08-02 19:50 UTC (permalink / raw)
  To: Sam Ravnborg
  Cc: Andrew Stiegmann (stieg),
	linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp,
	gregkh


On Friday 2012-07-27 12:34, Sam Ravnborg wrote:
>> +#ifndef _VMCI_COMMONINT_H_
>> +#define _VMCI_COMMONINT_H_
>> +
>> +#include <linux/printk.h>
>> +#include <linux/vmw_vmci_defs.h>
>
>Use an inverse christmas tree here.
>Longer include lines first, and sort alphabetically when
>lines are of the same length.

So that's where unreadable include lists come from.
A depth-first, lexicographically sorted order is a lot less hassle,
especially when it comes to merging patches that each
add one different include.

>> +/*
>> + * Utility function that checks whether two entities are allowed
>> + * to interact. If one of them is restricted, the other one must
>> + * be trusted.
>> + */
>> +static inline bool vmci_deny_interaction(uint32_t partOne,
>> +					 uint32_t partTwo)
>
>The kernel types are u32, not uint32_t - these types belong in user-space.

Not really. uint32_t is the C99 type for a 32-bit quantity, and I see
absolutely zero reason not to use standardized things. The only
exception is header files visible to user space, where __u32 should
be used for (obscure) reasons of avoiding naming clashes.

(Obscure because uint32_t is always supposed to be 32 bits.)


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-08-02 19:50     ` Jan Engelhardt
@ 2012-08-02 20:22         ` Sam Ravnborg
  0 siblings, 0 replies; 72+ messages in thread
From: Sam Ravnborg @ 2012-08-02 20:22 UTC (permalink / raw)
  To: Jan Engelhardt
  Cc: Andrew Stiegmann (stieg),
	linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp,
	gregkh

On Thu, Aug 02, 2012 at 09:50:02PM +0200, Jan Engelhardt wrote:
> 
> On Friday 2012-07-27 12:34, Sam Ravnborg wrote:
> >> +#ifndef _VMCI_COMMONINT_H_
> >> +#define _VMCI_COMMONINT_H_
> >> +
> >> +#include <linux/printk.h>
> >> +#include <linux/vmw_vmci_defs.h>
> >
> >Use an inverse christmas tree here.
> >Longer include lines first, and sort alphabetically when
> >lines are of the same length.
> 
> So that's where unreadable include lists come from.
> Depth-first lexicographically-sorted is a lot less hassle,
> especially when it comes to merging patches that each
> add one different include.
This is applied in many parts of the kernel and has some benefits:
- easy to spot duplicates
- clashes are less likely when two commits add includes
- easy to keep it looking the same across different files

Obviously <linux/*> comes before <asm/*>, as these are separate
blocks of includes.

net/ and arch/x86/ are two places where this is becoming the norm,
and they are trendsetters for the rest of the kernel.
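
As a rough illustration only (the header names below are arbitrary
examples, not taken from this driver), the resulting ordering looks
like:

#include <linux/moduleparam.h>
#include <linux/interrupt.h>
#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/slab.h>

#include <asm/processor.h>
#include <asm/io.h>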

> 
> >> +/*
> >> + * Utility function that checks whether two entities are allowed
> >> + * to interact. If one of them is restricted, the other one must
> >> + * be trusted.
> >> + */
> >> +static inline bool vmci_deny_interaction(uint32_t partOne,
> >> +					 uint32_t partTwo)
> >
> >The kernel types are u32, not uint32_t - these types belong in user-space.
> 
> Not really. uint32_t is the C99 type for a 32-bit quantity, and I see
> absolutely zero reason not to use standardized things.
Found the following somewhere on the net:

On Mon, 29 Nov 2004, Paul Mackerras wrote:
>
> uint32_t is defined to be exactly 32 bits wide, so where's the problem
> in using it instead of __u32 in the headers that describe the
> user/kernel interface?  (Ditto for uint{8,16,64}_t, of course.

Ok, this discussion has gone on for too long anyway, but let's make it
easier for everybody. The kernel uses u8/u16/u32 because:

	- the kernel should not depend on, or pollute user-space naming.
	  YOU MUST NOT USE "uint32_t" when that may not be defined, and
	  user-space rules for when it is defined are arcane and totally
	  arbitrary.
...

See http://yarchive.net/comp/linux/kernel_headers.html for additional
rationale. (Second mail listed).
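
To make the difference concrete, the helper quoted above would look
something like this with kernel types (sketch only; the privilege-flag
names below are placeholders, not the driver's actual definitions, and
the parameter names are adjusted to the usual lower_case style):

#include <linux/types.h>	/* u32, bool */

#define SKETCH_PRIV_RESTRICTED	0x1
#define SKETCH_PRIV_TRUSTED	0x2

static inline bool vmci_deny_interaction(u32 part_one, u32 part_two)
{
	/* If one side is restricted, the other side must be trusted. */
	return ((part_one & SKETCH_PRIV_RESTRICTED) &&
		!(part_two & SKETCH_PRIV_TRUSTED)) ||
	       ((part_two & SKETCH_PRIV_RESTRICTED) &&
		!(part_one & SKETCH_PRIV_TRUSTED));
}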

	Sam




^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [vmw_vmci 11/11] Apply the header code to make VMCI build
  2012-08-02 20:22         ` Sam Ravnborg
@ 2012-08-15 20:45           ` Jan Engelhardt
  -1 siblings, 0 replies; 72+ messages in thread
From: Jan Engelhardt @ 2012-08-15 20:45 UTC (permalink / raw)
  To: Sam Ravnborg
  Cc: Andrew Stiegmann (stieg),
	linux-kernel, virtualization, pv-drivers, vm-crosstalk, cschamp,
	gregkh


On Thursday 2012-08-02 22:22, Sam Ravnborg wrote:
>> On Friday 2012-07-27 12:34, Sam Ravnborg wrote:
>> >> +#ifndef _VMCI_COMMONINT_H_
>> >> +#define _VMCI_COMMONINT_H_
>> >> +
>> >> +#include <linux/printk.h>
>> >> +#include <linux/vmw_vmci_defs.h>
>> >
>> >Use an inverse christmas tree here.
>> >Longer include lines first, and sort alphabetically when
>> >lines are of the same length.
>> 
>> So that's where unreadable include lists come from.
>> Depth-first lexicographically-sorted is a lot less hassle,
>> especially when it comes to merging patches that each
>> add one different include.
>This is applied in many parts of the kernel and has some benefits:
>- easy to spot duplicates
>- clashes are less likely when two commits add includes

Sorting alone already addresses those two; the christmas-tree thing (for
files in a single dir) does not seem to add any extra value.


>>>The kernel types are u32, not uint32_t - these types belong in user-space.
>Found the following somewhere on the net:
>
>|	- the kernel should not depend on, or pollute user-space naming.
>|	  YOU MUST NOT USE "uint32_t" when that may not be defined, and
>|	  user-space rules for when it is defined are arcane and totally
>|	  arbitrary.

I can see the reasoning for header files, but it seems
irrelevant for code, in particular .c files, which never
get exposed to userspace in practice.
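
A small sketch of that split (the file roles and names here are made up
for illustration, not taken from this driver): a header that is part of
the user/kernel ABI sticks to the __u32 style, while a driver-internal
.c file can use the plain kernel types:

/* In a header visible to user space (part of the user/kernel ABI): */
#include <linux/types.h>

struct example_qp_info {
	__u32 handle;
	__u32 flags;
};

/* In a driver-internal .c file, never seen by user space: */
static u32 example_default_flags;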

^ permalink raw reply	[flat|nested] 72+ messages in thread

end of thread, other threads:[~2012-08-15 20:45 UTC | newest]

Thread overview: 72+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-07-26 23:39 [vmw_vmci 00/11] VMCI for Linux Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 01/11] Apply VMCI context code Andrew Stiegmann (stieg)
2012-07-26 23:48   ` Greg KH
2012-07-27  0:01     ` Andrew Stiegmann
2012-07-26 23:39 ` [vmw_vmci 02/11] Apply VMCI datagram code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 03/11] Apply VMCI doorbell code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 04/11] Apply VMCI driver code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 05/11] Apply VMCI event code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 06/11] Apply dynamic array code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 07/11] Apply VMCI hash table Andrew Stiegmann (stieg)
2012-07-26 23:49   ` Greg KH
2012-07-27  0:01     ` Andrew Stiegmann
2012-07-26 23:39 ` [vmw_vmci 08/11] Apply VMCI queue pairs Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 09/11] Apply VMCI resource code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 10/11] Apply vmci routing code Andrew Stiegmann (stieg)
2012-07-26 23:39 ` [vmw_vmci 11/11] Apply the header code to make VMCI build Andrew Stiegmann (stieg)
2012-07-26 23:56   ` Greg KH
2012-07-27  9:53   ` Alan Cox
2012-07-27 18:04     ` [Pv-drivers] " Dmitry Torokhov
2012-07-27 10:34   ` Sam Ravnborg
2012-07-27 17:20     ` Andrew Stiegmann
2012-07-27 18:16       ` Greg KH
2012-07-27 18:39         ` Andrew Stiegmann
2012-07-27 18:52           ` Greg KH
2012-07-27 20:29         ` [Pv-drivers] " Dmitry Torokhov
2012-07-28 19:55           ` Greg KH
2012-07-28 21:10             ` Dmitry Torokhov
2012-07-27 19:53       ` Sam Ravnborg
2012-07-27 20:07         ` Andrew Stiegmann
2012-08-02 19:50     ` Jan Engelhardt
2012-08-02 20:22       ` Sam Ravnborg
2012-08-15 20:45         ` Jan Engelhardt
2012-07-26 23:47 ` [vmw_vmci 00/11] VMCI for Linux Greg KH
2012-07-27  1:06 ` Josh Boyer
2012-07-27  1:46   ` Greg KH
2012-07-31 12:48     ` Josh Boyer
