linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 00/18] introduce the Xen PV Calls backend
@ 2017-07-03 21:08 Stefano Stabellini
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel; +Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky

Hi all,

this series introduces the backend for the newly added PV Calls
protocol.

PV Calls is a paravirtualized protocol that allows a set of POSIX
functions to be implemented in a different domain. The PV Calls
frontend forwards POSIX function calls to the backend, which executes
them and returns the result to the frontend.

For more information about PV Calls, please read:

https://xenbits.xen.org/docs/unstable/misc/pvcalls.html

I tried to split the source code into small pieces to make it easier to
read and understand. Please review!


Changes in v6:
- send ENOTSUPP to frontends for unsupported commands
- reset notify_all
- use a threaded irq handler instead of a workqueue to handle guest
  commands

Changes in v5:
- added reviewed-bys
- remove unnecessary gotos
- return 0 in pvcalls_back_connect
- do not lose ret values
- remove queue->rskq_lock
- make sure all accesses to socket_mappings and socketpass_mappings are
  protected by socket_lock
- rename ring_size to array_size

Changes in v4:
- add reviewed-bys
- fix return values of many functions
- remove pointless initializers
- print a warning if ring_order > MAX_RING_ORDER
- remove map->ioworker.cpu
- use queue_work instead of queue_work_on
- add sock_release() on error paths where appropriate
- add a comment in __pvcalls_back_accept about racing with
  pvcalls_back_accept and atomicity of reqcopy
- remove unneeded (void*) casts
- remove unneeded {}
- fix backend_disconnect if !mappass
- remove pointless continue in backend_disconnect
- remove pointless memset of &pvcalls_back_global
- pass *opaque to pvcalls_conn_back_read
- improve WARN_ON in pvcalls_conn_back_read
- fix error checks in pvcalls_conn_back_write
- XEN_PVCALLS_BACKEND depends on XEN_BACKEND
- rename priv to fedata across all patches

Changes in v3:
- added reviewed-bys
- return err from pvcalls_back_probe
- remove old comments
- use a xenstore transaction in pvcalls_back_probe
- ignore errors from xenbus_switch_state
- rename pvcalls_back_priv to pvcalls_fedata
- remove addr from backend_connect
- remove priv->work, add comment about theoretical race
- use IPPROTO_IP
- refactor active socket allocation in a single new function

Changes in v2:
- allocate one ioworker per socket (rather than 1 per vcpu)
- rename privs to frontends
- add newlines
- define "1" in the public header
- better error returns in pvcalls_back_probe
- do not set XenbusStateClosed twice in set_backend_state
- add more comments
- replace rw_semaphore with semaphore
- rename pvcallss to socket_lock
- move xenbus_map_ring_valloc closer to first use in backend_connect
- use more traditional return codes from pvcalls_back_handle_cmd and
  callees
- remove useless dev == NULL checks
- replace lock_sock with more appropriate and fine grained socket locks


Stefano Stabellini (18):
      xen: introduce the pvcalls interface header
      xen/pvcalls: introduce the pvcalls xenbus backend
      xen/pvcalls: initialize the module and register the xenbus backend
      xen/pvcalls: xenbus state handling
      xen/pvcalls: connect to a frontend
      xen/pvcalls: handle commands from the frontend
      xen/pvcalls: implement socket command
      xen/pvcalls: implement connect command
      xen/pvcalls: implement bind command
      xen/pvcalls: implement listen command
      xen/pvcalls: implement accept command
      xen/pvcalls: implement poll command
      xen/pvcalls: implement release command
      xen/pvcalls: disconnect and module_exit
      xen/pvcalls: implement the ioworker functions
      xen/pvcalls: implement read
      xen/pvcalls: implement write
      xen: introduce a Kconfig option to enable the pvcalls backend

 drivers/xen/Kconfig                |   12 +
 drivers/xen/Makefile               |    1 +
 drivers/xen/pvcalls-back.c         | 1238 ++++++++++++++++++++++++++++++++++++
 include/xen/interface/io/pvcalls.h |  121 ++++
 include/xen/interface/io/ring.h    |    2 +
 5 files changed, 1374 insertions(+)
 create mode 100644 drivers/xen/pvcalls-back.c
 create mode 100644 include/xen/interface/io/pvcalls.h

^ permalink raw reply	[flat|nested] 44+ messages in thread

* [PATCH v6 01/18] xen: introduce the pvcalls interface header
  2017-07-03 21:08 [PATCH v6 00/18] introduce the Xen PV Calls backend Stefano Stabellini
@ 2017-07-03 21:08 ` Stefano Stabellini
  2017-07-03 21:08   ` [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
                     ` (17 more replies)
  0 siblings, 18 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky,
	Stefano Stabellini, konrad.wilk

Introduce the C header file which defines the PV Calls interface. It is
imported from xen/include/public/io/pvcalls.h.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: konrad.wilk@oracle.com
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 include/xen/interface/io/pvcalls.h | 121 +++++++++++++++++++++++++++++++++++++
 include/xen/interface/io/ring.h    |   2 +
 2 files changed, 123 insertions(+)
 create mode 100644 include/xen/interface/io/pvcalls.h

diff --git a/include/xen/interface/io/pvcalls.h b/include/xen/interface/io/pvcalls.h
new file mode 100644
index 0000000..ccf97b8
--- /dev/null
+++ b/include/xen/interface/io/pvcalls.h
@@ -0,0 +1,121 @@
+#ifndef __XEN_PUBLIC_IO_XEN_PVCALLS_H__
+#define __XEN_PUBLIC_IO_XEN_PVCALLS_H__
+
+#include <linux/net.h>
+#include <xen/interface/io/ring.h>
+#include <xen/interface/grant_table.h>
+
+/* "1" means socket, connect, release, bind, listen, accept and poll */
+#define XENBUS_FUNCTIONS_CALLS "1"
+
+/*
+ * See docs/misc/pvcalls.markdown in xen.git for the full specification:
+ * https://xenbits.xen.org/docs/unstable/misc/pvcalls.html
+ */
+struct pvcalls_data_intf {
+    RING_IDX in_cons, in_prod, in_error;
+
+    uint8_t pad1[52];
+
+    RING_IDX out_cons, out_prod, out_error;
+
+    uint8_t pad2[52];
+
+    RING_IDX ring_order;
+    grant_ref_t ref[];
+};
+DEFINE_XEN_FLEX_RING(pvcalls);
+
+#define PVCALLS_SOCKET         0
+#define PVCALLS_CONNECT        1
+#define PVCALLS_RELEASE        2
+#define PVCALLS_BIND           3
+#define PVCALLS_LISTEN         4
+#define PVCALLS_ACCEPT         5
+#define PVCALLS_POLL           6
+
+struct xen_pvcalls_request {
+    uint32_t req_id; /* private to guest, echoed in response */
+    uint32_t cmd;    /* command to execute */
+    union {
+        struct xen_pvcalls_socket {
+            uint64_t id;
+            uint32_t domain;
+            uint32_t type;
+            uint32_t protocol;
+        } socket;
+        struct xen_pvcalls_connect {
+            uint64_t id;
+            uint8_t addr[28];
+            uint32_t len;
+            uint32_t flags;
+            grant_ref_t ref;
+            uint32_t evtchn;
+        } connect;
+        struct xen_pvcalls_release {
+            uint64_t id;
+            uint8_t reuse;
+        } release;
+        struct xen_pvcalls_bind {
+            uint64_t id;
+            uint8_t addr[28];
+            uint32_t len;
+        } bind;
+        struct xen_pvcalls_listen {
+            uint64_t id;
+            uint32_t backlog;
+        } listen;
+        struct xen_pvcalls_accept {
+            uint64_t id;
+            uint64_t id_new;
+            grant_ref_t ref;
+            uint32_t evtchn;
+        } accept;
+        struct xen_pvcalls_poll {
+            uint64_t id;
+        } poll;
+        /* dummy member to force sizeof(struct xen_pvcalls_request)
+         * to match across archs */
+        struct xen_pvcalls_dummy {
+            uint8_t dummy[56];
+        } dummy;
+    } u;
+};
+
+struct xen_pvcalls_response {
+    uint32_t req_id;
+    uint32_t cmd;
+    int32_t ret;
+    uint32_t pad;
+    union {
+        struct _xen_pvcalls_socket {
+            uint64_t id;
+        } socket;
+        struct _xen_pvcalls_connect {
+            uint64_t id;
+        } connect;
+        struct _xen_pvcalls_release {
+            uint64_t id;
+        } release;
+        struct _xen_pvcalls_bind {
+            uint64_t id;
+        } bind;
+        struct _xen_pvcalls_listen {
+            uint64_t id;
+        } listen;
+        struct _xen_pvcalls_accept {
+            uint64_t id;
+        } accept;
+        struct _xen_pvcalls_poll {
+            uint64_t id;
+        } poll;
+        struct _xen_pvcalls_dummy {
+            uint8_t dummy[8];
+        } dummy;
+    } u;
+};
+
+DEFINE_RING_TYPES(xen_pvcalls, struct xen_pvcalls_request,
+                  struct xen_pvcalls_response);
+
+#endif
diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index c794568..e547088 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -9,6 +9,8 @@
 #ifndef __XEN_PUBLIC_IO_RING_H__
 #define __XEN_PUBLIC_IO_RING_H__
 
+#include <xen/interface/grant_table.h>
+
 typedef unsigned int RING_IDX;
 
 /* Round a 32-bit unsigned constant down to the nearest power of two. */
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  8:04     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 03/18] xen/pvcalls: initialize the module and register the " Stefano Stabellini
                     ` (16 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce a xenbus backend for the pvcalls protocol, as defined by
https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.

This patch only adds the stubs; the actual implementation will be added
by the following patches.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 drivers/xen/pvcalls-back.c

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
new file mode 100644
index 0000000..f3d0daa
--- /dev/null
+++ b/drivers/xen/pvcalls-back.c
@@ -0,0 +1,61 @@
+/*
+ * (c) 2017 Stefano Stabellini <stefano@aporeto.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/radix-tree.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+#include <linux/wait.h>
+
+#include <xen/events.h>
+#include <xen/grant_table.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+#include <xen/interface/io/pvcalls.h>
+
+static int pvcalls_back_probe(struct xenbus_device *dev,
+			      const struct xenbus_device_id *id)
+{
+	return 0;
+}
+
+static void pvcalls_back_changed(struct xenbus_device *dev,
+				 enum xenbus_state frontend_state)
+{
+}
+
+static int pvcalls_back_remove(struct xenbus_device *dev)
+{
+	return 0;
+}
+
+static int pvcalls_back_uevent(struct xenbus_device *xdev,
+			       struct kobj_uevent_env *env)
+{
+	return 0;
+}
+
+static const struct xenbus_device_id pvcalls_back_ids[] = {
+	{ "pvcalls" },
+	{ "" }
+};
+
+static struct xenbus_driver pvcalls_back_driver = {
+	.ids = pvcalls_back_ids,
+	.probe = pvcalls_back_probe,
+	.remove = pvcalls_back_remove,
+	.uevent = pvcalls_back_uevent,
+	.otherend_changed = pvcalls_back_changed,
+};
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 03/18] xen/pvcalls: initialize the module and register the xenbus backend
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
  2017-07-03 21:08   ` [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  8:04     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 04/18] xen/pvcalls: xenbus state handling Stefano Stabellini
                     ` (15 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Keep a list of connected frontends. Use a semaphore to protect list
accesses.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index f3d0daa..9044cf2 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -25,6 +25,11 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+struct pvcalls_back_global {
+	struct list_head frontends;
+	struct semaphore frontends_lock;
+} pvcalls_back_global;
+
 static int pvcalls_back_probe(struct xenbus_device *dev,
 			      const struct xenbus_device_id *id)
 {
@@ -59,3 +64,20 @@ static int pvcalls_back_uevent(struct xenbus_device *xdev,
 	.uevent = pvcalls_back_uevent,
 	.otherend_changed = pvcalls_back_changed,
 };
+
+static int __init pvcalls_back_init(void)
+{
+	int ret;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	ret = xenbus_register_backend(&pvcalls_back_driver);
+	if (ret < 0)
+		return ret;
+
+	sema_init(&pvcalls_back_global.frontends_lock, 1);
+	INIT_LIST_HEAD(&pvcalls_back_global.frontends);
+	return 0;
+}
+module_init(pvcalls_back_init);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 04/18] xen/pvcalls: xenbus state handling
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
  2017-07-03 21:08   ` [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
  2017-07-03 21:08   ` [PATCH v6 03/18] xen/pvcalls: initialize the module and register the " Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  8:03     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 05/18] xen/pvcalls: connect to a frontend Stefano Stabellini
                     ` (14 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce the code to handle xenbus state changes.

Implement the probe function for the pvcalls backend. Write the
supported versions, max-page-order and function-calls nodes to xenstore,
as required by the protocol.

Introduce stub functions for disconnecting/connecting to a frontend.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 152 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 152 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 9044cf2..7bce750 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -25,20 +25,172 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+#define PVCALLS_VERSIONS "1"
+#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
+
 struct pvcalls_back_global {
 	struct list_head frontends;
 	struct semaphore frontends_lock;
 } pvcalls_back_global;
 
+static int backend_connect(struct xenbus_device *dev)
+{
+	return 0;
+}
+
+static int backend_disconnect(struct xenbus_device *dev)
+{
+	return 0;
+}
+
 static int pvcalls_back_probe(struct xenbus_device *dev,
 			      const struct xenbus_device_id *id)
 {
+	int err, abort;
+	struct xenbus_transaction xbt;
+
+again:
+	abort = 1;
+
+	err = xenbus_transaction_start(&xbt);
+	if (err) {
+		pr_warn("%s cannot create xenstore transaction\n", __func__);
+		return err;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "versions", "%s",
+			    PVCALLS_VERSIONS);
+	if (err) {
+		pr_warn("%s write out 'versions' failed\n", __func__);
+		goto abort;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "max-page-order", "%u",
+			    MAX_RING_ORDER);
+	if (err) {
+		pr_warn("%s write out 'max-page-order' failed\n", __func__);
+		goto abort;
+	}
+
+	err = xenbus_printf(xbt, dev->nodename, "function-calls",
+			    XENBUS_FUNCTIONS_CALLS);
+	if (err) {
+		pr_warn("%s write out 'function-calls' failed\n", __func__);
+		goto abort;
+	}
+
+	abort = 0;
+abort:
+	err = xenbus_transaction_end(xbt, abort);
+	if (err) {
+		if (err == -EAGAIN && !abort)
+			goto again;
+		pr_warn("%s cannot complete xenstore transaction\n", __func__);
+		return err;
+	}
+
+	xenbus_switch_state(dev, XenbusStateInitWait);
+
 	return 0;
 }
 
+static void set_backend_state(struct xenbus_device *dev,
+			      enum xenbus_state state)
+{
+	while (dev->state != state) {
+		switch (dev->state) {
+		case XenbusStateClosed:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+				xenbus_switch_state(dev, XenbusStateInitWait);
+				break;
+			case XenbusStateClosing:
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateInitWait:
+		case XenbusStateInitialised:
+			switch (state) {
+			case XenbusStateConnected:
+				backend_connect(dev);
+				xenbus_switch_state(dev, XenbusStateConnected);
+				break;
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateConnected:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				down(&pvcalls_back_global.frontends_lock);
+				backend_disconnect(dev);
+				up(&pvcalls_back_global.frontends_lock);
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateClosing:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+			case XenbusStateClosed:
+				xenbus_switch_state(dev, XenbusStateClosed);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		default:
+			__WARN();
+		}
+	}
+}
+
 static void pvcalls_back_changed(struct xenbus_device *dev,
 				 enum xenbus_state frontend_state)
 {
+	switch (frontend_state) {
+	case XenbusStateInitialising:
+		set_backend_state(dev, XenbusStateInitWait);
+		break;
+
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+		set_backend_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		set_backend_state(dev, XenbusStateClosing);
+		break;
+
+	case XenbusStateClosed:
+		set_backend_state(dev, XenbusStateClosed);
+		if (xenbus_dev_is_online(dev))
+			break;
+		device_unregister(&dev->dev);
+		break;
+	case XenbusStateUnknown:
+		set_backend_state(dev, XenbusStateClosed);
+		device_unregister(&dev->dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+				 frontend_state);
+		break;
+	}
 }
 
 static int pvcalls_back_remove(struct xenbus_device *dev)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 05/18] xen/pvcalls: connect to a frontend
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (2 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 04/18] xen/pvcalls: xenbus state handling Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  6:59     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend Stefano Stabellini
                     ` (13 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce a per-frontend data structure named pvcalls_fedata. It
contains pointers to the command ring, its event channel, a list of
active sockets and a tree of passive sockets (passive sockets need to be
looked up from the id on listen, accept and poll commands, while active
sockets only on release).

It also has an unbound workqueue to schedule the work of parsing and
executing commands on the command ring. socket_lock protects the two
lists. In pvcalls_back_global, keep a list of connected frontends.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 92 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 92 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 7bce750..e4c2e46 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -33,9 +33,101 @@ struct pvcalls_back_global {
 	struct semaphore frontends_lock;
 } pvcalls_back_global;
 
+/*
+ * Per-frontend data structure. It contains pointers to the command
+ * ring, its event channel, a list of active sockets and a tree of
+ * passive sockets.
+ */
+struct pvcalls_fedata {
+	struct list_head list;
+	struct xenbus_device *dev;
+	struct xen_pvcalls_sring *sring;
+	struct xen_pvcalls_back_ring ring;
+	int irq;
+	struct list_head socket_mappings;
+	struct radix_tree_root socketpass_mappings;
+	struct semaphore socket_lock;
+	struct workqueue_struct *wq;
+	struct work_struct register_work;
+};
+
+static void pvcalls_back_work(struct work_struct *work)
+{
+}
+
+static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
+{
+	return IRQ_HANDLED;
+}
+
 static int backend_connect(struct xenbus_device *dev)
 {
+	int err, evtchn;
+	grant_ref_t ring_ref;
+	struct pvcalls_fedata *fedata = NULL;
+
+	fedata = kzalloc(sizeof(struct pvcalls_fedata), GFP_KERNEL);
+	if (!fedata)
+		return -ENOMEM;
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
+			   &evtchn);
+	if (err != 1) {
+		err = -EINVAL;
+		xenbus_dev_fatal(dev, err, "reading %s/event-channel",
+				 dev->otherend);
+		goto error;
+	}
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
+	if (err != 1) {
+		err = -EINVAL;
+		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
+				 dev->otherend);
+		goto error;
+	}
+
+	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
+						    pvcalls_back_event, 0,
+						    "pvcalls-backend", dev);
+	if (err < 0)
+		goto error;
+	fedata->irq = err;
+
+	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
+	if (!fedata->wq) {
+		err = -ENOMEM;
+		goto error;
+	}
+
+	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
+	if (err < 0)
+		goto error;
+
+	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
+	fedata->dev = dev;
+
+	INIT_WORK(&fedata->register_work, pvcalls_back_work);
+	INIT_LIST_HEAD(&fedata->socket_mappings);
+	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
+	sema_init(&fedata->socket_lock, 1);
+	dev_set_drvdata(&dev->dev, fedata);
+
+	down(&pvcalls_back_global.frontends_lock);
+	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
+	up(&pvcalls_back_global.frontends_lock);
+	queue_work(fedata->wq, &fedata->register_work);
+
 	return 0;
+
+ error:
+	if (fedata->sring != NULL)
+		xenbus_unmap_ring_vfree(dev, fedata->sring);
+	if (fedata->wq)
+		destroy_workqueue(fedata->wq);
+	unbind_from_irqhandler(fedata->irq, dev);
+	kfree(fedata);
+	return err;
 }
 
 static int backend_disconnect(struct xenbus_device *dev)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (3 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 05/18] xen/pvcalls: connect to a frontend Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  6:57     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 07/18] xen/pvcalls: implement socket command Stefano Stabellini
                     ` (12 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When the other end notifies us that there are commands to be read
(pvcalls_back_event), wake up the backend thread to parse the command.

The command ring works like most other Xen rings, so use the usual
ring macros to read and write to it. The functions implementing the
commands are empty stubs for now.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 144 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 129 insertions(+), 15 deletions(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index e4c2e46..9e00971 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -47,16 +47,135 @@ struct pvcalls_fedata {
 	struct list_head socket_mappings;
 	struct radix_tree_root socketpass_mappings;
 	struct semaphore socket_lock;
-	struct workqueue_struct *wq;
-	struct work_struct register_work;
 };
 
-static void pvcalls_back_work(struct work_struct *work)
+static int pvcalls_back_socket(struct xenbus_device *dev,
+		struct xen_pvcalls_request *req)
 {
+	return 0;
+}
+
+static int pvcalls_back_connect(struct xenbus_device *dev,
+				struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_release(struct xenbus_device *dev,
+				struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_bind(struct xenbus_device *dev,
+			     struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_listen(struct xenbus_device *dev,
+			       struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_accept(struct xenbus_device *dev,
+			       struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_poll(struct xenbus_device *dev,
+			     struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_handle_cmd(struct xenbus_device *dev,
+				   struct xen_pvcalls_request *req)
+{
+	int ret = 0;
+
+	switch (req->cmd) {
+	case PVCALLS_SOCKET:
+		ret = pvcalls_back_socket(dev, req);
+		break;
+	case PVCALLS_CONNECT:
+		ret = pvcalls_back_connect(dev, req);
+		break;
+	case PVCALLS_RELEASE:
+		ret = pvcalls_back_release(dev, req);
+		break;
+	case PVCALLS_BIND:
+		ret = pvcalls_back_bind(dev, req);
+		break;
+	case PVCALLS_LISTEN:
+		ret = pvcalls_back_listen(dev, req);
+		break;
+	case PVCALLS_ACCEPT:
+		ret = pvcalls_back_accept(dev, req);
+		break;
+	case PVCALLS_POLL:
+		ret = pvcalls_back_poll(dev, req);
+		break;
+	default:
+	{
+		struct pvcalls_fedata *fedata;
+		struct xen_pvcalls_response *rsp;
+
+		fedata = dev_get_drvdata(&dev->dev);
+		rsp = RING_GET_RESPONSE(
+				&fedata->ring, fedata->ring.rsp_prod_pvt++);
+		rsp->req_id = req->req_id;
+		rsp->cmd = req->cmd;
+		rsp->ret = -ENOTSUPP;
+		break;
+	}
+	}
+	return ret;
+}
+
+static void pvcalls_back_work(struct pvcalls_fedata *fedata)
+{
+	int notify, notify_all = 0, more = 1;
+	struct xen_pvcalls_request req;
+	struct xenbus_device *dev = fedata->dev;
+
+	while (more) {
+		while (RING_HAS_UNCONSUMED_REQUESTS(&fedata->ring)) {
+			RING_COPY_REQUEST(&fedata->ring,
+					  fedata->ring.req_cons++,
+					  &req);
+
+			if (!pvcalls_back_handle_cmd(dev, &req)) {
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
+					&fedata->ring, notify);
+				notify_all += notify;
+			}
+		}
+
+		if (notify_all) {
+			notify_remote_via_irq(fedata->irq);
+			notify_all = 0;
+		}
+
+		RING_FINAL_CHECK_FOR_REQUESTS(&fedata->ring, more);
+	}
 }
 
 static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 {
+	struct xenbus_device *dev = dev_id;
+	struct pvcalls_fedata *fedata = NULL;
+
+	if (dev == NULL)
+		return IRQ_HANDLED;
+
+	fedata = dev_get_drvdata(&dev->dev);
+	if (fedata == NULL)
+		return IRQ_HANDLED;
+
+	pvcalls_back_work(fedata);
 	return IRQ_HANDLED;
 }
 
@@ -87,18 +206,15 @@ static int backend_connect(struct xenbus_device *dev)
 		goto error;
 	}
 
-	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
-						    pvcalls_back_event, 0,
-						    "pvcalls-backend", dev);
+	err = bind_interdomain_evtchn_to_irq(dev->otherend_id, evtchn);
 	if (err < 0)
 		goto error;
 	fedata->irq = err;
-
-	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
-	if (!fedata->wq) {
-		err = -ENOMEM;
+
+	err = request_threaded_irq(fedata->irq, NULL, pvcalls_back_event,
+				   IRQF_ONESHOT, "pvcalls-back", dev);
+	if (err < 0)
 		goto error;
-	}
 
 	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
 	if (err < 0)
@@ -107,7 +223,6 @@ static int backend_connect(struct xenbus_device *dev)
 	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
 	fedata->dev = dev;
 
-	INIT_WORK(&fedata->register_work, pvcalls_back_work);
 	INIT_LIST_HEAD(&fedata->socket_mappings);
 	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
 	sema_init(&fedata->socket_lock, 1);
@@ -116,15 +231,14 @@ static int backend_connect(struct xenbus_device *dev)
 	down(&pvcalls_back_global.frontends_lock);
 	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
 	up(&pvcalls_back_global.frontends_lock);
-	queue_work(fedata->wq, &fedata->register_work);
+
+	pvcalls_back_work(fedata);
 
 	return 0;
 
  error:
 	if (fedata->sring != NULL)
 		xenbus_unmap_ring_vfree(dev, fedata->sring);
-	if (fedata->wq)
-		destroy_workqueue(fedata->wq);
 	unbind_from_irqhandler(fedata->irq, dev);
 	kfree(fedata);
 	return err;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 07/18] xen/pvcalls: implement socket command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (4 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:52     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 08/18] xen/pvcalls: implement connect command Stefano Stabellini
                     ` (11 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Just reply with success to the other end for now. Delay the allocation
of the actual socket to bind and/or connect.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 9e00971..53fd908 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -12,12 +12,17 @@
  * GNU General Public License for more details.
  */
 
+#include <linux/inet.h>
 #include <linux/kthread.h>
 #include <linux/list.h>
 #include <linux/radix-tree.h>
 #include <linux/module.h>
 #include <linux/semaphore.h>
 #include <linux/wait.h>
+#include <net/sock.h>
+#include <net/inet_common.h>
+#include <net/inet_connection_sock.h>
+#include <net/request_sock.h>
 
 #include <xen/events.h>
 #include <xen/grant_table.h>
@@ -52,6 +57,28 @@ struct pvcalls_fedata {
 static int pvcalls_back_socket(struct xenbus_device *dev,
 		struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	int ret;
+	struct xen_pvcalls_response *rsp;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	if (req->u.socket.domain != AF_INET ||
+	    req->u.socket.type != SOCK_STREAM ||
+	    (req->u.socket.protocol != IPPROTO_IP &&
+	     req->u.socket.protocol != AF_INET))
+		ret = -EAFNOSUPPORT;
+	else
+		ret = 0;
+
+	/* leave the actual socket allocation for later */
+
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.socket.id = req->u.socket.id;
+	rsp->ret = ret;
+
 	return 0;
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH v6 08/18] xen/pvcalls: implement connect command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (5 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 07/18] xen/pvcalls: implement socket command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:14     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 09/18] xen/pvcalls: implement bind command Stefano Stabellini
                     ` (10 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Allocate a socket. Keep track of socket <-> ring mappings with a new data
structure, called sock_mapping. Implement the connect command by calling
inet_stream_connect, and mapping the new indexes page and data ring.
Allocate a workqueue and a work_struct, called ioworker, to perform
reads and writes to the socket.

When an active socket is closed (sk_state_change), set in_error to
-ENOTCONN and notify the other end, as specified by the protocol.

sk_data_ready and pvcalls_back_ioworker will be implemented later.
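
The save-and-replace of the socket callbacks described above can be
modeled in userspace as follows (a sketch with made-up `fake_sock`/
`fake_map` types, not the kernel code; in the kernel the swap runs under
sk_callback_lock, which is omitted here):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Model of the callback hooking done in pvcalls_new_active_socket():
 * the original sk_data_ready pointer is saved in the mapping so it can
 * be restored when the socket is released.
 */
struct fake_sock {
	void (*data_ready)(struct fake_sock *sk);
	void *user_data;
};

struct fake_map {
	void (*saved_data_ready)(struct fake_sock *sk);
};

static int default_ready_calls;

static void default_data_ready(struct fake_sock *sk)
{
	default_ready_calls++;
}

static void pvcalls_data_ready(struct fake_sock *sk)
{
	/* the real handler schedules the ioworker */
}

static void hook_callbacks(struct fake_map *map, struct fake_sock *sk)
{
	map->saved_data_ready = sk->data_ready;
	sk->user_data = map;
	sk->data_ready = pvcalls_data_ready;
}

static void unhook_callbacks(struct fake_map *map, struct fake_sock *sk)
{
	sk->user_data = NULL;
	sk->data_ready = map->saved_data_ready;
}
```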

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 174 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 53fd908..1bc2620 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -54,6 +54,39 @@ struct pvcalls_fedata {
 	struct semaphore socket_lock;
 };
 
+struct pvcalls_ioworker {
+	struct work_struct register_work;
+	struct workqueue_struct *wq;
+};
+
+struct sock_mapping {
+	struct list_head list;
+	struct pvcalls_fedata *fedata;
+	struct socket *sock;
+	uint64_t id;
+	grant_ref_t ref;
+	struct pvcalls_data_intf *ring;
+	void *bytes;
+	struct pvcalls_data data;
+	uint32_t ring_order;
+	int irq;
+	atomic_t read;
+	atomic_t write;
+	atomic_t io;
+	atomic_t release;
+	void (*saved_data_ready)(struct sock *sk);
+	struct pvcalls_ioworker ioworker;
+};
+
+static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
+static int pvcalls_back_release_active(struct xenbus_device *dev,
+				       struct pvcalls_fedata *fedata,
+				       struct sock_mapping *map);
+
+static void pvcalls_back_ioworker(struct work_struct *work)
+{
+}
+
 static int pvcalls_back_socket(struct xenbus_device *dev,
 		struct xen_pvcalls_request *req)
 {
@@ -82,9 +115,145 @@ static int pvcalls_back_socket(struct xenbus_device *dev,
 	return 0;
 }
 
+static void pvcalls_sk_state_change(struct sock *sock)
+{
+	struct sock_mapping *map = sock->sk_user_data;
+	struct pvcalls_data_intf *intf;
+
+	if (map == NULL)
+		return;
+
+	intf = map->ring;
+	intf->in_error = -ENOTCONN;
+	notify_remote_via_irq(map->irq);
+}
+
+static void pvcalls_sk_data_ready(struct sock *sock)
+{
+}
+
+static struct sock_mapping *pvcalls_new_active_socket(
+		struct pvcalls_fedata *fedata,
+		uint64_t id,
+		grant_ref_t ref,
+		uint32_t evtchn,
+		struct socket *sock)
+{
+	int ret;
+	struct sock_mapping *map;
+	void *page;
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL)
+		return NULL;
+
+	map->fedata = fedata;
+	map->sock = sock;
+	map->id = id;
+	map->ref = ref;
+
+	ret = xenbus_map_ring_valloc(fedata->dev, &ref, 1, &page);
+	if (ret < 0)
+		goto out;
+	map->ring = page;
+	map->ring_order = map->ring->ring_order;
+	/* first read the order, then map the data ring */
+	virt_rmb();
+	if (map->ring_order > MAX_RING_ORDER) {
+		pr_warn("%s frontend requested ring_order %u, which is > MAX (%u)\n",
+				__func__, map->ring_order, MAX_RING_ORDER);
+		goto out;
+	}
+	ret = xenbus_map_ring_valloc(fedata->dev, map->ring->ref,
+				     (1 << map->ring_order), &page);
+	if (ret < 0)
+		goto out;
+	map->bytes = page;
+
+	ret = bind_interdomain_evtchn_to_irqhandler(fedata->dev->otherend_id,
+						    evtchn,
+						    pvcalls_back_conn_event,
+						    0,
+						    "pvcalls-backend",
+						    map);
+	if (ret < 0)
+		goto out;
+	map->irq = ret;
+
+	map->data.in = map->bytes;
+	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
+
+	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
+	if (!map->ioworker.wq)
+		goto out;
+	atomic_set(&map->io, 1);
+	INIT_WORK(&map->ioworker.register_work,	pvcalls_back_ioworker);
+
+	down(&fedata->socket_lock);
+	list_add_tail(&map->list, &fedata->socket_mappings);
+	up(&fedata->socket_lock);
+
+	write_lock_bh(&map->sock->sk->sk_callback_lock);
+	map->saved_data_ready = map->sock->sk->sk_data_ready;
+	map->sock->sk->sk_user_data = map;
+	map->sock->sk->sk_data_ready = pvcalls_sk_data_ready;
+	map->sock->sk->sk_state_change = pvcalls_sk_state_change;
+	write_unlock_bh(&map->sock->sk->sk_callback_lock);
+
+	return map;
+out:
+	down(&fedata->socket_lock);
+	list_del(&map->list);
+	pvcalls_back_release_active(fedata->dev, fedata, map);
+	up(&fedata->socket_lock);
+	return NULL;
+}
+
 static int pvcalls_back_connect(struct xenbus_device *dev,
 				struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	int ret = -EINVAL;
+	struct socket *sock;
+	struct sock_mapping *map;
+	struct xen_pvcalls_response *rsp;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
+	if (ret < 0)
+		goto out;
+	ret = inet_stream_connect(sock, (struct sockaddr *)&req->u.connect.addr,
+				  req->u.connect.len, req->u.connect.flags);
+	if (ret < 0) {
+		sock_release(sock);
+		goto out;
+	}
+
+	map = pvcalls_new_active_socket(fedata,
+					req->u.connect.id,
+					req->u.connect.ref,
+					req->u.connect.evtchn,
+					sock);
+	if (!map) {
+		ret = -EFAULT;
+		sock_release(sock);
+	}
+
+out:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.connect.id = req->u.connect.id;
+	rsp->ret = ret;
+
+	return 0;
+}
+
+static int pvcalls_back_release_active(struct xenbus_device *dev,
+				       struct pvcalls_fedata *fedata,
+				       struct sock_mapping *map)
+{
 	return 0;
 }
 
@@ -206,6 +375,11 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
+{
+	return IRQ_HANDLED;
+}
+
 static int backend_connect(struct xenbus_device *dev)
 {
 	int err, evtchn;
-- 
1.9.1

* [PATCH v6 09/18] xen/pvcalls: implement bind command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (6 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 08/18] xen/pvcalls: implement connect command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:24     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 10/18] xen/pvcalls: implement listen command Stefano Stabellini
                     ` (9 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Allocate a socket. Track the allocated passive sockets with a new data
structure named sockpass_mapping. It contains an unbound workqueue to
schedule delayed work for the accept and poll commands, and a reqcopy
field used to store a copy of a request for delayed work. Reads/writes
to it are protected by a lock (the "copy_lock" spinlock). Initialize
the workqueue in pvcalls_back_bind.

Implement the bind command with inet_bind.

The pass_sk_data_ready event handler will be added later.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 1bc2620..dae91fb 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -78,6 +78,18 @@ struct sock_mapping {
 	struct pvcalls_ioworker ioworker;
 };
 
+struct sockpass_mapping {
+	struct list_head list;
+	struct pvcalls_fedata *fedata;
+	struct socket *sock;
+	uint64_t id;
+	struct xen_pvcalls_request reqcopy;
+	spinlock_t copy_lock;
+	struct workqueue_struct *wq;
+	struct work_struct register_work;
+	void (*saved_data_ready)(struct sock *sk);
+};
+
 static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
 static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_fedata *fedata,
@@ -263,9 +275,84 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 	return 0;
 }
 
+static void __pvcalls_back_accept(struct work_struct *work)
+{
+}
+
+static void pvcalls_pass_sk_data_ready(struct sock *sock)
+{
+}
+
 static int pvcalls_back_bind(struct xenbus_device *dev,
 			     struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	int ret, err;
+	struct socket *sock;
+	struct sockpass_mapping *map;
+	struct xen_pvcalls_response *rsp;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	INIT_WORK(&map->register_work, __pvcalls_back_accept);
+	spin_lock_init(&map->copy_lock);
+	map->wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);
+	if (!map->wq) {
+		ret = -ENOMEM;
+		kfree(map);
+		goto out;
+	}
+
+	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
+	if (ret < 0) {
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	ret = inet_bind(sock, (struct sockaddr *)&req->u.bind.addr,
+			req->u.bind.len);
+	if (ret < 0) {
+		sock_release(sock);
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	map->fedata = fedata;
+	map->sock = sock;
+	map->id = req->u.bind.id;
+
+	down(&fedata->socket_lock);
+	err = radix_tree_insert(&fedata->socketpass_mappings, map->id,
+				map);
+	up(&fedata->socket_lock);
+	if (err) {
+		ret = err;
+		sock_release(sock);
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	write_lock_bh(&sock->sk->sk_callback_lock);
+	map->saved_data_ready = sock->sk->sk_data_ready;
+	sock->sk->sk_user_data = map;
+	sock->sk->sk_data_ready = pvcalls_pass_sk_data_ready;
+	write_unlock_bh(&sock->sk->sk_callback_lock);
+
+out:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.bind.id = req->u.bind.id;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1

* [PATCH v6 10/18] xen/pvcalls: implement listen command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (7 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 09/18] xen/pvcalls: implement bind command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:51     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 11/18] xen/pvcalls: implement accept command Stefano Stabellini
                     ` (8 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Call inet_listen to implement the listen command.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index dae91fb..689b84f 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -359,6 +359,27 @@ static int pvcalls_back_bind(struct xenbus_device *dev,
 static int pvcalls_back_listen(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	int ret = -EINVAL;
+	struct sockpass_mapping *map;
+	struct xen_pvcalls_response *rsp;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	map = radix_tree_lookup(&fedata->socketpass_mappings, req->u.listen.id);
+	up(&fedata->socket_lock);
+	if (map == NULL)
+		goto out;
+
+	ret = inet_listen(map->sock, req->u.listen.backlog);
+
+out:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.listen.id = req->u.listen.id;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1

* [PATCH v6 11/18] xen/pvcalls: implement accept command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (8 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 10/18] xen/pvcalls: implement listen command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:26     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 12/18] xen/pvcalls: implement poll command Stefano Stabellini
                     ` (7 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement the accept command by calling inet_accept. To avoid blocking
in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
scheduled on sk_data_ready (for a passive socket, this means that there
are connections to accept).

Use the reqcopy field to store the request. Accept the new socket from
the delayed work function, create a new sock_mapping for it, map
the indexes page and data ring, and reply to the other end. Allocate an
ioworker for the socket.

Only support one outstanding blocking accept request for every socket at
any time.

Add a field to sock_mapping to remember the passive socket from which an
active socket was created.
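
The "only one outstanding accept or poll" rule enforced through
reqcopy.cmd can be modeled in userspace like this (illustrative names;
the kernel serializes with the copy_lock spinlock, while a single
compare-and-swap stands in for it here):

```c
#include <assert.h>
#include <errno.h>
#include <stdatomic.h>

/*
 * Model of the reqcopy slot: cmd != 0 marks the slot busy. A second
 * accept/poll on the same passive socket fails with -EINTR until the
 * delayed work clears the slot.
 */
static int claim_reqcopy(_Atomic int *reqcopy_cmd, int cmd)
{
	int expected = 0;

	if (!atomic_compare_exchange_strong(reqcopy_cmd, &expected, cmd))
		return -EINTR;	/* slot already taken */
	return 0;
}

static void release_reqcopy(_Atomic int *reqcopy_cmd)
{
	atomic_store(reqcopy_cmd, 0);
}
```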

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 113 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 689b84f..75b7b9a9 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -62,6 +62,7 @@ struct pvcalls_ioworker {
 struct sock_mapping {
 	struct list_head list;
 	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *sockpass;
 	struct socket *sock;
 	uint64_t id;
 	grant_ref_t ref;
@@ -277,10 +278,83 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 
 static void __pvcalls_back_accept(struct work_struct *work)
 {
+	struct sockpass_mapping *mappass = container_of(
+		work, struct sockpass_mapping, register_work);
+	struct sock_mapping *map;
+	struct pvcalls_ioworker *iow;
+	struct pvcalls_fedata *fedata;
+	struct socket *sock;
+	struct xen_pvcalls_response *rsp;
+	struct xen_pvcalls_request *req;
+	int notify;
+	int ret = -EINVAL;
+	unsigned long flags;
+
+	fedata = mappass->fedata;
+	/*
+	 * __pvcalls_back_accept can race against pvcalls_back_accept.
+	 * We only need to check the value of "cmd" on read. It could be
+	 * done atomically, but to simplify the code on the write side, we
+	 * use a spinlock.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	req = &mappass->reqcopy;
+	if (req->cmd != PVCALLS_ACCEPT) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	sock = sock_alloc();
+	if (sock == NULL)
+		goto out_error;
+	sock->type = mappass->sock->type;
+	sock->ops = mappass->sock->ops;
+
+	ret = inet_accept(mappass->sock, sock, O_NONBLOCK, true);
+	if (ret == -EAGAIN) {
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map = pvcalls_new_active_socket(fedata,
+					req->u.accept.id_new,
+					req->u.accept.ref,
+					req->u.accept.evtchn,
+					sock);
+	if (!map) {
+		ret = -EFAULT;
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map->sockpass = mappass;
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work(iow->wq, &iow->register_work);
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&fedata->ring, notify);
+	if (notify)
+		notify_remote_via_irq(fedata->irq);
+
+	mappass->reqcopy.cmd = 0;
 }
 
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
+	struct sockpass_mapping *mappass = sock->sk_user_data;
+
+	if (mappass == NULL)
+		return;
+
+	queue_work(mappass->wq, &mappass->register_work);
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -386,6 +460,45 @@ static int pvcalls_back_listen(struct xenbus_device *dev,
 static int pvcalls_back_accept(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *mappass;
+	int ret = -EINVAL;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	mappass = radix_tree_lookup(&fedata->socketpass_mappings,
+		req->u.accept.id);
+	up(&fedata->socket_lock);
+	if (mappass == NULL)
+		goto out_error;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		ret = -EINTR;
+		goto out_error;
+	}
+
+	mappass->reqcopy = *req;
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+	queue_work(mappass->wq, &mappass->register_work);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1

* [PATCH v6 12/18] xen/pvcalls: implement poll command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (9 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 11/18] xen/pvcalls: implement accept command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:28     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 13/18] xen/pvcalls: implement release command Stefano Stabellini
                     ` (6 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement poll on passive sockets by requesting a delayed response with
mappass->reqcopy, and replying when there is data on the passive
socket.

Poll on active sockets is left unimplemented, as specified by the
protocol: the frontend should just wait for events and check the
indexes on the indexes page.

Only support one outstanding poll (or accept) request for every passive
socket at any given time.
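
The three possible outcomes of the poll command described above can be
sketched as a small decision function (illustrative, not the kernel
code):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/*
 * Model of pvcalls_back_poll's outcomes:
 *  - the reqcopy slot is busy:       reply now with -EINTR
 *  - connections are already queued: reply now with 0
 *  - nothing pending:                keep reqcopy and defer (-1 means
 *                                    "no notification yet"; the reply
 *                                    is sent later from sk_data_ready)
 */
static int poll_outcome(bool slot_busy, bool data_pending)
{
	if (slot_busy)
		return -EINTR;	/* one accept/poll per passive socket */
	if (data_pending)
		return 0;	/* answer immediately */
	return -1;		/* delayed response */
}
```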

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 73 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 72 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 75b7b9a9..dba7bbf 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -350,11 +350,33 @@ static void __pvcalls_back_accept(struct work_struct *work)
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
 	struct sockpass_mapping *mappass = sock->sk_user_data;
+	struct pvcalls_fedata *fedata;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+	int notify;
 
 	if (mappass == NULL)
 		return;
 
-	queue_work(mappass->wq, &mappass->register_work);
+	fedata = mappass->fedata;
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd == PVCALLS_POLL) {
+		rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+		rsp->req_id = mappass->reqcopy.req_id;
+		rsp->u.poll.id = mappass->reqcopy.u.poll.id;
+		rsp->cmd = mappass->reqcopy.cmd;
+		rsp->ret = 0;
+
+		mappass->reqcopy.cmd = 0;
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&fedata->ring, notify);
+		if (notify)
+			notify_remote_via_irq(mappass->fedata->irq);
+	} else {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		queue_work(mappass->wq, &mappass->register_work);
+	}
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -505,6 +527,55 @@ static int pvcalls_back_accept(struct xenbus_device *dev,
 static int pvcalls_back_poll(struct xenbus_device *dev,
 			     struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *mappass;
+	struct xen_pvcalls_response *rsp;
+	struct inet_connection_sock *icsk;
+	struct request_sock_queue *queue;
+	unsigned long flags;
+	int ret;
+	bool data;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	mappass = radix_tree_lookup(&fedata->socketpass_mappings, req->u.poll.id);
+	up(&fedata->socket_lock);
+	if (mappass == NULL)
+		return -EINVAL;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		ret = -EINTR;
+		goto out;
+	}
+
+	mappass->reqcopy = *req;
+	icsk = inet_csk(mappass->sock->sk);
+	queue = &icsk->icsk_accept_queue;
+	data = queue->rskq_accept_head != NULL;
+	if (data) {
+		mappass->reqcopy.cmd = 0;
+		ret = 0;
+		goto out;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out:
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.poll.id = req->u.poll.id;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1

* [PATCH v6 13/18] xen/pvcalls: implement release command
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (10 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 12/18] xen/pvcalls: implement poll command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:30     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit Stefano Stabellini
                     ` (5 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Release both active and passive sockets. For active sockets, make sure
to avoid possible conflicts with the ioworker reading/writing to those
sockets concurrently. Set map->release to let the ioworker know
atomically that the socket will be released soon, then wait until the
ioworker finishes (flush_work).

Unmap indexes pages and data rings.
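
The release handshake with the ioworker can be modeled in userspace as
below (illustrative names; a direct call to the worker stands in for
flush_work, which in the kernel waits for the queued work to finish):

```c
#include <assert.h>
#include <stdatomic.h>

/*
 * Model of the release flag: the releaser sets "release", then waits
 * for the worker; the worker checks the flag before touching the
 * socket and bails out, so the socket can be freed safely.
 */
struct rel_map {
	_Atomic int release;
	int touched_after_release;
};

static void worker_pass(struct rel_map *m)
{
	if (atomic_load(&m->release) > 0) {
		atomic_store(&m->release, 0);
		return;		/* do not touch the socket any more */
	}
	m->touched_after_release = 1;
}

static void release_active(struct rel_map *m)
{
	atomic_store(&m->release, 1);
	worker_pass(m);		/* stands in for flush_work() */
}
```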

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index dba7bbf..9f4247f 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -267,12 +267,80 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_fedata *fedata,
 				       struct sock_mapping *map)
 {
+	disable_irq(map->irq);
+	if (map->sock->sk != NULL) {
+		write_lock_bh(&map->sock->sk->sk_callback_lock);
+		map->sock->sk->sk_user_data = NULL;
+		map->sock->sk->sk_data_ready = map->saved_data_ready;
+		write_unlock_bh(&map->sock->sk->sk_callback_lock);
+	}
+
+	atomic_set(&map->release, 1);
+	flush_work(&map->ioworker.register_work);
+
+	xenbus_unmap_ring_vfree(dev, map->bytes);
+	xenbus_unmap_ring_vfree(dev, (void *)map->ring);
+	unbind_from_irqhandler(map->irq, map);
+
+	sock_release(map->sock);
+	kfree(map);
+
+	return 0;
+}
+
+static int pvcalls_back_release_passive(struct xenbus_device *dev,
+					struct pvcalls_fedata *fedata,
+					struct sockpass_mapping *mappass)
+{
+	if (mappass->sock->sk != NULL) {
+		write_lock_bh(&mappass->sock->sk->sk_callback_lock);
+		mappass->sock->sk->sk_user_data = NULL;
+		mappass->sock->sk->sk_data_ready = mappass->saved_data_ready;
+		write_unlock_bh(&mappass->sock->sk->sk_callback_lock);
+	}
+	sock_release(mappass->sock);
+	flush_workqueue(mappass->wq);
+	destroy_workqueue(mappass->wq);
+	kfree(mappass);
+
 	return 0;
 }
 
 static int pvcalls_back_release(struct xenbus_device *dev,
 				struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	struct sock_mapping *map, *n;
+	struct sockpass_mapping *mappass;
+	int ret = 0;
+	struct xen_pvcalls_response *rsp;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	list_for_each_entry_safe(map, n, &fedata->socket_mappings, list) {
+		if (map->id == req->u.release.id) {
+			list_del(&map->list);
+			up(&fedata->socket_lock);
+			ret = pvcalls_back_release_active(dev, fedata, map);
+			goto out;
+		}
+	}
+	mappass = radix_tree_lookup(&fedata->socketpass_mappings,
+				    req->u.release.id);
+	if (mappass != NULL) {
+		radix_tree_delete(&fedata->socketpass_mappings, mappass->id);
+		up(&fedata->socket_lock);
+		ret = pvcalls_back_release_passive(dev, fedata, mappass);
+	} else
+		up(&fedata->socket_lock);
+
+out:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->u.release.id = req->u.release.id;
+	rsp->cmd = req->cmd;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1

* [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (11 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 13/18] xen/pvcalls: implement release command Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:32     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions Stefano Stabellini
                     ` (4 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement backend_disconnect. Call pvcalls_back_release_active on active
sockets and pvcalls_back_release_passive on passive sockets.

Implement module_exit by calling backend_disconnect on frontend
connections.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 9f4247f..71a42fc 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -807,6 +807,42 @@ static int backend_connect(struct xenbus_device *dev)
 
 static int backend_disconnect(struct xenbus_device *dev)
 {
+	struct pvcalls_fedata *fedata;
+	struct sock_mapping *map, *n;
+	struct sockpass_mapping *mappass;
+	struct radix_tree_iter iter;
+	void **slot;
+
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	list_for_each_entry_safe(map, n, &fedata->socket_mappings, list) {
+		list_del(&map->list);
+		pvcalls_back_release_active(dev, fedata, map);
+	}
+
+	radix_tree_for_each_slot(slot, &fedata->socketpass_mappings, &iter, 0) {
+		mappass = radix_tree_deref_slot(slot);
+		if (!mappass)
+			continue;
+		if (radix_tree_exception(mappass)) {
+			if (radix_tree_deref_retry(mappass))
+				slot = radix_tree_iter_retry(&iter);
+		} else {
+			radix_tree_delete(&fedata->socketpass_mappings, mappass->id);
+			pvcalls_back_release_passive(dev, fedata, mappass);
+		}
+	}
+	up(&fedata->socket_lock);
+
+	xenbus_unmap_ring_vfree(dev, fedata->sring);
+	unbind_from_irqhandler(fedata->irq, dev);
+
+	list_del(&fedata->list);
+	kfree(fedata);
+	dev_set_drvdata(&dev->dev, NULL);
+
 	return 0;
 }
 
@@ -1000,3 +1036,19 @@ static int __init pvcalls_back_init(void)
 	return 0;
 }
 module_init(pvcalls_back_init);
+
+static void __exit pvcalls_back_fin(void)
+{
+	struct pvcalls_fedata *fedata, *nfedata;
+
+	down(&pvcalls_back_global.frontends_lock);
+	list_for_each_entry_safe(fedata, nfedata, &pvcalls_back_global.frontends,
+				 list) {
+		backend_disconnect(fedata->dev);
+	}
+	up(&pvcalls_back_global.frontends_lock);
+
+	xenbus_unregister_driver(&pvcalls_back_driver);
+}
+
+module_exit(pvcalls_back_fin);
-- 
1.9.1

* [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (12 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:48     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 16/18] xen/pvcalls: implement read Stefano Stabellini
                     ` (3 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

We have one ioworker per socket. Each ioworker goes through the list of
outstanding read/write requests. Once all requests have been dealt with,
it returns.

We use one atomic counter per socket for "read" operations and one
for "write" operations to keep track of the reads/writes to do.

We also use one atomic counter ("io") per ioworker to keep track of how
many outstanding requests we have in total assigned to the ioworker. The
ioworker finishes when there are none.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 71a42fc..d59c2e4 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -96,8 +96,35 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_fedata *fedata,
 				       struct sock_mapping *map);
 
+static void pvcalls_conn_back_read(void *opaque)
+{
+}
+
+static int pvcalls_conn_back_write(struct sock_mapping *map)
+{
+	return 0;
+}
+
 static void pvcalls_back_ioworker(struct work_struct *work)
 {
+	struct pvcalls_ioworker *ioworker = container_of(work,
+		struct pvcalls_ioworker, register_work);
+	struct sock_mapping *map = container_of(ioworker, struct sock_mapping,
+		ioworker);
+
+	while (atomic_read(&map->io) > 0) {
+		if (atomic_read(&map->release) > 0) {
+			atomic_set(&map->release, 0);
+			return;
+		}
+
+		if (atomic_read(&map->read) > 0)
+			pvcalls_conn_back_read(map);
+		if (atomic_read(&map->write) > 0)
+			pvcalls_conn_back_write(map);
+
+		atomic_dec(&map->io);
+	}
 }
 
 static int pvcalls_back_socket(struct xenbus_device *dev,
-- 
1.9.1


* [PATCH v6 16/18] xen/pvcalls: implement read
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (13 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:36     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 17/18] xen/pvcalls: implement write Stefano Stabellini
                     ` (2 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When an active socket has data available, increment the io and read
counters, and schedule the ioworker.

Implement the read function by reading from the socket and writing the
data to the data ring.

Set in_error on error.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index d59c2e4..a098c7f 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -98,6 +98,81 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 
 static void pvcalls_conn_back_read(void *opaque)
 {
+	struct sock_mapping *map = (struct sock_mapping *)opaque;
+	struct msghdr msg;
+	struct kvec vec[2];
+	RING_IDX cons, prod, size, wanted, array_size, masked_prod, masked_cons;
+	int32_t error;
+	struct pvcalls_data_intf *intf = map->ring;
+	struct pvcalls_data *data = &map->data;
+	unsigned long flags;
+	int ret;
+
+	array_size = XEN_FLEX_RING_SIZE(map->ring_order);
+	cons = intf->in_cons;
+	prod = intf->in_prod;
+	error = intf->in_error;
+	/* read the indexes first, then deal with the data */
+	virt_mb();
+
+	if (error)
+		return;
+
+	size = pvcalls_queued(prod, cons, array_size);
+	if (size >= array_size)
+		return;
+	spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+	if (skb_queue_empty(&map->sock->sk->sk_receive_queue)) {
+		atomic_set(&map->read, 0);
+		spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock,
+				flags);
+		return;
+	}
+	spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags);
+	wanted = array_size - size;
+	masked_prod = pvcalls_mask(prod, array_size);
+	masked_cons = pvcalls_mask(cons, array_size);
+
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_iter.type = ITER_KVEC|WRITE;
+	msg.msg_iter.count = wanted;
+	if (masked_prod < masked_cons) {
+		vec[0].iov_base = data->in + masked_prod;
+		vec[0].iov_len = wanted;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 1;
+	} else {
+		vec[0].iov_base = data->in + masked_prod;
+		vec[0].iov_len = array_size - masked_prod;
+		vec[1].iov_base = data->in;
+		vec[1].iov_len = wanted - vec[0].iov_len;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 2;
+	}
+
+	atomic_set(&map->read, 0);
+	ret = inet_recvmsg(map->sock, &msg, wanted, MSG_DONTWAIT);
+	WARN_ON(ret > wanted);
+	if (ret == -EAGAIN) /* shouldn't happen */
+		return;
+	if (!ret)
+		ret = -ENOTCONN;
+	spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+	if (ret > 0 && !skb_queue_empty(&map->sock->sk->sk_receive_queue))
+		atomic_inc(&map->read);
+	spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags);
+
+	/* write the data, then modify the indexes */
+	virt_wmb();
+	if (ret < 0)
+		intf->in_error = ret;
+	else
+		intf->in_prod = prod + ret;
+	/* update the indexes, then notify the other end */
+	virt_wmb();
+	notify_remote_via_irq(map->irq);
+
+	return;
 }
 
 static int pvcalls_conn_back_write(struct sock_mapping *map)
@@ -170,6 +245,16 @@ static void pvcalls_sk_state_change(struct sock *sock)
 
 static void pvcalls_sk_data_ready(struct sock *sock)
 {
+	struct sock_mapping *map = sock->sk_user_data;
+	struct pvcalls_ioworker *iow;
+
+	if (map == NULL)
+		return;
+
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work(iow->wq, &iow->register_work);
 }
 
 static struct sock_mapping *pvcalls_new_active_socket(
-- 
1.9.1


* [PATCH v6 17/18] xen/pvcalls: implement write
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (14 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 16/18] xen/pvcalls: implement read Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:49     ` Juergen Gross
  2017-07-03 21:08   ` [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend Stefano Stabellini
  2017-07-04  8:05   ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Juergen Gross
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When the other end notifies us that there is data to be written
(pvcalls_back_conn_event), increment the io and write counters, and
schedule the ioworker.

Implement the write function, called by the ioworker, by reading the
data from the data ring and writing it to the socket with inet_sendmsg.

Set out_error on error.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 74 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a098c7f..07a8a2e 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -177,7 +177,66 @@ static void pvcalls_conn_back_read(void *opaque)
 
 static int pvcalls_conn_back_write(struct sock_mapping *map)
 {
-	return 0;
+	struct pvcalls_data_intf *intf = map->ring;
+	struct pvcalls_data *data = &map->data;
+	struct msghdr msg;
+	struct kvec vec[2];
+	RING_IDX cons, prod, size, array_size;
+	int ret;
+
+	cons = intf->out_cons;
+	prod = intf->out_prod;
+	/* read the indexes before dealing with the data */
+	virt_mb();
+
+	array_size = XEN_FLEX_RING_SIZE(map->ring_order);
+	size = pvcalls_queued(prod, cons, array_size);
+	if (size == 0)
+		return 0;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_flags |= MSG_DONTWAIT;
+	msg.msg_iter.type = ITER_KVEC|READ;
+	msg.msg_iter.count = size;
+	if (pvcalls_mask(prod, array_size) > pvcalls_mask(cons, array_size)) {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
+		vec[0].iov_len = size;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 1;
+	} else {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, array_size);
+		vec[0].iov_len = array_size - pvcalls_mask(cons, array_size);
+		vec[1].iov_base = data->out;
+		vec[1].iov_len = size - vec[0].iov_len;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 2;
+	}
+
+	atomic_set(&map->write, 0);
+	ret = inet_sendmsg(map->sock, &msg, size);
+	if (ret == -EAGAIN || (ret >= 0 && ret < size)) {
+		atomic_inc(&map->write);
+		atomic_inc(&map->io);
+	}
+	if (ret == -EAGAIN)
+		return ret;
+
+	/* write the data, then update the indexes */
+	virt_wmb();
+	if (ret < 0) {
+		intf->out_error = ret;
+	} else {
+		intf->out_error = 0;
+		intf->out_cons = cons + ret;
+		prod = intf->out_prod;
+	}
+	/* update the indexes, then notify the other end */
+	virt_wmb();
+	if (prod != cons + ret)
+		atomic_inc(&map->write);
+	notify_remote_via_irq(map->irq);
+
+	return ret;
 }
 
 static void pvcalls_back_ioworker(struct work_struct *work)
@@ -849,6 +908,19 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 
 static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
 {
+	struct sock_mapping *map = sock_map;
+	struct pvcalls_ioworker *iow;
+
+	if (map == NULL || map->sock == NULL || map->sock->sk == NULL ||
+		map->sock->sk->sk_user_data != map)
+		return IRQ_HANDLED;
+
+	iow = &map->ioworker;
+
+	atomic_inc(&map->write);
+	atomic_inc(&map->io);
+	queue_work(iow->wq, &iow->register_work);
+
 	return IRQ_HANDLED;
 }
 
-- 
1.9.1


* [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (15 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 17/18] xen/pvcalls: implement write Stefano Stabellini
@ 2017-07-03 21:08   ` Stefano Stabellini
  2017-07-04  7:50     ` Juergen Gross
  2017-07-04  8:05   ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Juergen Gross
  17 siblings, 1 reply; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-03 21:08 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Also add pvcalls-back to the Makefile.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/Kconfig  | 12 ++++++++++++
 drivers/xen/Makefile |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index f15bb3b7..4545561 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -196,6 +196,18 @@ config XEN_PCIDEV_BACKEND
 
 	  If in doubt, say m.
 
+config XEN_PVCALLS_BACKEND
+	bool "XEN PV Calls backend driver"
+	depends on INET && XEN && XEN_BACKEND
+	default n
+	help
+	  Experimental backend for the Xen PV Calls protocol
+	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
+	  allows PV Calls frontends to send POSIX calls to the backend,
+	  which implements them.
+
+	  If in doubt, say n.
+
 config XEN_SCSI_BACKEND
 	tristate "XEN SCSI backend driver"
 	depends on XEN && XEN_BACKEND && TARGET_CORE
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 8feab810..480b928 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
 obj-$(CONFIG_XEN_EFI)			+= efi.o
 obj-$(CONFIG_XEN_SCSI_BACKEND)		+= xen-scsiback.o
 obj-$(CONFIG_XEN_AUTO_XLATE)		+= xlate_mmu.o
+obj-$(CONFIG_XEN_PVCALLS_BACKEND)	+= pvcalls-back.o
 xen-evtchn-y				:= evtchn.o
 xen-gntdev-y				:= gntdev.o
 xen-gntalloc-y				:= gntalloc.o
-- 
1.9.1


* Re: [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend
  2017-07-03 21:08   ` [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend Stefano Stabellini
@ 2017-07-04  6:57     ` Juergen Gross
  2017-07-05 20:43       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  6:57 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> When the other end notifies us that there are commands to be read
> (pvcalls_back_event), wake up the backend thread to parse the command.
> 
> The command ring works like most other Xen rings, so use the usual
> ring macros to read and write to it. The functions implementing the
> commands are empty stubs for now.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 144 ++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 129 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index e4c2e46..9e00971 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -47,16 +47,135 @@ struct pvcalls_fedata {
>  	struct list_head socket_mappings;
>  	struct radix_tree_root socketpass_mappings;
>  	struct semaphore socket_lock;
> -	struct workqueue_struct *wq;
> -	struct work_struct register_work;
>  };
>  
> -static void pvcalls_back_work(struct work_struct *work)
> +static int pvcalls_back_socket(struct xenbus_device *dev,
> +		struct xen_pvcalls_request *req)
>  {
> +	return 0;
> +}
> +
> +static int pvcalls_back_connect(struct xenbus_device *dev,
> +				struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_release(struct xenbus_device *dev,
> +				struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_bind(struct xenbus_device *dev,
> +			     struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_listen(struct xenbus_device *dev,
> +			       struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_accept(struct xenbus_device *dev,
> +			       struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_poll(struct xenbus_device *dev,
> +			     struct xen_pvcalls_request *req)
> +{
> +	return 0;
> +}
> +
> +static int pvcalls_back_handle_cmd(struct xenbus_device *dev,
> +				   struct xen_pvcalls_request *req)
> +{
> +	int ret = 0;
> +
> +	switch (req->cmd) {
> +	case PVCALLS_SOCKET:
> +		ret = pvcalls_back_socket(dev, req);
> +		break;
> +	case PVCALLS_CONNECT:
> +		ret = pvcalls_back_connect(dev, req);
> +		break;
> +	case PVCALLS_RELEASE:
> +		ret = pvcalls_back_release(dev, req);
> +		break;
> +	case PVCALLS_BIND:
> +		ret = pvcalls_back_bind(dev, req);
> +		break;
> +	case PVCALLS_LISTEN:
> +		ret = pvcalls_back_listen(dev, req);
> +		break;
> +	case PVCALLS_ACCEPT:
> +		ret = pvcalls_back_accept(dev, req);
> +		break;
> +	case PVCALLS_POLL:
> +		ret = pvcalls_back_poll(dev, req);
> +		break;
> +	default:
> +	{
> +		struct pvcalls_fedata *fedata;
> +		struct xen_pvcalls_response *rsp;
> +
> +		fedata = dev_get_drvdata(&dev->dev);
> +		rsp = RING_GET_RESPONSE(
> +				&fedata->ring, fedata->ring.rsp_prod_pvt++);
> +		rsp->req_id = req->req_id;
> +		rsp->cmd = req->cmd;
> +		rsp->ret = -ENOTSUPP;
> +		break;
> +	}
> +	}
> +	return ret;
> +}
> +
> +static void pvcalls_back_work(struct pvcalls_fedata *fedata)
> +{
> +	int notify, notify_all = 0, more = 1;
> +	struct xen_pvcalls_request req;
> +	struct xenbus_device *dev = fedata->dev;
> +
> +	while (more) {
> +		while (RING_HAS_UNCONSUMED_REQUESTS(&fedata->ring)) {
> +			RING_COPY_REQUEST(&fedata->ring,
> +					  fedata->ring.req_cons++,
> +					  &req);
> +
> +			if (!pvcalls_back_handle_cmd(dev, &req)) {
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> +					&fedata->ring, notify);
> +				notify_all += notify;
> +			}
> +		}
> +
> +		if (notify_all) {
> +			notify_remote_via_irq(fedata->irq);
> +			notify_all = 0;
> +		}
> +
> +		RING_FINAL_CHECK_FOR_REQUESTS(&fedata->ring, more);
> +	}
>  }
>  
>  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
>  {
> +	struct xenbus_device *dev = dev_id;
> +	struct pvcalls_fedata *fedata = NULL;
> +
> +	if (dev == NULL)
> +		return IRQ_HANDLED;
> +
> +	fedata = dev_get_drvdata(&dev->dev);
> +	if (fedata == NULL)
> +		return IRQ_HANDLED;
> +
> +	pvcalls_back_work(fedata);
>  	return IRQ_HANDLED;
>  }
>  
> @@ -87,18 +206,15 @@ static int backend_connect(struct xenbus_device *dev)
>  		goto error;
>  	}
>  
> -	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> -						    pvcalls_back_event, 0,
> -						    "pvcalls-backend", dev);
> +	err = bind_interdomain_evtchn_to_irq(dev->otherend_id, evtchn);
>  	if (err < 0)
>  		goto error;
>  	fedata->irq = err;
> -
> -	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> -	if (!fedata->wq) {
> -		err = -ENOMEM;
> +	
> +	err = request_threaded_irq(fedata->irq, NULL, pvcalls_back_event,
> +				   IRQF_ONESHOT, "pvcalls-back", dev);
> +	if (err < 0)
>  		goto error;
> -	}
>  
>  	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
>  	if (err < 0)
> @@ -107,7 +223,6 @@ static int backend_connect(struct xenbus_device *dev)
>  	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
>  	fedata->dev = dev;
>  
> -	INIT_WORK(&fedata->register_work, pvcalls_back_work);
>  	INIT_LIST_HEAD(&fedata->socket_mappings);
>  	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
>  	sema_init(&fedata->socket_lock, 1);
> @@ -116,15 +231,14 @@ static int backend_connect(struct xenbus_device *dev)
>  	down(&pvcalls_back_global.frontends_lock);
>  	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
>  	up(&pvcalls_back_global.frontends_lock);
> -	queue_work(fedata->wq, &fedata->register_work);
> +
> +	pvcalls_back_work(fedata);

Is this call really necessary? I believe this is racy in case an event
is coming in at the same time.

>  
>  	return 0;
>  
>   error:
>  	if (fedata->sring != NULL)
>  		xenbus_unmap_ring_vfree(dev, fedata->sring);
> -	if (fedata->wq)
> -		destroy_workqueue(fedata->wq);
>  	unbind_from_irqhandler(fedata->irq, dev);

Is it safe to unbind the irq handler _after_ unmapping the ring?


Juergen


* Re: [PATCH v6 05/18] xen/pvcalls: connect to a frontend
  2017-07-03 21:08   ` [PATCH v6 05/18] xen/pvcalls: connect to a frontend Stefano Stabellini
@ 2017-07-04  6:59     ` Juergen Gross
  2017-07-05 20:29       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  6:59 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Introduce a per-frontend data structure named pvcalls_fedata. It
> contains pointers to the command ring, its event channel, a list of
> active sockets and a tree of passive sockets (passing sockets need to be
> looked up from the id on listen, accept and poll commands, while active
> sockets only on release).
> 
> It also has an unbound workqueue to schedule the work of parsing and
> executing commands on the command ring. socket_lock protects the two
> lists. In pvcalls_back_global, keep a list of connected frontends.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 92 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 92 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 7bce750..e4c2e46 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -33,9 +33,101 @@ struct pvcalls_back_global {
>  	struct semaphore frontends_lock;
>  } pvcalls_back_global;
>  
> +/*
> + * Per-frontend data structure. It contains pointers to the command
> + * ring, its event channel, a list of active sockets and a tree of
> + * passive sockets.
> + */
> +struct pvcalls_fedata {
> +	struct list_head list;
> +	struct xenbus_device *dev;
> +	struct xen_pvcalls_sring *sring;
> +	struct xen_pvcalls_back_ring ring;
> +	int irq;
> +	struct list_head socket_mappings;
> +	struct radix_tree_root socketpass_mappings;
> +	struct semaphore socket_lock;
> +	struct workqueue_struct *wq;
> +	struct work_struct register_work;
> +};
> +
> +static void pvcalls_back_work(struct work_struct *work)
> +{
> +}
> +
> +static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> +{
> +	return IRQ_HANDLED;
> +}
> +
>  static int backend_connect(struct xenbus_device *dev)
>  {
> +	int err, evtchn;
> +	grant_ref_t ring_ref;
> +	struct pvcalls_fedata *fedata = NULL;
> +
> +	fedata = kzalloc(sizeof(struct pvcalls_fedata), GFP_KERNEL);
> +	if (!fedata)
> +		return -ENOMEM;
> +
> +	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
> +			   &evtchn);
> +	if (err != 1) {
> +		err = -EINVAL;
> +		xenbus_dev_fatal(dev, err, "reading %s/event-channel",
> +				 dev->otherend);
> +		goto error;
> +	}
> +
> +	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
> +	if (err != 1) {
> +		err = -EINVAL;
> +		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
> +				 dev->otherend);
> +		goto error;
> +	}
> +
> +	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> +						    pvcalls_back_event, 0,
> +						    "pvcalls-backend", dev);
> +	if (err < 0)
> +		goto error;
> +	fedata->irq = err;
> +
> +	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> +	if (!fedata->wq) {
> +		err = -ENOMEM;
> +		goto error;
> +	}
> +
> +	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
> +	if (err < 0)
> +		goto error;
> +
> +	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
> +	fedata->dev = dev;
> +
> +	INIT_WORK(&fedata->register_work, pvcalls_back_work);
> +	INIT_LIST_HEAD(&fedata->socket_mappings);
> +	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
> +	sema_init(&fedata->socket_lock, 1);
> +	dev_set_drvdata(&dev->dev, fedata);
> +
> +	down(&pvcalls_back_global.frontends_lock);
> +	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
> +	up(&pvcalls_back_global.frontends_lock);
> +	queue_work(fedata->wq, &fedata->register_work);
> +
>  	return 0;
> +
> + error:
> +	if (fedata->sring != NULL)
> +		xenbus_unmap_ring_vfree(dev, fedata->sring);
> +	if (fedata->wq)
> +		destroy_workqueue(fedata->wq);
> +	unbind_from_irqhandler(fedata->irq, dev);

fedata->irq might have not been set and can be zero here. irq 0 is
a valid irq, I think.


Juergen


* Re: [PATCH v6 08/18] xen/pvcalls: implement connect command
  2017-07-03 21:08   ` [PATCH v6 08/18] xen/pvcalls: implement connect command Stefano Stabellini
@ 2017-07-04  7:14     ` Juergen Gross
  2017-07-05 21:09       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:14 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Allocate a socket. Keep track of socket <-> ring mappings with a new data
> structure, called sock_mapping. Implement the connect command by calling
> inet_stream_connect, and mapping the new indexes page and data ring.
> Allocate a workqueue and a work_struct, called ioworker, to perform
> reads and writes to the socket.
> 
> When an active socket is closed (sk_state_change), set in_error to
> -ENOTCONN and notify the other end, as specified by the protocol.
> 
> sk_data_ready and pvcalls_back_ioworker will be implemented later.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 174 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 174 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 53fd908..1bc2620 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -54,6 +54,39 @@ struct pvcalls_fedata {
>  	struct semaphore socket_lock;
>  };
>  
> +struct pvcalls_ioworker {
> +	struct work_struct register_work;
> +	struct workqueue_struct *wq;
> +};
> +
> +struct sock_mapping {
> +	struct list_head list;
> +	struct pvcalls_fedata *fedata;
> +	struct socket *sock;
> +	uint64_t id;
> +	grant_ref_t ref;
> +	struct pvcalls_data_intf *ring;
> +	void *bytes;
> +	struct pvcalls_data data;
> +	uint32_t ring_order;
> +	int irq;
> +	atomic_t read;
> +	atomic_t write;
> +	atomic_t io;
> +	atomic_t release;
> +	void (*saved_data_ready)(struct sock *sk);
> +	struct pvcalls_ioworker ioworker;
> +};
> +
> +static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
> +static int pvcalls_back_release_active(struct xenbus_device *dev,
> +				       struct pvcalls_fedata *fedata,
> +				       struct sock_mapping *map);
> +
> +static void pvcalls_back_ioworker(struct work_struct *work)
> +{
> +}
> +
>  static int pvcalls_back_socket(struct xenbus_device *dev,
>  		struct xen_pvcalls_request *req)
>  {
> @@ -82,9 +115,145 @@ static int pvcalls_back_socket(struct xenbus_device *dev,
>  	return 0;
>  }
>  
> +static void pvcalls_sk_state_change(struct sock *sock)
> +{
> +	struct sock_mapping *map = sock->sk_user_data;
> +	struct pvcalls_data_intf *intf;
> +
> +	if (map == NULL)
> +		return;
> +
> +	intf = map->ring;
> +	intf->in_error = -ENOTCONN;
> +	notify_remote_via_irq(map->irq);
> +}
> +
> +static void pvcalls_sk_data_ready(struct sock *sock)
> +{
> +}
> +
> +static struct sock_mapping *pvcalls_new_active_socket(
> +		struct pvcalls_fedata *fedata,
> +		uint64_t id,
> +		grant_ref_t ref,
> +		uint32_t evtchn,
> +		struct socket *sock)
> +{
> +	int ret;
> +	struct sock_mapping *map;
> +	void *page;
> +
> +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> +	if (map == NULL)
> +		return NULL;
> +
> +	map->fedata = fedata;
> +	map->sock = sock;
> +	map->id = id;
> +	map->ref = ref;
> +
> +	ret = xenbus_map_ring_valloc(fedata->dev, &ref, 1, &page);
> +	if (ret < 0)
> +		goto out;
> +	map->ring = page;
> +	map->ring_order = map->ring->ring_order;
> +	/* first read the order, then map the data ring */
> +	virt_rmb();
> +	if (map->ring_order > MAX_RING_ORDER) {
> +		pr_warn("%s frontend requested ring_order %u, which is > MAX (%u)\n",
> +				__func__, map->ring_order, MAX_RING_ORDER);
> +		goto out;
> +	}
> +	ret = xenbus_map_ring_valloc(fedata->dev, map->ring->ref,
> +				     (1 << map->ring_order), &page);
> +	if (ret < 0)
> +		goto out;
> +	map->bytes = page;
> +
> +	ret = bind_interdomain_evtchn_to_irqhandler(fedata->dev->otherend_id,
> +						    evtchn,
> +						    pvcalls_back_conn_event,
> +						    0,
> +						    "pvcalls-backend",
> +						    map);
> +	if (ret < 0)
> +		goto out;
> +	map->irq = ret;
> +
> +	map->data.in = map->bytes;
> +	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
> +	
> +	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
> +	if (!map->ioworker.wq)
> +		goto out;
> +	atomic_set(&map->io, 1);
> +	INIT_WORK(&map->ioworker.register_work,	pvcalls_back_ioworker);
> +
> +	down(&fedata->socket_lock);
> +	list_add_tail(&map->list, &fedata->socket_mappings);
> +	up(&fedata->socket_lock);
> +
> +	write_lock_bh(&map->sock->sk->sk_callback_lock);
> +	map->saved_data_ready = map->sock->sk->sk_data_ready;
> +	map->sock->sk->sk_user_data = map;
> +	map->sock->sk->sk_data_ready = pvcalls_sk_data_ready;
> +	map->sock->sk->sk_state_change = pvcalls_sk_state_change;
> +	write_unlock_bh(&map->sock->sk->sk_callback_lock);
> +
> +	return map;
> +out:
> +	down(&fedata->socket_lock);
> +	list_del(&map->list);
> +	pvcalls_back_release_active(fedata->dev, fedata, map);
> +	up(&fedata->socket_lock);
> +	return NULL;
> +}
> +
>  static int pvcalls_back_connect(struct xenbus_device *dev,
>  				struct xen_pvcalls_request *req)
>  {
> +	struct pvcalls_fedata *fedata;
> +	int ret = -EINVAL;
> +	struct socket *sock;
> +	struct sock_mapping *map;
> +	struct xen_pvcalls_response *rsp;
> +
> +	fedata = dev_get_drvdata(&dev->dev);
> +
> +	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
> +	if (ret < 0)
> +		goto out;
> +	ret = inet_stream_connect(sock, (struct sockaddr *)&req->u.connect.addr,
> +				  req->u.connect.len, req->u.connect.flags);

Shouldn't there be some kind of validation, e.g. whether
req->u.connect.len isn't larger than sizeof(req->u.connect.addr) ?

Are all flags really valid to specify? I'd like to have at least a
comment stating that everything is safe without validation.


Juergen


* Re: [PATCH v6 09/18] xen/pvcalls: implement bind command
  2017-07-03 21:08   ` [PATCH v6 09/18] xen/pvcalls: implement bind command Stefano Stabellini
@ 2017-07-04  7:24     ` Juergen Gross
  2017-07-05 21:18       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:24 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Allocate a socket. Track the allocated passive sockets with a new data
> structure named sockpass_mapping. It contains an unbound workqueue to
> schedule delayed work for the accept and poll commands. It also has a
> reqcopy field to be used to store a copy of a request for delayed work.
> Reads/writes to it are protected by a lock (the "copy_lock" spinlock).
> Initialize the workqueue in pvcalls_back_bind.
> 
> Implement the bind command with inet_bind.
> 
> The pass_sk_data_ready event handler will be added later.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 87 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 1bc2620..dae91fb 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -78,6 +78,18 @@ struct sock_mapping {
>  	struct pvcalls_ioworker ioworker;
>  };
>  
> +struct sockpass_mapping {
> +	struct list_head list;
> +	struct pvcalls_fedata *fedata;
> +	struct socket *sock;
> +	uint64_t id;
> +	struct xen_pvcalls_request reqcopy;
> +	spinlock_t copy_lock;
> +	struct workqueue_struct *wq;
> +	struct work_struct register_work;
> +	void (*saved_data_ready)(struct sock *sk);
> +};
> +
>  static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
>  static int pvcalls_back_release_active(struct xenbus_device *dev,
>  				       struct pvcalls_fedata *fedata,
> @@ -263,9 +275,84 @@ static int pvcalls_back_release(struct xenbus_device *dev,
>  	return 0;
>  }
>  
> +static void __pvcalls_back_accept(struct work_struct *work)
> +{
> +}
> +
> +static void pvcalls_pass_sk_data_ready(struct sock *sock)
> +{
> +}
> +
>  static int pvcalls_back_bind(struct xenbus_device *dev,
>  			     struct xen_pvcalls_request *req)
>  {
> +	struct pvcalls_fedata *fedata;
> +	int ret, err;
> +	struct socket *sock;

Get rid of sock, ...

> +	struct sockpass_mapping *map;
> +	struct xen_pvcalls_response *rsp;
> +
> +	fedata = dev_get_drvdata(&dev->dev);
> +
> +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> +	if (map == NULL) {
> +		ret = -ENOMEM;
> +		goto out;
> +	}
> +
> +	INIT_WORK(&map->register_work, __pvcalls_back_accept);
> +	spin_lock_init(&map->copy_lock);
> +	map->wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);
> +	if (!map->wq) {
> +		ret = -ENOMEM;
> +		kfree(map);

Move kfree(map) to the exit path, ...

> +		goto out;
> +	}
> +
> +	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);

use &map->sock here, ...

> +	if (ret < 0) {
> +		destroy_workqueue(map->wq);

move destroy_workqueue() to the exit path, ...

> +		kfree(map);
> +		goto out;
> +	}
> +
> +	ret = inet_bind(sock, (struct sockaddr *)&req->u.bind.addr,
> +			req->u.bind.len);
> +	if (ret < 0) {
> +		sock_release(sock);

and sock_release, too, ...

> +		destroy_workqueue(map->wq);
> +		kfree(map);
> +		goto out;
> +	}
> +
> +	map->fedata = fedata;
> +	map->sock = sock;
> +	map->id = req->u.bind.id;
> +
> +	down(&fedata->socket_lock);
> +	err = radix_tree_insert(&fedata->socketpass_mappings, map->id,
> +				map);

Use ret instead of err?

> +	up(&fedata->socket_lock);
> +	if (err) {
> +		ret = err;
> +		sock_release(sock);
> +		destroy_workqueue(map->wq);
> +		kfree(map);
> +		goto out;
> +	}
> +
> +	write_lock_bh(&sock->sk->sk_callback_lock);
> +	map->saved_data_ready = sock->sk->sk_data_ready;
> +	sock->sk->sk_user_data = map;
> +	sock->sk->sk_data_ready = pvcalls_pass_sk_data_ready;
> +	write_unlock_bh(&sock->sk->sk_callback_lock);
> +
> +out:
> +	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
> +	rsp->req_id = req->req_id;
> +	rsp->cmd = req->cmd;
> +	rsp->u.bind.id = req->u.bind.id;
> +	rsp->ret = ret;

... and have a common error exit:
+	if (ret) {
+		if (map && map->sock)
+			sock_release(map->sock);
+		if (map && map->wq)
+			destroy_workqueue(map->wq);
+		kfree(map);
+	}
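In userspace terms, the single-exit pattern suggested above looks like this (a sketch only: malloc/calloc/free stand in for kzalloc/alloc_workqueue/sock_create and their release counterparts, and fail_at is an artificial knob to exercise each error path):

```c
#include <assert.h>
#include <stdlib.h>

/* Each step that fails sets ret and jumps to the single exit, where
 * NULL checks free exactly what was allocated so far. fail_at selects
 * the failing step (0 = success). */
struct sockpass_demo {
	void *wq;
	void *sock;
};

static int bind_demo(int fail_at)
{
	int ret = 0;
	struct sockpass_demo *map = NULL;

	map = calloc(1, sizeof(*map));
	if (map == NULL || fail_at == 1) { ret = -1; goto out; }

	map->wq = malloc(1);		/* alloc_workqueue() stand-in */
	if (map->wq == NULL || fail_at == 2) { ret = -2; goto out; }

	map->sock = malloc(1);		/* sock_create() stand-in */
	if (map->sock == NULL || fail_at == 3) { ret = -3; goto out; }

out:
	if (ret) {			/* common error exit */
		if (map && map->sock)
			free(map->sock);
		if (map && map->wq)
			free(map->wq);
		free(map);		/* free(NULL), like kfree(NULL), is a no-op */
	} else {
		/* success: the real code keeps map around; freed here
		 * only to keep the demo leak-free */
		free(map->sock);
		free(map->wq);
		free(map);
	}
	return ret;
}
```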


Juergen

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 11/18] xen/pvcalls: implement accept command
  2017-07-03 21:08   ` [PATCH v6 11/18] xen/pvcalls: implement accept command Stefano Stabellini
@ 2017-07-04  7:26     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:26 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Implement the accept command by calling inet_accept. To avoid blocking
> in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
> scheduled on sk_data_ready (for a passive socket, it means that there
> are connections to accept).
> 
> Use the reqcopy field to store the request. Accept the new socket from
> the delayed work function, create a new sock_mapping for it, map
> the indexes page and data ring, and reply to the other end. Allocate an
> ioworker for the socket.
> 
> Only support one outstanding blocking accept request for every socket at
> any time.
> 
> Add a field to sock_mapping to remember the passive socket from which an
> active socket was created.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 12/18] xen/pvcalls: implement poll command
  2017-07-03 21:08   ` [PATCH v6 12/18] xen/pvcalls: implement poll command Stefano Stabellini
@ 2017-07-04  7:28     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:28 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Implement poll on passive sockets by requesting a delayed response with
> mappass->reqcopy, and reply back when there is data on the passive
> socket.
> 
> Poll on active sockets is intentionally unimplemented, as per the spec: the frontend
> should just wait for events and check the indexes on the indexes page.
> 
> Only support one outstanding poll (or accept) request for every passive
> socket at any given time.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 13/18] xen/pvcalls: implement release command
  2017-07-03 21:08   ` [PATCH v6 13/18] xen/pvcalls: implement release command Stefano Stabellini
@ 2017-07-04  7:30     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:30 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Release both active and passive sockets. For active sockets, make sure
> to avoid possible conflicts with the ioworker reading/writing to those
> sockets concurrently. Set map->release to let the ioworker know
> atomically that the socket will be released soon, then wait until the
> ioworker finishes (flush_work).
> 
> Unmap indexes pages and data rings.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit
  2017-07-03 21:08   ` [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit Stefano Stabellini
@ 2017-07-04  7:32     ` Juergen Gross
  2017-07-05 21:22       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:32 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Implement backend_disconnect. Call pvcalls_back_release_active on active
> sockets and pvcalls_back_release_passive on passive sockets.
> 
> Implement module_exit by calling backend_disconnect on frontend
> connections.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 52 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 9f4247f..71a42fc 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -807,6 +807,42 @@ static int backend_connect(struct xenbus_device *dev)
>  
>  static int backend_disconnect(struct xenbus_device *dev)
>  {
> +	struct pvcalls_fedata *fedata;
> +	struct sock_mapping *map, *n;
> +	struct sockpass_mapping *mappass;
> +	struct radix_tree_iter iter;
> +	void **slot;
> +
> +
> +	fedata = dev_get_drvdata(&dev->dev);
> +
> +	down(&fedata->socket_lock);
> +	list_for_each_entry_safe(map, n, &fedata->socket_mappings, list) {
> +		list_del(&map->list);
> +		pvcalls_back_release_active(dev, fedata, map);
> +	}
> +
> +	radix_tree_for_each_slot(slot, &fedata->socketpass_mappings, &iter, 0) {
> +		mappass = radix_tree_deref_slot(slot);
> +		if (!mappass)
> +			continue;
> +		if (radix_tree_exception(mappass)) {
> +			if (radix_tree_deref_retry(mappass))
> +				slot = radix_tree_iter_retry(&iter);
> +		} else {
> +			radix_tree_delete(&fedata->socketpass_mappings, mappass->id);
> +			pvcalls_back_release_passive(dev, fedata, mappass);
> +		}
> +	}
> +	up(&fedata->socket_lock);
> +
> +	xenbus_unmap_ring_vfree(dev, fedata->sring);
> +	unbind_from_irqhandler(fedata->irq, dev);

Swap above two lines to avoid irq being handled after releasing
ring?
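The hazard behind this suggestion can be illustrated with a small userspace sketch (illustrative only: handler_enabled stands in for the bound irq handler, ring for the mapped sring; the return value says whether a straggler event arriving mid-teardown would touch freed memory):

```c
#include <assert.h>
#include <stdlib.h>

/* Teardown-order demo: if the ring is unmapped while the irq is still
 * bound, a late event can dereference freed memory. Unbinding first
 * is always safe. Returns 1 when the dangerous window exists. */
static int teardown(int unbind_first)
{
	int *ring = calloc(1, sizeof(*ring));
	int handler_enabled = 1;

	if (unbind_first) {
		handler_enabled = 0;	/* unbind_from_irqhandler() ... */
		free(ring);		/* ... then xenbus_unmap_ring_vfree() */
	} else {
		free(ring);		/* ring already gone ... */
		/* ... but the handler is still live and could run here */
	}
	/* Would an event arriving right now hit freed memory? */
	return handler_enabled;
}
```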


Juergen

* Re: [PATCH v6 16/18] xen/pvcalls: implement read
  2017-07-03 21:08   ` [PATCH v6 16/18] xen/pvcalls: implement read Stefano Stabellini
@ 2017-07-04  7:36     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:36 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> When an active socket has data available, increment the io and read
> counters, and schedule the ioworker.
> 
> Implement the read function by reading from the socket, writing the data
> to the data ring.
> 
> Set in_error on error.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions
  2017-07-03 21:08   ` [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions Stefano Stabellini
@ 2017-07-04  7:48     ` Juergen Gross
  2017-07-05 21:26       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:48 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> We have one ioworker per socket. Each ioworker goes through the list of
> outstanding read/write requests. Once all requests have been dealt with,
> it returns.
> 
> We use one atomic counter per socket for "read" operations and one
> for "write" operations to keep track of the reads/writes to do.
> 
> We also use one atomic counter ("io") per ioworker to keep track of how
> many outstanding requests we have in total assigned to the ioworker. The
> ioworker finishes when there are none.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 27 +++++++++++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 71a42fc..d59c2e4 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -96,8 +96,35 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
>  				       struct pvcalls_fedata *fedata,
>  				       struct sock_mapping *map);
>  
> +static void pvcalls_conn_back_read(void *opaque)
> +{
> +}
> +
> +static int pvcalls_conn_back_write(struct sock_mapping *map)
> +{
> +	return 0;
> +}

Any reason for letting this function return int? I haven't spotted any
use of the return value in this or any later patch.
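As an aside, the counter scheme the commit message describes can be sketched in userspace C (a sketch under assumptions: single-threaded stand-ins for the scheduled work, with reads_done/writes_done counting what pvcalls_conn_back_read/write would process):

```c
#include <assert.h>
#include <stdatomic.h>

/* "rd"/"wr" count pending read/write operations for a socket, "io"
 * counts total outstanding work; the ioworker loops until io drops
 * to zero, then returns. */
static atomic_int io, rd, wr;
static int reads_done, writes_done;

static void schedule_read(void)
{
	atomic_fetch_add(&rd, 1);
	atomic_fetch_add(&io, 1);
}

static void schedule_write(void)
{
	atomic_fetch_add(&wr, 1);
	atomic_fetch_add(&io, 1);
}

static void ioworker(void)
{
	while (atomic_load(&io) > 0) {
		if (atomic_load(&rd) > 0) {
			atomic_fetch_sub(&rd, 1);
			reads_done++;		/* pvcalls_conn_back_read() */
			atomic_fetch_sub(&io, 1);
		}
		if (atomic_load(&wr) > 0) {
			atomic_fetch_sub(&wr, 1);
			writes_done++;		/* pvcalls_conn_back_write() */
			atomic_fetch_sub(&io, 1);
		}
	}
}
```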


Juergen

* Re: [PATCH v6 17/18] xen/pvcalls: implement write
  2017-07-03 21:08   ` [PATCH v6 17/18] xen/pvcalls: implement write Stefano Stabellini
@ 2017-07-04  7:49     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:49 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> When the other end notifies us that there is data to be written
> (pvcalls_back_conn_event), increment the io and write counters, and
> schedule the ioworker.
> 
> Implement the write function called by ioworker by reading the data from
> the data ring, writing it to the socket by calling inet_sendmsg.
> 
> Set out_error on error.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend
  2017-07-03 21:08   ` [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend Stefano Stabellini
@ 2017-07-04  7:50     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:50 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Also add pvcalls-back to the Makefile.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 10/18] xen/pvcalls: implement listen command
  2017-07-03 21:08   ` [PATCH v6 10/18] xen/pvcalls: implement listen command Stefano Stabellini
@ 2017-07-04  7:51     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:51 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Call inet_listen to implement the listen command.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 07/18] xen/pvcalls: implement socket command
  2017-07-03 21:08   ` [PATCH v6 07/18] xen/pvcalls: implement socket command Stefano Stabellini
@ 2017-07-04  7:52     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  7:52 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Just reply with success to the other end for now. Delay the allocation
> of the actual socket to bind and/or connect.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 04/18] xen/pvcalls: xenbus state handling
  2017-07-03 21:08   ` [PATCH v6 04/18] xen/pvcalls: xenbus state handling Stefano Stabellini
@ 2017-07-04  8:03     ` Juergen Gross
  2017-07-05 20:24       ` Stefano Stabellini
  0 siblings, 1 reply; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  8:03 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Introduce the code to handle xenbus state changes.
> 
> Implement the probe function for the pvcalls backend. Write the
> supported versions, max-page-order and function-calls nodes to xenstore,
> as required by the protocol.
> 
> Introduce stub functions for disconnecting/connecting to a frontend.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 152 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 152 insertions(+)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 9044cf2..7bce750 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -25,20 +25,172 @@
>  #include <xen/xenbus.h>
>  #include <xen/interface/io/pvcalls.h>
>  
> +#define PVCALLS_VERSIONS "1"
> +#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> +
>  struct pvcalls_back_global {
>  	struct list_head frontends;
>  	struct semaphore frontends_lock;
>  } pvcalls_back_global;
>  
> +static int backend_connect(struct xenbus_device *dev)
> +{
> +	return 0;
> +}
> +
> +static int backend_disconnect(struct xenbus_device *dev)
> +{
> +	return 0;
> +}
> +
>  static int pvcalls_back_probe(struct xenbus_device *dev,
>  			      const struct xenbus_device_id *id)
>  {
> +	int err, abort;
> +	struct xenbus_transaction xbt;
> +
> +again:
> +	abort = 1;
> +
> +	err = xenbus_transaction_start(&xbt);
> +	if (err) {
> +		pr_warn("%s cannot create xenstore transaction\n", __func__);
> +		return err;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "versions", "%s",
> +			    PVCALLS_VERSIONS);
> +	if (err) {
> +		pr_warn("%s write out 'version' failed\n", __func__);

s/version/versions/ ?

> +		goto abort;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "max-page-order", "%u",
> +			    MAX_RING_ORDER);
> +	if (err) {
> +		pr_warn("%s write out 'max-page-order' failed\n", __func__);
> +		goto abort;
> +	}
> +
> +	err = xenbus_printf(xbt, dev->nodename, "function-calls",
> +			    XENBUS_FUNCTIONS_CALLS);
> +	if (err) {
> +		pr_warn("%s write out 'function-calls' failed\n", __func__);
> +		goto abort;
> +	}
> +
> +	abort = 0;
> +abort:
> +	err = xenbus_transaction_end(xbt, abort);
> +	if (err) {
> +		if (err == -EAGAIN && !abort)

Hmm, while I don't think xenbus_transaction_end() will ever
return -EAGAIN in the abort case, I'm not sure you should limit
the retry loop to the non-abort case.

> +			goto again;
> +		pr_warn("%s cannot complete xenstore transaction\n", __func__);
> +		return err;
> +	}
> +
> +	xenbus_switch_state(dev, XenbusStateInitWait);

I don't think you should switch state in case of abort set, no?


Juergen

* Re: [PATCH v6 03/18] xen/pvcalls: initialize the module and register the xenbus backend
  2017-07-03 21:08   ` [PATCH v6 03/18] xen/pvcalls: initialize the module and register the " Stefano Stabellini
@ 2017-07-04  8:04     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  8:04 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Keep a list of connected frontends. Use a semaphore to protect list
> accesses.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend
  2017-07-03 21:08   ` [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
@ 2017-07-04  8:04     ` Juergen Gross
  0 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  8:04 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini

On 03/07/17 23:08, Stefano Stabellini wrote:
> Introduce a xenbus backend for the pvcalls protocol, as defined by
> https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.
> 
> This patch only adds the stubs, the code will be added by the following
> patches.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 01/18] xen: introduce the pvcalls interface header
  2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
                     ` (16 preceding siblings ...)
  2017-07-03 21:08   ` [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend Stefano Stabellini
@ 2017-07-04  8:05   ` Juergen Gross
  17 siblings, 0 replies; 44+ messages in thread
From: Juergen Gross @ 2017-07-04  8:05 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini, konrad.wilk

On 03/07/17 23:08, Stefano Stabellini wrote:
> Introduce the C header file which defines the PV Calls interface. It is
> imported from xen/include/public/io/pvcalls.h.
> 
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Thanks,

Juergen

* Re: [PATCH v6 04/18] xen/pvcalls: xenbus state handling
  2017-07-04  8:03     ` Juergen Gross
@ 2017-07-05 20:24       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 20:24 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

Many thanks for all the reviews!

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > Introduce the code to handle xenbus state changes.
> > 
> > Implement the probe function for the pvcalls backend. Write the
> > supported versions, max-page-order and function-calls nodes to xenstore,
> > as required by the protocol.
> > 
> > Introduce stub functions for disconnecting/connecting to a frontend.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 152 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 152 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 9044cf2..7bce750 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -25,20 +25,172 @@
> >  #include <xen/xenbus.h>
> >  #include <xen/interface/io/pvcalls.h>
> >  
> > +#define PVCALLS_VERSIONS "1"
> > +#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> > +
> >  struct pvcalls_back_global {
> >  	struct list_head frontends;
> >  	struct semaphore frontends_lock;
> >  } pvcalls_back_global;
> >  
> > +static int backend_connect(struct xenbus_device *dev)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int backend_disconnect(struct xenbus_device *dev)
> > +{
> > +	return 0;
> > +}
> > +
> >  static int pvcalls_back_probe(struct xenbus_device *dev,
> >  			      const struct xenbus_device_id *id)
> >  {
> > +	int err, abort;
> > +	struct xenbus_transaction xbt;
> > +
> > +again:
> > +	abort = 1;
> > +
> > +	err = xenbus_transaction_start(&xbt);
> > +	if (err) {
> > +		pr_warn("%s cannot create xenstore transaction\n", __func__);
> > +		return err;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "versions", "%s",
> > +			    PVCALLS_VERSIONS);
> > +	if (err) {
> > +		pr_warn("%s write out 'version' failed\n", __func__);
> 
> s/version/versions/ ?

OK


> > +		goto abort;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "max-page-order", "%u",
> > +			    MAX_RING_ORDER);
> > +	if (err) {
> > +		pr_warn("%s write out 'max-page-order' failed\n", __func__);
> > +		goto abort;
> > +	}
> > +
> > +	err = xenbus_printf(xbt, dev->nodename, "function-calls",
> > +			    XENBUS_FUNCTIONS_CALLS);
> > +	if (err) {
> > +		pr_warn("%s write out 'function-calls' failed\n", __func__);
> > +		goto abort;
> > +	}
> > +
> > +	abort = 0;
> > +abort:
> > +	err = xenbus_transaction_end(xbt, abort);
> > +	if (err) {
> > +		if (err == -EAGAIN && !abort)
> 
> Hmm, while I don't think xenbus_transaction_end() will ever
> return -EAGAIN in the abort case I'm not sure you should limit
> the retry loop to the non-abort case.

Realistically, if we want to abort and get -EAGAIN, the best thing to do
is to get out (current behavior). The other option would be to keep
issuing xenbus_transaction_end(xbt, 1) in a loop until it succeeds, but
it seems more fragile to me.
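The retry pattern under discussion can be sketched in userspace C (a sketch only: transaction_end_stub stands in for xenbus_transaction_end() and artificially returns -EAGAIN on the first two commits; force_abort simulates a failed xenbus_printf()):

```c
#include <assert.h>
#include <errno.h>

/* end_calls counts commit attempts across retries. */
static int end_calls;

static int transaction_end_stub(int abort)
{
	end_calls++;
	if (!abort && end_calls < 3)
		return -EAGAIN;		/* conflict: caller must retry */
	return 0;
}

static int probe_demo(int force_abort)
{
	int err, abort;

again:
	abort = 1;
	/* ... the xenbus_printf() calls would go here; on failure the
	 * real code does goto abort, leaving abort = 1 ... */
	if (!force_abort)
		abort = 0;

	err = transaction_end_stub(abort);
	if (err) {
		if (err == -EAGAIN && !abort)
			goto again;	/* restart the whole transaction */
		return err;
	}
	/* switch to XenbusStateInitWait only on the non-abort path */
	return abort ? -1 : 0;
}
```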


> > +			goto again;
> > +		pr_warn("%s cannot complete xenstore transaction\n", __func__);
> > +		return err;
> > +	}
> > +
> > +	xenbus_switch_state(dev, XenbusStateInitWait);
> 
> I don't think you should switch state in case of abort set, no?

Good point, I'll change that.

* Re: [PATCH v6 05/18] xen/pvcalls: connect to a frontend
  2017-07-04  6:59     ` Juergen Gross
@ 2017-07-05 20:29       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 20:29 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > Introduce a per-frontend data structure named pvcalls_fedata. It
> > contains pointers to the command ring, its event channel, a list of
> > active sockets and a tree of passive sockets (passing sockets need to be
> > looked up from the id on listen, accept and poll commands, while active
> > sockets only on release).
> > 
> > It also has an unbound workqueue to schedule the work of parsing and
> > executing commands on the command ring. socket_lock protects the two
> > lists. In pvcalls_back_global, keep a list of connected frontends.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 92 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 92 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 7bce750..e4c2e46 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -33,9 +33,101 @@ struct pvcalls_back_global {
> >  	struct semaphore frontends_lock;
> >  } pvcalls_back_global;
> >  
> > +/*
> > + * Per-frontend data structure. It contains pointers to the command
> > + * ring, its event channel, a list of active sockets and a tree of
> > + * passive sockets.
> > + */
> > +struct pvcalls_fedata {
> > +	struct list_head list;
> > +	struct xenbus_device *dev;
> > +	struct xen_pvcalls_sring *sring;
> > +	struct xen_pvcalls_back_ring ring;
> > +	int irq;
> > +	struct list_head socket_mappings;
> > +	struct radix_tree_root socketpass_mappings;
> > +	struct semaphore socket_lock;
> > +	struct workqueue_struct *wq;
> > +	struct work_struct register_work;
> > +};
> > +
> > +static void pvcalls_back_work(struct work_struct *work)
> > +{
> > +}
> > +
> > +static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> > +{
> > +	return IRQ_HANDLED;
> > +}
> > +
> >  static int backend_connect(struct xenbus_device *dev)
> >  {
> > +	int err, evtchn;
> > +	grant_ref_t ring_ref;
> > +	struct pvcalls_fedata *fedata = NULL;
> > +
> > +	fedata = kzalloc(sizeof(struct pvcalls_fedata), GFP_KERNEL);
> > +	if (!fedata)
> > +		return -ENOMEM;
> > +
> > +	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
> > +			   &evtchn);
> > +	if (err != 1) {
> > +		err = -EINVAL;
> > +		xenbus_dev_fatal(dev, err, "reading %s/event-channel",
> > +				 dev->otherend);
> > +		goto error;
> > +	}
> > +
> > +	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
> > +	if (err != 1) {
> > +		err = -EINVAL;
> > +		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
> > +				 dev->otherend);
> > +		goto error;
> > +	}
> > +
> > +	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> > +						    pvcalls_back_event, 0,
> > +						    "pvcalls-backend", dev);
> > +	if (err < 0)
> > +		goto error;
> > +	fedata->irq = err;
> > +
> > +	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> > +	if (!fedata->wq) {
> > +		err = -ENOMEM;
> > +		goto error;
> > +	}
> > +
> > +	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
> > +	if (err < 0)
> > +		goto error;
> > +
> > +	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
> > +	fedata->dev = dev;
> > +
> > +	INIT_WORK(&fedata->register_work, pvcalls_back_work);
> > +	INIT_LIST_HEAD(&fedata->socket_mappings);
> > +	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
> > +	sema_init(&fedata->socket_lock, 1);
> > +	dev_set_drvdata(&dev->dev, fedata);
> > +
> > +	down(&pvcalls_back_global.frontends_lock);
> > +	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
> > +	up(&pvcalls_back_global.frontends_lock);
> > +	queue_work(fedata->wq, &fedata->register_work);
> > +
> >  	return 0;
> > +
> > + error:
> > +	if (fedata->sring != NULL)
> > +		xenbus_unmap_ring_vfree(dev, fedata->sring);
> > +	if (fedata->wq)
> > +		destroy_workqueue(fedata->wq);
> > +	unbind_from_irqhandler(fedata->irq, dev);
> 
> fedata->irq might have not been set and can be zero here. irq 0 is
> a valid irq, I think.

You are right. IRQs cannot be negative, so I'll add a fedata->irq = -1;
at the beginning of the function and an if (fedata->irq >= 0) here.
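The sentinel idea can be sketched in userspace C (illustrative stand-ins: unbind_stub for unbind_from_irqhandler(), bind_result < 0 simulating a failed bind, and a forced jump to the error path standing in for a later failing step):

```c
#include <assert.h>

/* Initialize irq to -1 so the error path can tell "never bound"
 * apart from "bound to irq 0" (0 is a valid irq number).
 * unbind_count tracks how often the unbind stub runs. */
static int unbind_count;

static void unbind_stub(int irq)
{
	(void)irq;
	unbind_count++;
}

static int connect_demo(int bind_result)
{
	int irq = -1;			/* fedata->irq = -1; */

	if (bind_result < 0)
		goto error;
	irq = bind_result;		/* may legitimately be 0 */
	goto error;			/* pretend a later step failed */

error:
	if (irq >= 0)			/* only unbind if we ever bound */
		unbind_stub(irq);
	return irq;
}
```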

* Re: [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend
  2017-07-04  6:57     ` Juergen Gross
@ 2017-07-05 20:43       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 20:43 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > When the other end notifies us that there are commands to be read
> > (pvcalls_back_event), wake up the backend thread to parse the command.
> > 
> > The command ring works like most other Xen rings, so use the usual
> > ring macros to read and write to it. The functions implementing the
> > commands are empty stubs for now.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 144 ++++++++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 129 insertions(+), 15 deletions(-)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index e4c2e46..9e00971 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -47,16 +47,135 @@ struct pvcalls_fedata {
> >  	struct list_head socket_mappings;
> >  	struct radix_tree_root socketpass_mappings;
> >  	struct semaphore socket_lock;
> > -	struct workqueue_struct *wq;
> > -	struct work_struct register_work;

I realize that these changes should actually be folded into the previous
patch (wq and register_work shouldn't be added in the first place). I'll
fix the patches.


> >  };
> >  
> > -static void pvcalls_back_work(struct work_struct *work)
> > +static int pvcalls_back_socket(struct xenbus_device *dev,
> > +		struct xen_pvcalls_request *req)
> >  {
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_connect(struct xenbus_device *dev,
> > +				struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_release(struct xenbus_device *dev,
> > +				struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_bind(struct xenbus_device *dev,
> > +			     struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_listen(struct xenbus_device *dev,
> > +			       struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_accept(struct xenbus_device *dev,
> > +			       struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_poll(struct xenbus_device *dev,
> > +			     struct xen_pvcalls_request *req)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int pvcalls_back_handle_cmd(struct xenbus_device *dev,
> > +				   struct xen_pvcalls_request *req)
> > +{
> > +	int ret = 0;
> > +
> > +	switch (req->cmd) {
> > +	case PVCALLS_SOCKET:
> > +		ret = pvcalls_back_socket(dev, req);
> > +		break;
> > +	case PVCALLS_CONNECT:
> > +		ret = pvcalls_back_connect(dev, req);
> > +		break;
> > +	case PVCALLS_RELEASE:
> > +		ret = pvcalls_back_release(dev, req);
> > +		break;
> > +	case PVCALLS_BIND:
> > +		ret = pvcalls_back_bind(dev, req);
> > +		break;
> > +	case PVCALLS_LISTEN:
> > +		ret = pvcalls_back_listen(dev, req);
> > +		break;
> > +	case PVCALLS_ACCEPT:
> > +		ret = pvcalls_back_accept(dev, req);
> > +		break;
> > +	case PVCALLS_POLL:
> > +		ret = pvcalls_back_poll(dev, req);
> > +		break;
> > +	default:
> > +	{
> > +		struct pvcalls_fedata *fedata;
> > +		struct xen_pvcalls_response *rsp;
> > +
> > +		fedata = dev_get_drvdata(&dev->dev);
> > +		rsp = RING_GET_RESPONSE(
> > +				&fedata->ring, fedata->ring.rsp_prod_pvt++);
> > +		rsp->req_id = req->req_id;
> > +		rsp->cmd = req->cmd;
> > +		rsp->ret = -ENOTSUPP;
> > +		break;
> > +	}
> > +	}
> > +	return ret;
> > +}
> > +
> > +static void pvcalls_back_work(struct pvcalls_fedata *fedata)
> > +{
> > +	int notify, notify_all = 0, more = 1;
> > +	struct xen_pvcalls_request req;
> > +	struct xenbus_device *dev = fedata->dev;
> > +
> > +	while (more) {
> > +		while (RING_HAS_UNCONSUMED_REQUESTS(&fedata->ring)) {
> > +			RING_COPY_REQUEST(&fedata->ring,
> > +					  fedata->ring.req_cons++,
> > +					  &req);
> > +
> > +			if (!pvcalls_back_handle_cmd(dev, &req)) {
> > +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> > +					&fedata->ring, notify);
> > +				notify_all += notify;
> > +			}
> > +		}
> > +
> > +		if (notify_all) {
> > +			notify_remote_via_irq(fedata->irq);
> > +			notify_all = 0;
> > +		}
> > +
> > +		RING_FINAL_CHECK_FOR_REQUESTS(&fedata->ring, more);
> > +	}
> >  }
> >  
> >  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> >  {
> > +	struct xenbus_device *dev = dev_id;
> > +	struct pvcalls_fedata *fedata = NULL;
> > +
> > +	if (dev == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	fedata = dev_get_drvdata(&dev->dev);
> > +	if (fedata == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	pvcalls_back_work(fedata);
> >  	return IRQ_HANDLED;
> >  }
> >  
> > @@ -87,18 +206,15 @@ static int backend_connect(struct xenbus_device *dev)
> >  		goto error;
> >  	}
> >  
> > -	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> > -						    pvcalls_back_event, 0,
> > -						    "pvcalls-backend", dev);
> > +	err = bind_interdomain_evtchn_to_irq(dev->otherend_id, evtchn);
> >  	if (err < 0)
> >  		goto error;
> >  	fedata->irq = err;
> > -
> > -	fedata->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> > -	if (!fedata->wq) {
> > -		err = -ENOMEM;
> > +	
> > +	err = request_threaded_irq(fedata->irq, NULL, pvcalls_back_event,
> > +				   IRQF_ONESHOT, "pvcalls-back", dev);
> > +	if (err < 0)
> >  		goto error;
> > -	}
> >  
> >  	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, (void**)&fedata->sring);
> >  	if (err < 0)
> > @@ -107,7 +223,6 @@ static int backend_connect(struct xenbus_device *dev)
> >  	BACK_RING_INIT(&fedata->ring, fedata->sring, XEN_PAGE_SIZE * 1);
> >  	fedata->dev = dev;
> >  
> > -	INIT_WORK(&fedata->register_work, pvcalls_back_work);
> >  	INIT_LIST_HEAD(&fedata->socket_mappings);
> >  	INIT_RADIX_TREE(&fedata->socketpass_mappings, GFP_KERNEL);
> >  	sema_init(&fedata->socket_lock, 1);
> > @@ -116,15 +231,14 @@ static int backend_connect(struct xenbus_device *dev)
> >  	down(&pvcalls_back_global.frontends_lock);
> >  	list_add_tail(&fedata->list, &pvcalls_back_global.frontends);
> >  	up(&pvcalls_back_global.frontends_lock);
> > -	queue_work(fedata->wq, &fedata->register_work);
> > +
> > +	pvcalls_back_work(fedata);
> 
> Is this call really necessary? I believe this is racy in case an event
> is coming in at the same time.

No, it is not. I'll remove it.


> >  
> >  	return 0;
> >  
> >   error:
> >  	if (fedata->sring != NULL)
> >  		xenbus_unmap_ring_vfree(dev, fedata->sring);
> > -	if (fedata->wq)
> > -		destroy_workqueue(fedata->wq);
> >  	unbind_from_irqhandler(fedata->irq, dev);
> 
> Is it safe to unbind the irq handler _after_ unmapping the ring?

Good point, I'll change the order.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH v6 08/18] xen/pvcalls: implement connect command
  2017-07-04  7:14     ` Juergen Gross
@ 2017-07-05 21:09       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 21:09 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > Allocate a socket. Keep track of socket <-> ring mappings with a new data
> > structure, called sock_mapping. Implement the connect command by calling
> > inet_stream_connect, and mapping the new indexes page and data ring.
> > Allocate a workqueue and a work_struct, called ioworker, to perform
> > reads and writes to the socket.
> > 
> > When an active socket is closed (sk_state_change), set in_error to
> > -ENOTCONN and notify the other end, as specified by the protocol.
> > 
> > sk_data_ready and pvcalls_back_ioworker will be implemented later.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 174 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 174 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 53fd908..1bc2620 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -54,6 +54,39 @@ struct pvcalls_fedata {
> >  	struct semaphore socket_lock;
> >  };
> >  
> > +struct pvcalls_ioworker {
> > +	struct work_struct register_work;
> > +	struct workqueue_struct *wq;
> > +};
> > +
> > +struct sock_mapping {
> > +	struct list_head list;
> > +	struct pvcalls_fedata *fedata;
> > +	struct socket *sock;
> > +	uint64_t id;
> > +	grant_ref_t ref;
> > +	struct pvcalls_data_intf *ring;
> > +	void *bytes;
> > +	struct pvcalls_data data;
> > +	uint32_t ring_order;
> > +	int irq;
> > +	atomic_t read;
> > +	atomic_t write;
> > +	atomic_t io;
> > +	atomic_t release;
> > +	void (*saved_data_ready)(struct sock *sk);
> > +	struct pvcalls_ioworker ioworker;
> > +};
> > +
> > +static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
> > +static int pvcalls_back_release_active(struct xenbus_device *dev,
> > +				       struct pvcalls_fedata *fedata,
> > +				       struct sock_mapping *map);
> > +
> > +static void pvcalls_back_ioworker(struct work_struct *work)
> > +{
> > +}
> > +
> >  static int pvcalls_back_socket(struct xenbus_device *dev,
> >  		struct xen_pvcalls_request *req)
> >  {
> > @@ -82,9 +115,145 @@ static int pvcalls_back_socket(struct xenbus_device *dev,
> >  	return 0;
> >  }
> >  
> > +static void pvcalls_sk_state_change(struct sock *sock)
> > +{
> > +	struct sock_mapping *map = sock->sk_user_data;
> > +	struct pvcalls_data_intf *intf;
> > +
> > +	if (map == NULL)
> > +		return;
> > +
> > +	intf = map->ring;
> > +	intf->in_error = -ENOTCONN;
> > +	notify_remote_via_irq(map->irq);
> > +}
> > +
> > +static void pvcalls_sk_data_ready(struct sock *sock)
> > +{
> > +}
> > +
> > +static struct sock_mapping *pvcalls_new_active_socket(
> > +		struct pvcalls_fedata *fedata,
> > +		uint64_t id,
> > +		grant_ref_t ref,
> > +		uint32_t evtchn,
> > +		struct socket *sock)
> > +{
> > +	int ret;
> > +	struct sock_mapping *map;
> > +	void *page;
> > +
> > +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> > +	if (map == NULL)
> > +		return NULL;
> > +
> > +	map->fedata = fedata;
> > +	map->sock = sock;
> > +	map->id = id;
> > +	map->ref = ref;
> > +
> > +	ret = xenbus_map_ring_valloc(fedata->dev, &ref, 1, &page);
> > +	if (ret < 0)
> > +		goto out;
> > +	map->ring = page;
> > +	map->ring_order = map->ring->ring_order;
> > +	/* first read the order, then map the data ring */
> > +	virt_rmb();
> > +	if (map->ring_order > MAX_RING_ORDER) {
> > +		pr_warn("%s frontend requested ring_order %u, which is > MAX (%u)\n",
> > +				__func__, map->ring_order, MAX_RING_ORDER);
> > +		goto out;
> > +	}
> > +	ret = xenbus_map_ring_valloc(fedata->dev, map->ring->ref,
> > +				     (1 << map->ring_order), &page);
> > +	if (ret < 0)
> > +		goto out;
> > +	map->bytes = page;
> > +
> > +	ret = bind_interdomain_evtchn_to_irqhandler(fedata->dev->otherend_id,
> > +						    evtchn,
> > +						    pvcalls_back_conn_event,
> > +						    0,
> > +						    "pvcalls-backend",
> > +						    map);
> > +	if (ret < 0)
> > +		goto out;
> > +	map->irq = ret;
> > +
> > +	map->data.in = map->bytes;
> > +	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
> > +	
> > +	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
> > +	if (!map->ioworker.wq)
> > +		goto out;
> > +	atomic_set(&map->io, 1);
> > +	INIT_WORK(&map->ioworker.register_work,	pvcalls_back_ioworker);
> > +
> > +	down(&fedata->socket_lock);
> > +	list_add_tail(&map->list, &fedata->socket_mappings);
> > +	up(&fedata->socket_lock);
> > +
> > +	write_lock_bh(&map->sock->sk->sk_callback_lock);
> > +	map->saved_data_ready = map->sock->sk->sk_data_ready;
> > +	map->sock->sk->sk_user_data = map;
> > +	map->sock->sk->sk_data_ready = pvcalls_sk_data_ready;
> > +	map->sock->sk->sk_state_change = pvcalls_sk_state_change;
> > +	write_unlock_bh(&map->sock->sk->sk_callback_lock);
> > +
> > +	return map;
> > +out:
> > +	down(&fedata->socket_lock);
> > +	list_del(&map->list);
> > +	pvcalls_back_release_active(fedata->dev, fedata, map);
> > +	up(&fedata->socket_lock);
> > +	return NULL;
> > +}
> > +
> >  static int pvcalls_back_connect(struct xenbus_device *dev,
> >  				struct xen_pvcalls_request *req)
> >  {
> > +	struct pvcalls_fedata *fedata;
> > +	int ret = -EINVAL;
> > +	struct socket *sock;
> > +	struct sock_mapping *map;
> > +	struct xen_pvcalls_response *rsp;
> > +
> > +	fedata = dev_get_drvdata(&dev->dev);
> > +
> > +	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
> > +	if (ret < 0)
> > +		goto out;
> > +	ret = inet_stream_connect(sock, (struct sockaddr *)&req->u.connect.addr,
> > +				  req->u.connect.len, req->u.connect.flags);
> 
> Shouldn't there be some kind of validation, e.g. whether
> req->u.connect.len isn't larger than sizeof(req->u.connect.addr) ?

Yes, we should also validate that (req->u.connect.len >=
sizeof(sa->sa_family)). I'll add the checks.


> Are all flags really valid to specify? I'd like to have at least a
> comment stating that everything is save without validation.

The flags field for the connect operation is actually ignored in this
version of the protocol ("reserved for future usage"). I'll drop it.


* Re: [PATCH v6 09/18] xen/pvcalls: implement bind command
  2017-07-04  7:24     ` Juergen Gross
@ 2017-07-05 21:18       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 21:18 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > Allocate a socket. Track the allocated passive sockets with a new data
> > structure named sockpass_mapping. It contains an unbound workqueue to
> > schedule delayed work for the accept and poll commands. It also has a
> > reqcopy field to be used to store a copy of a request for delayed work.
> > Reads/writes to it are protected by a lock (the "copy_lock" spinlock).
> > Initialize the workqueue in pvcalls_back_bind.
> > 
> > Implement the bind command with inet_bind.
> > 
> > The pass_sk_data_ready event handler will be added later.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 87 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 1bc2620..dae91fb 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -78,6 +78,18 @@ struct sock_mapping {
> >  	struct pvcalls_ioworker ioworker;
> >  };
> >  
> > +struct sockpass_mapping {
> > +	struct list_head list;
> > +	struct pvcalls_fedata *fedata;
> > +	struct socket *sock;
> > +	uint64_t id;
> > +	struct xen_pvcalls_request reqcopy;
> > +	spinlock_t copy_lock;
> > +	struct workqueue_struct *wq;
> > +	struct work_struct register_work;
> > +	void (*saved_data_ready)(struct sock *sk);
> > +};
> > +
> >  static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
> >  static int pvcalls_back_release_active(struct xenbus_device *dev,
> >  				       struct pvcalls_fedata *fedata,
> > @@ -263,9 +275,84 @@ static int pvcalls_back_release(struct xenbus_device *dev,
> >  	return 0;
> >  }
> >  
> > +static void __pvcalls_back_accept(struct work_struct *work)
> > +{
> > +}
> > +
> > +static void pvcalls_pass_sk_data_ready(struct sock *sock)
> > +{
> > +}
> > +
> >  static int pvcalls_back_bind(struct xenbus_device *dev,
> >  			     struct xen_pvcalls_request *req)
> >  {
> > +	struct pvcalls_fedata *fedata;
> > +	int ret, err;
> > +	struct socket *sock;
> 
> Get rid of sock, ...

OK


> > +	struct sockpass_mapping *map;
> > +	struct xen_pvcalls_response *rsp;
> > +
> > +	fedata = dev_get_drvdata(&dev->dev);
> > +
> > +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> > +	if (map == NULL) {
> > +		ret = -ENOMEM;
> > +		goto out;
> > +	}
> > +
> > +	INIT_WORK(&map->register_work, __pvcalls_back_accept);
> > +	spin_lock_init(&map->copy_lock);
> > +	map->wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);
> > +	if (!map->wq) {
> > +		ret = -ENOMEM;
> > +		kfree(map);
> 
> Move kfree(map) to the exit path, ...

OK


> > +		goto out;
> > +	}
> > +
> > +	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
> 
> use &map->sock here, ...

OK


> > +	if (ret < 0) {
> > +		destroy_workqueue(map->wq);
> 
> move destroy_workqueue() to the exit path, ...

OK


> > +		kfree(map);
> > +		goto out;
> > +	}
> > +
> > +	ret = inet_bind(sock, (struct sockaddr *)&req->u.bind.addr,
> > +			req->u.bind.len);
> > +	if (ret < 0) {
> > +		sock_release(sock);
> 
> and sock_release, too, ...

OK


> > +		destroy_workqueue(map->wq);
> > +		kfree(map);
> > +		goto out;
> > +	}
> > +
> > +	map->fedata = fedata;
> > +	map->sock = sock;
> > +	map->id = req->u.bind.id;
> > +
> > +	down(&fedata->socket_lock);
> > +	err = radix_tree_insert(&fedata->socketpass_mappings, map->id,
> > +				map);
> 
> Use ret instead of err?

OK


> > +	up(&fedata->socket_lock);
> > +	if (err) {
> > +		ret = err;
> > +		sock_release(sock);
> > +		destroy_workqueue(map->wq);
> > +		kfree(map);
> > +		goto out;
> > +	}
> > +
> > +	write_lock_bh(&sock->sk->sk_callback_lock);
> > +	map->saved_data_ready = sock->sk->sk_data_ready;
> > +	sock->sk->sk_user_data = map;
> > +	sock->sk->sk_data_ready = pvcalls_pass_sk_data_ready;
> > +	write_unlock_bh(&sock->sk->sk_callback_lock);
> > +
> > +out:
> > +	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
> > +	rsp->req_id = req->req_id;
> > +	rsp->cmd = req->cmd;
> > +	rsp->u.bind.id = req->u.bind.id;
> > +	rsp->ret = ret;
> 
> ... have a common error exit handling:
> +	if (ret) {
> +		if (map && map->sock)
> +			sock_release(map->sock);
> +		if (map && map->wq)
> +			destroy_workqueue(map->wq);
> +		kfree(map);
> +	}
 
All good suggestions, I'll make all changes.


* Re: [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit
  2017-07-04  7:32     ` Juergen Gross
@ 2017-07-05 21:22       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 21:22 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > Implement backend_disconnect. Call pvcalls_back_release_active on active
> > sockets and pvcalls_back_release_passive on passive sockets.
> > 
> > Implement module_exit by calling backend_disconnect on frontend
> > connections.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 52 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 9f4247f..71a42fc 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -807,6 +807,42 @@ static int backend_connect(struct xenbus_device *dev)
> >  
> >  static int backend_disconnect(struct xenbus_device *dev)
> >  {
> > +	struct pvcalls_fedata *fedata;
> > +	struct sock_mapping *map, *n;
> > +	struct sockpass_mapping *mappass;
> > +	struct radix_tree_iter iter;
> > +	void **slot;
> > +
> > +
> > +	fedata = dev_get_drvdata(&dev->dev);
> > +
> > +	down(&fedata->socket_lock);
> > +	list_for_each_entry_safe(map, n, &fedata->socket_mappings, list) {
> > +		list_del(&map->list);
> > +		pvcalls_back_release_active(dev, fedata, map);
> > +	}
> > +
> > +	radix_tree_for_each_slot(slot, &fedata->socketpass_mappings, &iter, 0) {
> > +		mappass = radix_tree_deref_slot(slot);
> > +		if (!mappass)
> > +			continue;
> > +		if (radix_tree_exception(mappass)) {
> > +			if (radix_tree_deref_retry(mappass))
> > +				slot = radix_tree_iter_retry(&iter);
> > +		} else {
> > +			radix_tree_delete(&fedata->socketpass_mappings, mappass->id);
> > +			pvcalls_back_release_passive(dev, fedata, mappass);
> > +		}
> > +	}
> > +	up(&fedata->socket_lock);
> > +
> > +	xenbus_unmap_ring_vfree(dev, fedata->sring);
> > +	unbind_from_irqhandler(fedata->irq, dev);
> 
> Swap above two lines to avoid irq being handled after releasing
> ring?

Will do


* Re: [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions
  2017-07-04  7:48     ` Juergen Gross
@ 2017-07-05 21:26       ` Stefano Stabellini
  0 siblings, 0 replies; 44+ messages in thread
From: Stefano Stabellini @ 2017-07-05 21:26 UTC (permalink / raw)
  To: Juergen Gross
  Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
	Stefano Stabellini

On Tue, 4 Jul 2017, Juergen Gross wrote:
> On 03/07/17 23:08, Stefano Stabellini wrote:
> > We have one ioworker per socket. Each ioworker goes through the list of
> > outstanding read/write requests. Once all requests have been dealt with,
> > it returns.
> > 
> > We use one atomic counter per socket for "read" operations and one
> > for "write" operations to keep track of the reads/writes to do.
> > 
> > We also use one atomic counter ("io") per ioworker to keep track of how
> > many outstanding requests we have in total assigned to the ioworker. The
> > ioworker finishes when there are none.
> > 
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 27 +++++++++++++++++++++++++++
> >  1 file changed, 27 insertions(+)
> > 
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 71a42fc..d59c2e4 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -96,8 +96,35 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
> >  				       struct pvcalls_fedata *fedata,
> >  				       struct sock_mapping *map);
> >  
> > +static void pvcalls_conn_back_read(void *opaque)
> > +{
> > +}
> > +
> > +static int pvcalls_conn_back_write(struct sock_mapping *map)
> > +{
> > +	return 0;
> > +}
> 
> Any reason for letting this function return int? I haven't spotted any
> use of the return value in this or any later patch.

No reason. I'll change it to void.


end of thread, other threads:[~2017-07-05 21:26 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-03 21:08 [PATCH v6 00/18] introduce the Xen PV Calls backend Stefano Stabellini
2017-07-03 21:08 ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
2017-07-04  8:04     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 03/18] xen/pvcalls: initialize the module and register the " Stefano Stabellini
2017-07-04  8:04     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 04/18] xen/pvcalls: xenbus state handling Stefano Stabellini
2017-07-04  8:03     ` Juergen Gross
2017-07-05 20:24       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 05/18] xen/pvcalls: connect to a frontend Stefano Stabellini
2017-07-04  6:59     ` Juergen Gross
2017-07-05 20:29       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 06/18] xen/pvcalls: handle commands from the frontend Stefano Stabellini
2017-07-04  6:57     ` Juergen Gross
2017-07-05 20:43       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 07/18] xen/pvcalls: implement socket command Stefano Stabellini
2017-07-04  7:52     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 08/18] xen/pvcalls: implement connect command Stefano Stabellini
2017-07-04  7:14     ` Juergen Gross
2017-07-05 21:09       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 09/18] xen/pvcalls: implement bind command Stefano Stabellini
2017-07-04  7:24     ` Juergen Gross
2017-07-05 21:18       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 10/18] xen/pvcalls: implement listen command Stefano Stabellini
2017-07-04  7:51     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 11/18] xen/pvcalls: implement accept command Stefano Stabellini
2017-07-04  7:26     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 12/18] xen/pvcalls: implement poll command Stefano Stabellini
2017-07-04  7:28     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 13/18] xen/pvcalls: implement release command Stefano Stabellini
2017-07-04  7:30     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 14/18] xen/pvcalls: disconnect and module_exit Stefano Stabellini
2017-07-04  7:32     ` Juergen Gross
2017-07-05 21:22       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 15/18] xen/pvcalls: implement the ioworker functions Stefano Stabellini
2017-07-04  7:48     ` Juergen Gross
2017-07-05 21:26       ` Stefano Stabellini
2017-07-03 21:08   ` [PATCH v6 16/18] xen/pvcalls: implement read Stefano Stabellini
2017-07-04  7:36     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 17/18] xen/pvcalls: implement write Stefano Stabellini
2017-07-04  7:49     ` Juergen Gross
2017-07-03 21:08   ` [PATCH v6 18/18] xen: introduce a Kconfig option to enable the pvcalls backend Stefano Stabellini
2017-07-04  7:50     ` Juergen Gross
2017-07-04  8:05   ` [PATCH v6 01/18] xen: introduce the pvcalls interface header Juergen Gross
