* [PATCH v2 00/18] introduce the Xen PV Calls backend
@ 2017-05-19 23:17 Stefano Stabellini
  2017-05-19 23:22   ` Stefano Stabellini
  0 siblings, 1 reply; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:17 UTC (permalink / raw)
  To: xen-devel; +Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky

Hi all,

This series introduces the backend for the newly introduced PV Calls
protocol.

PV Calls is a paravirtualized protocol that allows the implementation of
a set of POSIX functions in a different domain. The PV Calls frontend
forwards POSIX function calls to the backend, which carries them out and
returns the result to the frontend.

For more information about PV Calls, please read:

https://xenbits.xen.org/docs/unstable/misc/pvcalls.html

I tried to split the source code into small pieces to make it easier to
read and understand. Please review!


Changes in v2:
- allocate one ioworker per socket (rather than 1 per vcpu)
- rename privs to frontends
- add newlines
- define "1" in the public header
- better error returns in pvcalls_back_probe
- do not set XenbusStateClosed twice in set_backend_state
- add more comments
- replace rw_semaphore with semaphore
- rename pvcallss to socket_lock
- move xenbus_map_ring_valloc closer to first use in backend_connect
- use more traditional return codes from pvcalls_back_handle_cmd and
  callees
- remove useless dev == NULL checks
- replace lock_sock with more appropriate and fine grained socket locks


Stefano Stabellini (18):
      xen: introduce the pvcalls interface header
      xen/pvcalls: introduce the pvcalls xenbus backend
      xen/pvcalls: initialize the module and register the xenbus backend
      xen/pvcalls: xenbus state handling
      xen/pvcalls: connect to a frontend
      xen/pvcalls: handle commands from the frontend
      xen/pvcalls: implement socket command
      xen/pvcalls: implement connect command
      xen/pvcalls: implement bind command
      xen/pvcalls: implement listen command
      xen/pvcalls: implement accept command
      xen/pvcalls: implement poll command
      xen/pvcalls: implement release command
      xen/pvcalls: disconnect and module_exit
      xen/pvcalls: implement the ioworker functions
      xen/pvcalls: implement read
      xen/pvcalls: implement write
      xen: introduce a Kconfig option to enable the pvcalls backend

 drivers/xen/Kconfig                |   12 +
 drivers/xen/Makefile               |    1 +
 drivers/xen/pvcalls-back.c         | 1254 ++++++++++++++++++++++++++++++++++++
 include/xen/interface/io/pvcalls.h |  120 ++++
 4 files changed, 1387 insertions(+)
 create mode 100644 drivers/xen/pvcalls-back.c
 create mode 100644 include/xen/interface/io/pvcalls.h

^ permalink raw reply	[flat|nested] 69+ messages in thread

* [PATCH v2 01/18] xen: introduce the pvcalls interface header
  2017-05-19 23:17 [PATCH v2 00/18] introduce the Xen PV Calls backend Stefano Stabellini
@ 2017-05-19 23:22   ` Stefano Stabellini
  0 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky,
	Stefano Stabellini, konrad.wilk

Introduce the C header file which defines the PV Calls interface. It is
imported from xen/include/public/io/pvcalls.h.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: konrad.wilk@oracle.com
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 include/xen/interface/io/pvcalls.h | 120 +++++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)
 create mode 100644 include/xen/interface/io/pvcalls.h

diff --git a/include/xen/interface/io/pvcalls.h b/include/xen/interface/io/pvcalls.h
new file mode 100644
index 0000000..0d41959
--- /dev/null
+++ b/include/xen/interface/io/pvcalls.h
@@ -0,0 +1,120 @@
+#ifndef __XEN_PUBLIC_IO_XEN_PVCALLS_H__
+#define __XEN_PUBLIC_IO_XEN_PVCALLS_H__
+
+#include <linux/net.h>
+#include "xen/interface/io/ring.h"
+
+/* "1" means socket, connect, release, bind, listen, accept and poll */
+#define XENBUS_FUNCTIONS_CALLS "1"
+
+/*
+ * See docs/misc/pvcalls.markdown in xen.git for the full specification:
+ * https://xenbits.xen.org/docs/unstable/misc/pvcalls.html
+ */
+struct pvcalls_data_intf {
+    RING_IDX in_cons, in_prod, in_error;
+
+    uint8_t pad1[52];
+
+    RING_IDX out_cons, out_prod, out_error;
+
+    uint8_t pad2[52];
+
+    RING_IDX ring_order;
+    grant_ref_t ref[];
+};
+DEFINE_XEN_FLEX_RING(pvcalls);
+
+#define PVCALLS_SOCKET         0
+#define PVCALLS_CONNECT        1
+#define PVCALLS_RELEASE        2
+#define PVCALLS_BIND           3
+#define PVCALLS_LISTEN         4
+#define PVCALLS_ACCEPT         5
+#define PVCALLS_POLL           6
+
+struct xen_pvcalls_request {
+    uint32_t req_id; /* private to guest, echoed in response */
+    uint32_t cmd;    /* command to execute */
+    union {
+        struct xen_pvcalls_socket {
+            uint64_t id;
+            uint32_t domain;
+            uint32_t type;
+            uint32_t protocol;
+        } socket;
+        struct xen_pvcalls_connect {
+            uint64_t id;
+            uint8_t addr[28];
+            uint32_t len;
+            uint32_t flags;
+            grant_ref_t ref;
+            uint32_t evtchn;
+        } connect;
+        struct xen_pvcalls_release {
+            uint64_t id;
+            uint8_t reuse;
+        } release;
+        struct xen_pvcalls_bind {
+            uint64_t id;
+            uint8_t addr[28];
+            uint32_t len;
+        } bind;
+        struct xen_pvcalls_listen {
+            uint64_t id;
+            uint32_t backlog;
+        } listen;
+        struct xen_pvcalls_accept {
+            uint64_t id;
+            uint64_t id_new;
+            grant_ref_t ref;
+            uint32_t evtchn;
+        } accept;
+        struct xen_pvcalls_poll {
+            uint64_t id;
+        } poll;
+        /* dummy member to force sizeof(struct xen_pvcalls_request)
+         * to match across archs */
+        struct xen_pvcalls_dummy {
+            uint8_t dummy[56];
+        } dummy;
+    } u;
+};
+
+struct xen_pvcalls_response {
+    uint32_t req_id;
+    uint32_t cmd;
+    int32_t ret;
+    uint32_t pad;
+    union {
+        struct _xen_pvcalls_socket {
+            uint64_t id;
+        } socket;
+        struct _xen_pvcalls_connect {
+            uint64_t id;
+        } connect;
+        struct _xen_pvcalls_release {
+            uint64_t id;
+        } release;
+        struct _xen_pvcalls_bind {
+            uint64_t id;
+        } bind;
+        struct _xen_pvcalls_listen {
+            uint64_t id;
+        } listen;
+        struct _xen_pvcalls_accept {
+            uint64_t id;
+        } accept;
+        struct _xen_pvcalls_poll {
+            uint64_t id;
+        } poll;
+        struct _xen_pvcalls_dummy {
+            uint8_t dummy[8];
+        } dummy;
+    } u;
+};
+
+DEFINE_RING_TYPES(xen_pvcalls, struct xen_pvcalls_request,
+                  struct xen_pvcalls_response);
+
+#endif
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 02/18] xen/pvcalls: introduce the pvcalls xenbus backend
  2017-05-19 23:22   ` Stefano Stabellini
  (?)
@ 2017-05-19 23:22   ` Stefano Stabellini
  2017-05-25 22:04     ` Boris Ostrovsky
  2017-05-25 22:04     ` Boris Ostrovsky
  -1 siblings, 2 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce a xenbus backend for the pvcalls protocol, as defined by
https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.

This patch only adds the stubs, the code will be added by the following
patches.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)
 create mode 100644 drivers/xen/pvcalls-back.c

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
new file mode 100644
index 0000000..f3d0daa
--- /dev/null
+++ b/drivers/xen/pvcalls-back.c
@@ -0,0 +1,61 @@
+/*
+ * (c) 2017 Stefano Stabellini <stefano@aporeto.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/radix-tree.h>
+#include <linux/module.h>
+#include <linux/semaphore.h>
+#include <linux/wait.h>
+
+#include <xen/events.h>
+#include <xen/grant_table.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+#include <xen/interface/io/pvcalls.h>
+
+static int pvcalls_back_probe(struct xenbus_device *dev,
+			      const struct xenbus_device_id *id)
+{
+	return 0;
+}
+
+static void pvcalls_back_changed(struct xenbus_device *dev,
+				 enum xenbus_state frontend_state)
+{
+}
+
+static int pvcalls_back_remove(struct xenbus_device *dev)
+{
+	return 0;
+}
+
+static int pvcalls_back_uevent(struct xenbus_device *xdev,
+			       struct kobj_uevent_env *env)
+{
+	return 0;
+}
+
+static const struct xenbus_device_id pvcalls_back_ids[] = {
+	{ "pvcalls" },
+	{ "" }
+};
+
+static struct xenbus_driver pvcalls_back_driver = {
+	.ids = pvcalls_back_ids,
+	.probe = pvcalls_back_probe,
+	.remove = pvcalls_back_remove,
+	.uevent = pvcalls_back_uevent,
+	.otherend_changed = pvcalls_back_changed,
+};
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread


* [PATCH v2 03/18] xen/pvcalls: initialize the module and register the xenbus backend
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (3 preceding siblings ...)
  (?)
@ 2017-05-19 23:22   ` Stefano Stabellini
  2017-05-25 22:12       ` Boris Ostrovsky
  -1 siblings, 1 reply; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Keep a list of connected frontends. Use a semaphore to protect list
accesses.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index f3d0daa..9044cf2 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -25,6 +25,11 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+struct pvcalls_back_global {
+	struct list_head frontends;
+	struct semaphore frontends_lock;
+} pvcalls_back_global;
+
 static int pvcalls_back_probe(struct xenbus_device *dev,
 			      const struct xenbus_device_id *id)
 {
@@ -59,3 +64,20 @@ static int pvcalls_back_uevent(struct xenbus_device *xdev,
 	.uevent = pvcalls_back_uevent,
 	.otherend_changed = pvcalls_back_changed,
 };
+
+static int __init pvcalls_back_init(void)
+{
+	int ret;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	ret = xenbus_register_backend(&pvcalls_back_driver);
+	if (ret < 0)
+		return ret;
+
+	sema_init(&pvcalls_back_global.frontends_lock, 1);
+	INIT_LIST_HEAD(&pvcalls_back_global.frontends);
+	return 0;
+}
+module_init(pvcalls_back_init);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 04/18] xen/pvcalls: xenbus state handling
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce the code to handle xenbus state changes.

Implement the probe function for the pvcalls backend. Write the
supported versions, max-page-order and function-calls nodes to xenstore,
as required by the protocol.

Introduce stub functions for disconnecting/connecting to a frontend.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 135 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 135 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 9044cf2..b4da138 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -25,20 +25,155 @@
 #include <xen/xenbus.h>
 #include <xen/interface/io/pvcalls.h>
 
+#define PVCALLS_VERSIONS "1"
+#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
+
 struct pvcalls_back_global {
 	struct list_head frontends;
 	struct semaphore frontends_lock;
 } pvcalls_back_global;
 
+static int backend_connect(struct xenbus_device *dev)
+{
+	return 0;
+}
+
+static int backend_disconnect(struct xenbus_device *dev)
+{
+	return 0;
+}
+
 static int pvcalls_back_probe(struct xenbus_device *dev,
 			      const struct xenbus_device_id *id)
 {
+	int err;
+
+	err = xenbus_printf(XBT_NIL, dev->nodename, "versions", "%s",
+			    PVCALLS_VERSIONS);
+	if (err) {
+		pr_warn("%s write out 'versions' failed\n", __func__);
+		return -EINVAL;
+	}
+
+	err = xenbus_printf(XBT_NIL, dev->nodename, "max-page-order", "%u",
+			    MAX_RING_ORDER);
+	if (err) {
+		pr_warn("%s write out 'max-page-order' failed\n", __func__);
+		return err;
+	}
+
+	/* "1" means socket, connect, release, bind, listen, accept and poll */
+	err = xenbus_printf(XBT_NIL, dev->nodename, "function-calls",
+			    XENBUS_FUNCTIONS_CALLS);
+	if (err) {
+		pr_warn("%s write out 'function-calls' failed\n", __func__);
+		return err;
+	}
+
+	err = xenbus_switch_state(dev, XenbusStateInitWait);
+	if (err)
+		return err;
+
 	return 0;
 }
 
+static void set_backend_state(struct xenbus_device *dev,
+			      enum xenbus_state state)
+{
+	while (dev->state != state) {
+		switch (dev->state) {
+		case XenbusStateClosed:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+				xenbus_switch_state(dev, XenbusStateInitWait);
+				break;
+			case XenbusStateClosing:
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateInitWait:
+		case XenbusStateInitialised:
+			switch (state) {
+			case XenbusStateConnected:
+				backend_connect(dev);
+				xenbus_switch_state(dev, XenbusStateConnected);
+				break;
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateConnected:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateClosing:
+			case XenbusStateClosed:
+				down(&pvcalls_back_global.frontends_lock);
+				backend_disconnect(dev);
+				up(&pvcalls_back_global.frontends_lock);
+				xenbus_switch_state(dev, XenbusStateClosing);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		case XenbusStateClosing:
+			switch (state) {
+			case XenbusStateInitWait:
+			case XenbusStateConnected:
+			case XenbusStateClosed:
+				xenbus_switch_state(dev, XenbusStateClosed);
+				break;
+			default:
+				__WARN();
+			}
+			break;
+		default:
+			__WARN();
+		}
+	}
+}
+
 static void pvcalls_back_changed(struct xenbus_device *dev,
 				 enum xenbus_state frontend_state)
 {
+	switch (frontend_state) {
+	case XenbusStateInitialising:
+		set_backend_state(dev, XenbusStateInitWait);
+		break;
+
+	case XenbusStateInitialised:
+	case XenbusStateConnected:
+		set_backend_state(dev, XenbusStateConnected);
+		break;
+
+	case XenbusStateClosing:
+		set_backend_state(dev, XenbusStateClosing);
+		break;
+
+	case XenbusStateClosed:
+		set_backend_state(dev, XenbusStateClosed);
+		if (xenbus_dev_is_online(dev))
+			break;
+		device_unregister(&dev->dev);
+		break;
+	case XenbusStateUnknown:
+		set_backend_state(dev, XenbusStateClosed);
+		device_unregister(&dev->dev);
+		break;
+
+	default:
+		xenbus_dev_fatal(dev, -EINVAL, "saw state %d at frontend",
+				 frontend_state);
+		break;
+	}
 }
 
 static int pvcalls_back_remove(struct xenbus_device *dev)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 05/18] xen/pvcalls: connect to a frontend
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Introduce a per-frontend data structure named pvcalls_back_priv. It
contains pointers to the command ring, its event channel, a list of
active sockets and a tree of passive sockets (passive sockets need to be
looked up from the id on listen, accept and poll commands, while active
sockets only on release).

It also has an unbound workqueue to schedule the work of parsing and
executing commands on the command ring. socket_lock protects the two
lists. In pvcalls_back_global, keep a list of connected frontends.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index b4da138..a48b0d9 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -33,9 +33,104 @@ struct pvcalls_back_global {
 	struct semaphore frontends_lock;
 } pvcalls_back_global;
 
+/*
+ * Per-frontend data structure. It contains pointers to the command
+ * ring, its event channel, a list of active sockets and a tree of
+ * passive sockets.
+ */
+struct pvcalls_back_priv {
+	struct list_head list;
+	struct xenbus_device *dev;
+	struct xen_pvcalls_sring *sring;
+	struct xen_pvcalls_back_ring ring;
+	int irq;
+	struct list_head socket_mappings;
+	struct radix_tree_root socketpass_mappings;
+	struct semaphore socket_lock;
+	atomic_t work;
+	struct workqueue_struct *wq;
+	struct work_struct register_work;
+};
+
+static void pvcalls_back_work(struct work_struct *work)
+{
+}
+
+static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
+{
+	return IRQ_HANDLED;
+}
+
 static int backend_connect(struct xenbus_device *dev)
 {
+	int err, evtchn;
+	grant_ref_t ring_ref;
+	void *addr = NULL;
+	struct pvcalls_back_priv *priv = NULL;
+
+	priv = kzalloc(sizeof(struct pvcalls_back_priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
+			   &evtchn);
+	if (err != 1) {
+		err = -EINVAL;
+		xenbus_dev_fatal(dev, err, "reading %s/port",
+				 dev->otherend);
+		goto error;
+	}
+
+	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
+	if (err != 1) {
+		err = -EINVAL;
+		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
+				 dev->otherend);
+		goto error;
+	}
+
+	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
+						    pvcalls_back_event, 0,
+						    "pvcalls-backend", dev);
+	if (err < 0)
+		goto error;
+	priv->irq = err;
+
+	priv->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
+	if (!priv->wq) {
+		err = -ENOMEM;
+		goto error;
+	}
+
+	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, &addr);
+	if (err < 0)
+		goto error;
+	priv->sring = addr;
+
+	BACK_RING_INIT(&priv->ring, priv->sring, XEN_PAGE_SIZE * 1);
+	priv->dev = dev;
+
+	INIT_WORK(&priv->register_work, pvcalls_back_work);
+	INIT_LIST_HEAD(&priv->socket_mappings);
+	INIT_RADIX_TREE(&priv->socketpass_mappings, GFP_KERNEL);
+	sema_init(&priv->socket_lock, 1);
+	dev_set_drvdata(&dev->dev, priv);
+
+	down(&pvcalls_back_global.frontends_lock);
+	list_add_tail(&priv->list, &pvcalls_back_global.frontends);
+	up(&pvcalls_back_global.frontends_lock);
+	queue_work(priv->wq, &priv->register_work);
+
 	return 0;
+
+ error:
+	if (addr != NULL)
+		xenbus_unmap_ring_vfree(dev, addr);
+	if (priv->wq)
+		destroy_workqueue(priv->wq);
+	if (priv->irq > 0)
+		unbind_from_irqhandler(priv->irq, dev);
+	kfree(priv);
+	return err;
 }
 
 static int backend_disconnect(struct xenbus_device *dev)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 06/18] xen/pvcalls: handle commands from the frontend
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When the other end notifies us that there are commands to be read
(pvcalls_back_event), schedule the backend work item to parse the
commands.

The command ring works like most other Xen rings, so use the usual
ring macros to read and write to it. The functions implementing the
commands are empty stubs for now.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 115 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 115 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index a48b0d9..9dc8a28 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -52,12 +52,127 @@ struct pvcalls_back_priv {
 	struct work_struct register_work;
 };
 
+static int pvcalls_back_socket(struct xenbus_device *dev,
+		struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_connect(struct xenbus_device *dev,
+				struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_release(struct xenbus_device *dev,
+				struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_bind(struct xenbus_device *dev,
+			     struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_listen(struct xenbus_device *dev,
+			       struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_accept(struct xenbus_device *dev,
+			       struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_poll(struct xenbus_device *dev,
+			     struct xen_pvcalls_request *req)
+{
+	return 0;
+}
+
+static int pvcalls_back_handle_cmd(struct xenbus_device *dev,
+				   struct xen_pvcalls_request *req)
+{
+	int ret = 0;
+
+	switch (req->cmd) {
+	case PVCALLS_SOCKET:
+		ret = pvcalls_back_socket(dev, req);
+		break;
+	case PVCALLS_CONNECT:
+		ret = pvcalls_back_connect(dev, req);
+		break;
+	case PVCALLS_RELEASE:
+		ret = pvcalls_back_release(dev, req);
+		break;
+	case PVCALLS_BIND:
+		ret = pvcalls_back_bind(dev, req);
+		break;
+	case PVCALLS_LISTEN:
+		ret = pvcalls_back_listen(dev, req);
+		break;
+	case PVCALLS_ACCEPT:
+		ret = pvcalls_back_accept(dev, req);
+		break;
+	case PVCALLS_POLL:
+		ret = pvcalls_back_poll(dev, req);
+		break;
+	default:
+		ret = -ENOTSUPP;
+		break;
+	}
+	return ret;
+}
+
 static void pvcalls_back_work(struct work_struct *work)
 {
+	struct pvcalls_back_priv *priv = container_of(work,
+		struct pvcalls_back_priv, register_work);
+	int notify, notify_all = 0, more = 1;
+	struct xen_pvcalls_request req;
+	struct xenbus_device *dev = priv->dev;
+
+	atomic_set(&priv->work, 1);
+
+	while (more || !atomic_dec_and_test(&priv->work)) {
+		while (RING_HAS_UNCONSUMED_REQUESTS(&priv->ring)) {
+			RING_COPY_REQUEST(&priv->ring,
+					  priv->ring.req_cons++,
+					  &req);
+
+			if (!pvcalls_back_handle_cmd(dev, &req)) {
+				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
+					&priv->ring, notify);
+				notify_all += notify;
+			}
+		}
+
+		if (notify_all)
+			notify_remote_via_irq(priv->irq);
+
+		RING_FINAL_CHECK_FOR_REQUESTS(&priv->ring, more);
+	}
 }
 
 static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 {
+	struct xenbus_device *dev = dev_id;
+	struct pvcalls_back_priv *priv = NULL;
+
+	if (dev == NULL)
+		return IRQ_HANDLED;
+
+	priv = dev_get_drvdata(&dev->dev);
+	if (priv == NULL)
+		return IRQ_HANDLED;
+
+	atomic_inc(&priv->work);
+	queue_work(priv->wq, &priv->register_work);
+
 	return IRQ_HANDLED;
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 07/18] xen/pvcalls: implement socket command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Just reply with success to the other end for now. Delay the allocation
of the actual socket to bind and/or connect.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 9dc8a28..fed54bf 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -12,12 +12,17 @@
  * GNU General Public License for more details.
  */
 
+#include <linux/inet.h>
 #include <linux/kthread.h>
 #include <linux/list.h>
 #include <linux/radix-tree.h>
 #include <linux/module.h>
 #include <linux/semaphore.h>
 #include <linux/wait.h>
+#include <net/sock.h>
+#include <net/inet_common.h>
+#include <net/inet_connection_sock.h>
+#include <net/request_sock.h>
 
 #include <xen/events.h>
 #include <xen/grant_table.h>
@@ -55,7 +60,29 @@ struct pvcalls_back_priv {
 static int pvcalls_back_socket(struct xenbus_device *dev,
 		struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	int ret;
+	struct xen_pvcalls_response *rsp;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	if (req->u.socket.domain != AF_INET ||
+	    req->u.socket.type != SOCK_STREAM ||
+	    (req->u.socket.protocol != 0 &&
+	     req->u.socket.protocol != AF_INET))
+		ret = -EAFNOSUPPORT;
+	else
+		ret = 0;
+
+	/* leave the actual socket allocation for later */
+
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.socket.id = req->u.socket.id;
+	rsp->ret = ret;
+
+	return ret;
 }
 
 static int pvcalls_back_connect(struct xenbus_device *dev,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 08/18] xen/pvcalls: implement connect command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Allocate a socket. Keep track of socket <-> ring mappings with a new data
structure, called sock_mapping. Implement the connect command by calling
inet_stream_connect, and mapping the new indexes page and data ring.
Allocate a workqueue and a work_struct, called ioworker, to perform
reads and writes to the socket.

When an active socket is closed (sk_state_change), set in_error to
-ENOTCONN and notify the other end, as specified by the protocol.

sk_data_ready and pvcalls_back_ioworker will be implemented later.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 161 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 161 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index fed54bf..65fbc39 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -57,6 +57,40 @@ struct pvcalls_back_priv {
 	struct work_struct register_work;
 };
 
+struct pvcalls_ioworker {
+	struct work_struct register_work;
+	struct workqueue_struct *wq;
+	unsigned int cpu;
+};
+
+struct sock_mapping {
+	struct list_head list;
+	struct pvcalls_back_priv *priv;
+	struct socket *sock;
+	uint64_t id;
+	grant_ref_t ref;
+	struct pvcalls_data_intf *ring;
+	void *bytes;
+	struct pvcalls_data data;
+	uint32_t ring_order;
+	int irq;
+	atomic_t read;
+	atomic_t write;
+	atomic_t io;
+	atomic_t release;
+	void (*saved_data_ready)(struct sock *sk);
+	struct pvcalls_ioworker ioworker;
+};
+
+static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
+static int pvcalls_back_release_active(struct xenbus_device *dev,
+				       struct pvcalls_back_priv *priv,
+				       struct sock_mapping *map);
+
+static void pvcalls_back_ioworker(struct work_struct *work)
+{
+}
+
 static int pvcalls_back_socket(struct xenbus_device *dev,
 		struct xen_pvcalls_request *req)
 {
@@ -85,9 +119,131 @@ static int pvcalls_back_socket(struct xenbus_device *dev,
 	return ret;
 }
 
+static void pvcalls_sk_state_change(struct sock *sk)
+{
+	struct sock_mapping *map = sk->sk_user_data;
+	struct pvcalls_data_intf *intf;
+
+	if (map == NULL)
+		return;
+
+	intf = map->ring;
+	intf->in_error = -ENOTCONN;
+	notify_remote_via_irq(map->irq);
+}
+
+static void pvcalls_sk_data_ready(struct sock *sk)
+{
+}
+
 static int pvcalls_back_connect(struct xenbus_device *dev,
 				struct xen_pvcalls_request *req)
 {
+	struct pvcalls_back_priv *priv;
+	int ret;
+	struct socket *sock;
+	struct sock_mapping *map = NULL;
+	void *page;
+	struct xen_pvcalls_response *rsp;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
+	if (ret < 0) {
+		kfree(map);
+		goto out;
+	}
+
+	map->priv = priv;
+	map->sock = sock;
+	map->id = req->u.connect.id;
+	map->ref = req->u.connect.ref;
+
+	ret = xenbus_map_ring_valloc(dev, &req->u.connect.ref, 1, &page);
+	if (ret < 0) {
+		sock_release(map->sock);
+		kfree(map);
+		goto out;
+	}
+	map->ring = page;
+	map->ring_order = map->ring->ring_order;
+	/* first read the order, then map the data ring */
+	virt_rmb();
+	if (map->ring_order > MAX_RING_ORDER) {
+		ret = -EFAULT;
+		sock_release(map->sock);
+		xenbus_unmap_ring_vfree(dev, map->ring);
+		kfree(map);
+		goto out;
+	}
+	ret = xenbus_map_ring_valloc(dev, map->ring->ref,
+				     (1 << map->ring_order), &page);
+	if (ret < 0) {
+		sock_release(map->sock);
+		xenbus_unmap_ring_vfree(dev, map->ring);
+		kfree(map);
+		goto out;
+	}
+	map->bytes = page;
+
+	ret = bind_interdomain_evtchn_to_irqhandler(priv->dev->otherend_id,
+						    req->u.connect.evtchn,
+						    pvcalls_back_conn_event,
+						    0,
+						    "pvcalls-backend",
+						    map);
+	if (ret < 0) {
+		sock_release(map->sock);
+		xenbus_unmap_ring_vfree(dev, map->bytes);
+		xenbus_unmap_ring_vfree(dev, map->ring);
+		kfree(map);
+		goto out;
+	}
+	map->irq = ret;
+
+	map->data.in = map->bytes;
+	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
+
+	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
+	if (!map->ioworker.wq) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	map->ioworker.cpu = get_random_int() % num_online_cpus();
+	atomic_set(&map->io, 1);
+	INIT_WORK(&map->ioworker.register_work, pvcalls_back_ioworker);
+
+	down(&priv->socket_lock);
+	list_add_tail(&map->list, &priv->socket_mappings);
+	up(&priv->socket_lock);
+
+	ret = inet_stream_connect(sock, (struct sockaddr *)&req->u.connect.addr,
+				  req->u.connect.len, req->u.connect.flags);
+	if (ret < 0) {
+		pvcalls_back_release_active(dev, priv, map);
+	} else {
+		write_lock_bh(&sock->sk->sk_callback_lock);
+		map->saved_data_ready = sock->sk->sk_data_ready;
+		sock->sk->sk_user_data = map;
+		sock->sk->sk_data_ready = pvcalls_sk_data_ready;
+		sock->sk->sk_state_change = pvcalls_sk_state_change;
+		write_unlock_bh(&sock->sk->sk_callback_lock);
+	}
+
+out:
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.connect.id = req->u.connect.id;
+	rsp->ret = ret;
+
+	return ret;
+}
+
+static int pvcalls_back_release_active(struct xenbus_device *dev,
+				       struct pvcalls_back_priv *priv,
+				       struct sock_mapping *map)
+{
 	return 0;
 }
 
@@ -203,6 +359,11 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
+{
+	return IRQ_HANDLED;
+}
+
 static int backend_connect(struct xenbus_device *dev)
 {
 	int err, evtchn;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 09/18] xen/pvcalls: implement bind command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Allocate a socket. Track the allocated passive sockets with a new data
structure named sockpass_mapping. It contains an unbound workqueue to
schedule delayed work for the accept and poll commands. It also has a
reqcopy field to be used to store a copy of a request for delayed work.
Reads/writes to it are protected by a lock (the "copy_lock" spinlock).
Initialize the workqueue in pvcalls_back_bind.

Implement the bind command with inet_bind.

The pass_sk_data_ready event handler will be added later.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 87 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 86 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 65fbc39..d3278bd 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -82,6 +82,18 @@ struct sock_mapping {
 	struct pvcalls_ioworker ioworker;
 };
 
+struct sockpass_mapping {
+	struct list_head list;
+	struct pvcalls_back_priv *priv;
+	struct socket *sock;
+	uint64_t id;
+	struct xen_pvcalls_request reqcopy;
+	spinlock_t copy_lock;
+	struct workqueue_struct *wq;
+	struct work_struct register_work;
+	void (*saved_data_ready)(struct sock *sk);
+};
+
 static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map);
 static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_back_priv *priv,
@@ -253,10 +265,83 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 	return 0;
 }
 
+static void __pvcalls_back_accept(struct work_struct *work)
+{
+}
+
+static void pvcalls_pass_sk_data_ready(struct sock *sock)
+{
+}
+
 static int pvcalls_back_bind(struct xenbus_device *dev,
 			     struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	int ret, err;
+	struct socket *sock;
+	struct sockpass_mapping *map = NULL;
+	struct xen_pvcalls_response *rsp;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	INIT_WORK(&map->register_work, __pvcalls_back_accept);
+	spin_lock_init(&map->copy_lock);
+	map->wq = alloc_workqueue("pvcalls_wq", WQ_UNBOUND, 1);
+	if (!map->wq) {
+		ret = -ENOMEM;
+		kfree(map);
+		goto out;
+	}
+
+	ret = sock_create(AF_INET, SOCK_STREAM, 0, &sock);
+	if (ret < 0) {
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	ret = inet_bind(sock, (struct sockaddr *)&req->u.bind.addr,
+			req->u.bind.len);
+	if (ret < 0) {
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	map->priv = priv;
+	map->sock = sock;
+	map->id = req->u.bind.id;
+
+	down(&priv->socket_lock);
+	err = radix_tree_insert(&priv->socketpass_mappings, map->id,
+				map);
+	up(&priv->socket_lock);
+	if (err) {
+		ret = err;
+		destroy_workqueue(map->wq);
+		kfree(map);
+		goto out;
+	}
+
+	write_lock_bh(&sock->sk->sk_callback_lock);
+	map->saved_data_ready = sock->sk->sk_data_ready;
+	sock->sk->sk_user_data = map;
+	sock->sk->sk_data_ready = pvcalls_pass_sk_data_ready;
+	write_unlock_bh(&sock->sk->sk_callback_lock);
+
+out:
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.bind.id = req->u.bind.id;
+	rsp->ret = ret;
+	return ret;
 }
 
 static int pvcalls_back_listen(struct xenbus_device *dev,
-- 
1.9.1
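
The bind path above boils down to creating an AF_INET stream socket and binding it to the frontend-supplied address, i.e. sock_create() followed by inet_bind(). The same sequence in userspace, as a hedged sketch (helper names are invented for illustration):

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create an AF_INET stream socket and bind it to the given loopback
 * port (0 = let the kernel pick), mirroring the sock_create() +
 * inet_bind() sequence in pvcalls_back_bind(). Returns the fd, or -1
 * on error. */
int bind_stream_socket(uint16_t port)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(port);

	/* On failure, release the socket, like the backend's error path. */
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Report the port the socket actually got, via getsockname(). */
int bound_port(int fd)
{
	struct sockaddr_in addr;
	socklen_t len = sizeof(addr);

	if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0)
		return -1;
	return ntohs(addr.sin_port);
}
```

In the backend the bound socket is then stashed in the socketpass_mappings radix tree keyed by the frontend-chosen id, so later listen/accept/poll requests can find it.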


* [PATCH v2 10/18] xen/pvcalls: implement listen command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Call inet_listen to implement the listen command.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index d3278bd..de82bf5 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -347,7 +347,26 @@ static int pvcalls_back_bind(struct xenbus_device *dev,
 static int pvcalls_back_listen(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	int ret = -EINVAL;
+	struct sockpass_mapping *map;
+	struct xen_pvcalls_response *rsp;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	map = radix_tree_lookup(&priv->socketpass_mappings, req->u.listen.id);
+	if (map == NULL)
+		goto out;
+
+	ret = inet_listen(map->sock, req->u.listen.backlog);
+
+out:
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.listen.id = req->u.listen.id;
+	rsp->ret = ret;
+	return ret;
 }
 
 static int pvcalls_back_accept(struct xenbus_device *dev,
-- 
1.9.1
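
The listen command is a thin wrapper: look up the passive socket by id and call inet_listen() with the frontend's backlog. A userspace sketch of the same transition, using SO_ACCEPTCONN to observe that listen() marked the socket passive (helper names are invented for illustration):

```c
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create, bind (kernel-chosen loopback port) and listen, mirroring what
 * the frontend achieves with the bind + listen commands back to back. */
int make_listener(int backlog)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, backlog) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* SO_ACCEPTCONN reports whether listen() marked the socket passive. */
int is_listening(int fd)
{
	int val = 0;
	socklen_t len = sizeof(val);

	if (getsockopt(fd, SOL_SOCKET, SO_ACCEPTCONN, &val, &len) < 0)
		return -1;
	return val != 0;
}
```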


* [PATCH v2 11/18] xen/pvcalls: implement accept command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement the accept command by calling inet_accept. To avoid blocking
in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
scheduled on sk_data_ready (for a passive socket, it means that there
are connections to accept).

Use the reqcopy field to store the request. Accept the new socket from
the delayed work function, create a new sock_mapping for it, map
the indexes page and data ring, and reply to the other end. Allocate an
ioworker for the socket.

Only support one outstanding blocking accept request for every socket at
any time.
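
The single-outstanding-request rule is enforced through reqcopy.cmd: the slot is free when cmd is 0, and claiming it happens under copy_lock. A lock-free userspace sketch of the same claim/release discipline, using C11 atomics instead of a spinlock (struct and function names are invented for illustration):

```c
#include <assert.h>
#include <stdatomic.h>

/* One outstanding accept/poll per passive socket: the request slot is
 * free when cmd == 0, claimed otherwise. */
struct req_slot {
	atomic_int cmd;
};

/* Claim the slot for command "cmd"; fail, like the backend's -EINTR
 * path, if another request already owns it. */
int claim_slot(struct req_slot *s, int cmd)
{
	int expected = 0;

	if (atomic_compare_exchange_strong(&s->cmd, &expected, cmd))
		return 0;
	return -1;
}

/* Release the slot once the delayed response has been sent, as
 * __pvcalls_back_accept does by zeroing reqcopy.cmd. */
void release_slot(struct req_slot *s)
{
	atomic_store(&s->cmd, 0);
}
```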

Add a field to sock_mapping to remember the passive socket from which an
active socket was created.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 161 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 160 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index de82bf5..bc641a8 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -66,6 +66,7 @@ struct pvcalls_ioworker {
 struct sock_mapping {
 	struct list_head list;
 	struct pvcalls_back_priv *priv;
+	struct sockpass_mapping *sockpass;
 	struct socket *sock;
 	uint64_t id;
 	grant_ref_t ref;
@@ -267,10 +268,131 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 
 static void __pvcalls_back_accept(struct work_struct *work)
 {
+	struct sockpass_mapping *mappass = container_of(
+		work, struct sockpass_mapping, register_work);
+	struct sock_mapping *map;
+	struct pvcalls_ioworker *iow;
+	struct pvcalls_back_priv *priv;
+	struct xen_pvcalls_response *rsp;
+	struct xen_pvcalls_request *req;
+	void *page = NULL;
+	int notify;
+	int ret = -EINVAL;
+	unsigned long flags;
+
+	priv = mappass->priv;
+	/* We only need to check the value of "cmd" atomically on read. */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	req = &mappass->reqcopy;
+	if (req->cmd != PVCALLS_ACCEPT) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (map == NULL) {
+		ret = -ENOMEM;
+		goto out_error;
+	}
+
+	map->sock = sock_alloc();
+	if (!map->sock)
+		goto out_error;
+
+	map->ref = req->u.accept.ref;
+
+	map->priv = priv;
+	map->sockpass = mappass;
+	map->sock->type = mappass->sock->type;
+	map->sock->ops = mappass->sock->ops;
+	map->id = req->u.accept.id_new;
+
+	ret = xenbus_map_ring_valloc(priv->dev, &req->u.accept.ref, 1, &page);
+	if (ret < 0)
+		goto out_error;
+	map->ring = page;
+	map->ring_order = map->ring->ring_order;
+	/* first read the order, then map the data ring */
+	virt_rmb();
+	if (map->ring_order > MAX_RING_ORDER) {
+		ret = -EFAULT;
+		goto out_error;
+	}
+	ret = xenbus_map_ring_valloc(priv->dev, map->ring->ref,
+				     (1 << map->ring_order), &page);
+	if (ret < 0)
+		goto out_error;
+	map->bytes = page;
+
+	ret = bind_interdomain_evtchn_to_irqhandler(priv->dev->otherend_id,
+						    req->u.accept.evtchn,
+						    pvcalls_back_conn_event,
+						    0,
+						    "pvcalls-backend",
+						    map);
+	if (ret < 0)
+		goto out_error;
+	map->irq = ret;
+
+	map->data.in = map->bytes;
+	map->data.out = map->bytes + XEN_FLEX_RING_SIZE(map->ring_order);
+
+	map->ioworker.wq = alloc_workqueue("pvcalls_io", WQ_UNBOUND, 1);
+	if (!map->ioworker.wq) {
+		ret = -ENOMEM;
+		goto out_error;
+	}
+	map->ioworker.cpu = get_random_int() % num_online_cpus();
+	atomic_set(&map->io, 1);
+	INIT_WORK(&map->ioworker.register_work, pvcalls_back_ioworker);
+
+	down(&priv->socket_lock);
+	list_add_tail(&map->list, &priv->socket_mappings);
+	up(&priv->socket_lock);
+
+	ret = inet_accept(mappass->sock, map->sock, O_NONBLOCK, true);
+	if (ret == -EAGAIN)
+		goto out_error;
+
+	write_lock_bh(&map->sock->sk->sk_callback_lock);
+	map->saved_data_ready = map->sock->sk->sk_data_ready;
+	map->sock->sk->sk_user_data = map;
+	map->sock->sk->sk_data_ready = pvcalls_sk_data_ready;
+	map->sock->sk->sk_state_change = pvcalls_sk_state_change;
+	write_unlock_bh(&map->sock->sk->sk_callback_lock);
+
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work_on(iow->cpu, iow->wq, &iow->register_work);
+
+out_error:
+	if (ret < 0)
+		pvcalls_back_release_active(priv->dev, priv, map);
+
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&priv->ring, notify);
+	if (notify)
+		notify_remote_via_irq(priv->irq);
+
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	mappass->reqcopy.cmd = 0;
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
 }
 
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
+	struct sockpass_mapping *mappass = sock->sk_user_data;
+
+	if (mappass == NULL)
+		return;
+
+	queue_work(mappass->wq, &mappass->register_work);
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -372,7 +494,44 @@ static int pvcalls_back_listen(struct xenbus_device *dev,
 static int pvcalls_back_accept(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	struct sockpass_mapping *mappass;
+	int ret = -EINVAL;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	mappass = radix_tree_lookup(&priv->socketpass_mappings,
+		req->u.accept.id);
+	if (mappass == NULL)
+		goto out_error;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		ret = -EINTR;
+		goto out_error;
+	}
+
+	mappass->reqcopy = *req;
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+	queue_work(mappass->wq, &mappass->register_work);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out_error:
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
+	return ret;
 }
 
 static int pvcalls_back_poll(struct xenbus_device *dev,
-- 
1.9.1
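
The key trick above is that inet_accept is called with O_NONBLOCK, so when sk_data_ready fires spuriously and the accept queue is empty, the worker gets -EAGAIN back instead of sleeping. The userspace analogue, sketched with a non-blocking listener (helper names are invented for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Listening socket on loopback with O_NONBLOCK set, so accept() never
 * sleeps -- the userspace analogue of inet_accept(..., O_NONBLOCK, ...). */
int make_nonblocking_listener(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;

	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 8) < 0 ||
	    fcntl(fd, F_SETFL, O_NONBLOCK) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* Returns the accepted fd, or -EAGAIN when the accept queue is empty --
 * the case __pvcalls_back_accept bails out on. */
int try_accept(int lfd)
{
	int cfd = accept(lfd, NULL, NULL);

	if (cfd < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
		return -EAGAIN;
	return cfd;
}
```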


* [PATCH v2 12/18] xen/pvcalls: implement poll command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement poll on passive sockets by requesting a delayed response with
mappass->reqcopy, and reply back when there is data on the passive
socket.

Poll on an active socket is unimplemented, as per the spec: the frontend
should just wait for events and check the indexes on the indexes page.

Only support one outstanding poll (or accept) request for every passive
socket at any given time.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 75 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 73 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index bc641a8..5ff1504 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -388,11 +388,33 @@ static void __pvcalls_back_accept(struct work_struct *work)
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
 	struct sockpass_mapping *mappass = sock->sk_user_data;
+	struct pvcalls_back_priv *priv;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+	int notify;
 
 	if (mappass == NULL)
 		return;
 
-	queue_work(mappass->wq, &mappass->register_work);
+	priv = mappass->priv;
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd == PVCALLS_POLL) {
+		rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+		rsp->req_id = mappass->reqcopy.req_id;
+		rsp->u.poll.id = mappass->reqcopy.u.poll.id;
+		rsp->cmd = mappass->reqcopy.cmd;
+		rsp->ret = 0;
+
+		mappass->reqcopy.cmd = 0;
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&priv->ring, notify);
+		if (notify)
+			notify_remote_via_irq(mappass->priv->irq);
+	} else {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		queue_work(mappass->wq, &mappass->register_work);
+	}
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -537,7 +559,56 @@ static int pvcalls_back_accept(struct xenbus_device *dev,
 static int pvcalls_back_poll(struct xenbus_device *dev,
 			     struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	struct sockpass_mapping *mappass;
+	struct xen_pvcalls_response *rsp;
+	struct inet_connection_sock *icsk;
+	struct request_sock_queue *queue;
+	unsigned long flags;
+	int ret;
+	bool data;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	mappass = radix_tree_lookup(&priv->socketpass_mappings, req->u.poll.id);
+	if (mappass == NULL)
+		return -EINVAL;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		ret = -EINTR;
+		goto out;
+	}
+
+	mappass->reqcopy = *req;
+	icsk = inet_csk(mappass->sock->sk);
+	queue = &icsk->icsk_accept_queue;
+	spin_lock(&queue->rskq_lock);
+	data = queue->rskq_accept_head != NULL;
+	spin_unlock(&queue->rskq_lock);
+	if (data) {
+		mappass->reqcopy.cmd = 0;
+		ret = 0;
+		goto out;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out:
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.poll.id = req->u.poll.id;
+	rsp->ret = ret;
+	return ret;
 }
 
 static int pvcalls_back_handle_cmd(struct xenbus_device *dev,
-- 
1.9.1
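
The immediate-data check in pvcalls_back_poll peeks at icsk_accept_queue under rskq_lock: if a connection is already queued, reply right away; otherwise park the request in reqcopy. In userspace the same "is the accept queue non-empty, without blocking" question can be asked with poll() and a zero timeout, sketched here (helper names are invented for illustration):

```c
#include <assert.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Loopback listener bound to a kernel-chosen port. */
int make_listener(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;
	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
	    listen(fd, 8) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

int bound_port(int fd)
{
	struct sockaddr_in addr;
	socklen_t len = sizeof(addr);
	if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0)
		return -1;
	return ntohs(addr.sin_port);
}

/* Blocking connect to the listener; on loopback the handshake completes
 * as soon as the connection lands in the accept queue, no accept() needed. */
int connect_client(int port)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		return -1;
	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(port);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

/* poll() with a zero timeout: POLLIN on a listening socket means the
 * accept queue is non-empty -- the condition the backend reads from
 * icsk_accept_queue under rskq_lock. */
int has_pending_connection(int lfd)
{
	struct pollfd p = { .fd = lfd, .events = POLLIN };
	int n = poll(&p, 1, 0);
	return (n > 0 && (p.revents & POLLIN)) ? 1 : 0;
}
```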


* [PATCH v2 13/18] xen/pvcalls: implement release command
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22   ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Release both active and passive sockets. For active sockets, take care
to avoid conflicts with an ioworker that may be reading from or writing
to the socket concurrently: atomically set map->release to let the
ioworker know that the socket is about to be released, then wait until
the ioworker finishes (flush_work).

Unmap the indexes page and the data rings.
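
The release handshake can be sketched in plain userspace C. This is a
hypothetical model, not the kernel code: `struct conn`, `worker_step`,
and `release_conn` are illustrative names, and `worker_step` stands in
for the ioworker plus flush_work.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical userspace model of the release handshake: the worker
 * checks the release flag before touching the socket, and release sets
 * the flag first, then waits for the worker (the patch uses flush_work
 * for the wait). */
struct conn {
	atomic_int release;
	atomic_int io;
	int freed;
};

/* One pass of the worker: bail out as soon as release is requested. */
static void worker_step(struct conn *c)
{
	if (atomic_load(&c->release) > 0) {
		atomic_store(&c->release, 0);
		return;			/* stop touching the connection */
	}
	atomic_fetch_sub(&c->io, 1);
}

static void release_conn(struct conn *c)
{
	atomic_store(&c->release, 1);	/* 1. signal the worker */
	worker_step(c);			/* 2. stands in for flush_work() */
	c->freed = 1;			/* 3. now safe to free resources */
}
```

The ordering matters: the flag is set before the wait, so the worker is
guaranteed to observe it on its next pass and stop before the resources
go away.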

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 72 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 71 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 5ff1504..363a550 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -257,13 +257,83 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_back_priv *priv,
 				       struct sock_mapping *map)
 {
+	disable_irq(map->irq);
+	if (map->sock->sk != NULL) {
+		write_lock_bh(&map->sock->sk->sk_callback_lock);
+		map->sock->sk->sk_user_data = NULL;
+		map->sock->sk->sk_data_ready = map->saved_data_ready;
+		write_unlock_bh(&map->sock->sk->sk_callback_lock);
+	}
+
+	atomic_set(&map->release, 1);
+	flush_work(&map->ioworker.register_work);
+
+	down(&priv->socket_lock);
+	list_del(&map->list);
+	up(&priv->socket_lock);
+
+	xenbus_unmap_ring_vfree(dev, (void *)map->bytes);
+	xenbus_unmap_ring_vfree(dev, (void *)map->ring);
+	unbind_from_irqhandler(map->irq, map);
+
+	sock_release(map->sock);
+	kfree(map);
+
+	return 0;
+}
+
+static int pvcalls_back_release_passive(struct xenbus_device *dev,
+					struct pvcalls_back_priv *priv,
+					struct sockpass_mapping *mappass)
+{
+	if (mappass->sock->sk != NULL) {
+		write_lock_bh(&mappass->sock->sk->sk_callback_lock);
+		mappass->sock->sk->sk_user_data = NULL;
+		mappass->sock->sk->sk_data_ready = mappass->saved_data_ready;
+		write_unlock_bh(&mappass->sock->sk->sk_callback_lock);
+	}
+	down(&priv->socket_lock);
+	radix_tree_delete(&priv->socketpass_mappings, mappass->id);
+	sock_release(mappass->sock);
+	flush_workqueue(mappass->wq);
+	destroy_workqueue(mappass->wq);
+	kfree(mappass);
+	up(&priv->socket_lock);
+
 	return 0;
 }
 
 static int pvcalls_back_release(struct xenbus_device *dev,
 				struct xen_pvcalls_request *req)
 {
-	return 0;
+	struct pvcalls_back_priv *priv;
+	struct sock_mapping *map, *n;
+	struct sockpass_mapping *mappass;
+	int ret = 0;
+	struct xen_pvcalls_response *rsp;
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	list_for_each_entry_safe(map, n, &priv->socket_mappings, list) {
+		if (map->id == req->u.release.id) {
+			ret = pvcalls_back_release_active(dev, priv, map);
+			goto out;
+		}
+	}
+	mappass = radix_tree_lookup(&priv->socketpass_mappings,
+				    req->u.release.id);
+	if (mappass != NULL) {
+		ret = pvcalls_back_release_passive(dev, priv, mappass);
+		goto out;
+	}
+
+out:
+	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->u.release.id = req->u.release.id;
+	rsp->cmd = req->cmd;
+	rsp->ret = ret;
+	return ret;
 }
 
 static void __pvcalls_back_accept(struct work_struct *work)
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 14/18] xen/pvcalls: disconnect and module_exit
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (15 preceding siblings ...)
@ 2017-05-19 23:22   ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Implement backend_disconnect. Call pvcalls_back_release_active on active
sockets and pvcalls_back_release_passive on passive sockets.

Implement module_exit by calling backend_disconnect on frontend
connections.
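
The list walk in backend_disconnect can be modeled in userspace C. This
is a hypothetical sketch (`struct mapping` and `release_all` are
illustrative names): since releasing a node removes it from the list,
the successor must be saved before the node is freed, which is the same
reason the patch uses list_for_each_entry_safe.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for releasing every per-socket mapping of a
 * frontend before tearing down the shared ring and workqueue. */
struct mapping {
	struct mapping *next;
};

static int release_all(struct mapping **head)
{
	struct mapping *m = *head, *next;
	int released = 0;

	while (m != NULL) {
		next = m->next;	/* grab the successor before freeing */
		free(m);	/* stands in for pvcalls_back_release_* */
		released++;
		m = next;
	}
	*head = NULL;
	return released;
}
```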

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 363a550..6d3d520 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -842,6 +842,38 @@ static int backend_connect(struct xenbus_device *dev)
 
 static int backend_disconnect(struct xenbus_device *dev)
 {
+	struct pvcalls_back_priv *priv;
+	struct sock_mapping *map, *n;
+	struct sockpass_mapping *mappass;
+	struct radix_tree_iter iter;
+	void **slot;
+
+
+	priv = dev_get_drvdata(&dev->dev);
+
+	list_for_each_entry_safe(map, n, &priv->socket_mappings, list) {
+		pvcalls_back_release_active(dev, priv, map);
+	}
+
+	radix_tree_for_each_slot(slot, &priv->socketpass_mappings, &iter, 0) {
+		mappass = radix_tree_deref_slot(slot);
+		if (!mappass || radix_tree_exception(mappass)) {
+			if (radix_tree_deref_retry(mappass)) {
+				slot = radix_tree_iter_retry(&iter);
+				continue;
+			}
+		} else
+			pvcalls_back_release_passive(dev, priv, mappass);
+	}
+
+	xenbus_unmap_ring_vfree(dev, (void *)priv->sring);
+	unbind_from_irqhandler(priv->irq, dev);
+
+	list_del(&priv->list);
+	destroy_workqueue(priv->wq);
+	kfree(priv);
+	dev_set_drvdata(&dev->dev, NULL);
+
 	return 0;
 }
 
@@ -1018,3 +1050,20 @@ static int __init pvcalls_back_init(void)
 	return 0;
 }
 module_init(pvcalls_back_init);
+
+static void __exit pvcalls_back_fin(void)
+{
+	struct pvcalls_back_priv *priv, *npriv;
+
+	down(&pvcalls_back_global.frontends_lock);
+	list_for_each_entry_safe(priv, npriv, &pvcalls_back_global.frontends,
+				 list) {
+		backend_disconnect(priv->dev);
+	}
+	up(&pvcalls_back_global.frontends_lock);
+
+	xenbus_unregister_driver(&pvcalls_back_driver);
+	memset(&pvcalls_back_global, 0, sizeof(pvcalls_back_global));
+}
+
+module_exit(pvcalls_back_fin);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 15/18] xen/pvcalls: implement the ioworker functions
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

We have one ioworker per socket. Each ioworker goes through the list of
outstanding read/write requests. Once all requests have been dealt with,
it returns.

We use one atomic counter per socket for "read" operations and one
for "write" operations to keep track of the reads/writes to do.

We also use one atomic counter ("io") per ioworker to keep track of how
many outstanding requests we have in total assigned to the ioworker. The
ioworker finishes when there are none.
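
The counter-driven loop can be sketched as a userspace C model. All
names here are hypothetical; one simplification is hedged in the
comments: in the patch the read/write handlers clear their own
counters, while this model uses atomic_exchange to test and clear in
one step.

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical model of the ioworker: "io" counts outstanding
 * requests, "read" and "write" record which directions need service,
 * and "release" aborts the loop when the socket is going away. */
struct sock_model {
	atomic_int io, read, write, release;
	int reads_done, writes_done;
};

static void ioworker(struct sock_model *m)
{
	while (atomic_load(&m->io) > 0) {
		if (atomic_load(&m->release) > 0) {
			atomic_store(&m->release, 0);
			return;		/* socket is being released */
		}
		if (atomic_exchange(&m->read, 0) > 0)
			m->reads_done++;	/* would call the read handler */
		if (atomic_exchange(&m->write, 0) > 0)
			m->writes_done++;	/* would call the write handler */
		atomic_fetch_sub(&m->io, 1);
	}
}
```

Because "io" is incremented once per notification and decremented once
per pass, the worker drains all pending work and then returns instead
of spinning.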

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 6d3d520..17625c7 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -100,8 +100,35 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 				       struct pvcalls_back_priv *priv,
 				       struct sock_mapping *map);
 
+static void pvcalls_conn_back_read(unsigned long opaque)
+{
+}
+
+static int pvcalls_conn_back_write(struct sock_mapping *map)
+{
+	return 0;
+}
+
 static void pvcalls_back_ioworker(struct work_struct *work)
 {
+	struct pvcalls_ioworker *ioworker = container_of(work,
+		struct pvcalls_ioworker, register_work);
+	struct sock_mapping *map = container_of(ioworker, struct sock_mapping,
+		ioworker);
+
+	while (atomic_read(&map->io) > 0) {
+		if (atomic_read(&map->release) > 0) {
+			atomic_set(&map->release, 0);
+			return;
+		}
+
+		if (atomic_read(&map->read) > 0)
+			pvcalls_conn_back_read((unsigned long)map);
+		if (atomic_read(&map->write) > 0)
+			pvcalls_conn_back_write(map);
+
+		atomic_dec(&map->io);
+	}
 }
 
 static int pvcalls_back_socket(struct xenbus_device *dev,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 16/18] xen/pvcalls: implement read
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (19 preceding siblings ...)
@ 2017-05-19 23:22   ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When an active socket has data available, increment the io and read
counters, and schedule the ioworker.

Implement the read function by reading from the socket and writing the
data to the data ring.

Set in_error on error.
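
The ring arithmetic behind the read path can be sketched in userspace
C. This is a hypothetical model under the patch's assumptions: indexes
are free-running, the array size is a power of two, and a transfer that
crosses the end of the array is split into two chunks (the two kvec
segments). `pv_mask`, `pv_queued`, and `pv_split` are illustrative
names.

```c
#include <assert.h>
#include <stdint.h>

static uint32_t pv_mask(uint32_t idx, uint32_t size)
{
	return idx & (size - 1);	/* size must be a power of two */
}

static uint32_t pv_queued(uint32_t prod, uint32_t cons, uint32_t size)
{
	(void)size;		/* unsigned wrap-around makes prod - cons valid */
	return prod - cons;
}

/* Split "wanted" bytes starting at prod into at most two chunks. */
static int pv_split(uint32_t prod, uint32_t cons, uint32_t size,
		    uint32_t wanted, uint32_t len[2])
{
	uint32_t mp = pv_mask(prod, size), mc = pv_mask(cons, size);

	if (mp < mc) {			/* free space is one contiguous span */
		len[0] = wanted;
		return 1;
	}
	len[0] = size - mp;		/* tail of the array ... */
	len[1] = wanted - len[0];	/* ... then wrap to the front */
	return 2;
}
```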

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 85 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 17625c7..59e0971 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -102,6 +102,81 @@ static int pvcalls_back_release_active(struct xenbus_device *dev,
 
 static void pvcalls_conn_back_read(unsigned long opaque)
 {
+	struct sock_mapping *map = (struct sock_mapping *)opaque;
+	struct msghdr msg;
+	struct kvec vec[2];
+	RING_IDX cons, prod, size, wanted, array_size, masked_prod, masked_cons;
+	int32_t error;
+	struct pvcalls_data_intf *intf = map->ring;
+	struct pvcalls_data *data = &map->data;
+	unsigned long flags;
+	int ret;
+
+	array_size = XEN_FLEX_RING_SIZE(map->ring_order);
+	cons = intf->in_cons;
+	prod = intf->in_prod;
+	error = intf->in_error;
+	/* read the indexes first, then deal with the data */
+	virt_mb();
+
+	if (error)
+		return;
+
+	size = pvcalls_queued(prod, cons, array_size);
+	if (size >= array_size)
+		return;
+	spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+	if (skb_queue_empty(&map->sock->sk->sk_receive_queue)) {
+		atomic_set(&map->read, 0);
+		spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock,
+				flags);
+		return;
+	}
+	spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags);
+	wanted = array_size - size;
+	masked_prod = pvcalls_mask(prod, array_size);
+	masked_cons = pvcalls_mask(cons, array_size);
+
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_iter.type = ITER_KVEC|WRITE;
+	msg.msg_iter.count = wanted;
+	if (masked_prod < masked_cons) {
+		vec[0].iov_base = data->in + masked_prod;
+		vec[0].iov_len = wanted;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 1;
+	} else {
+		vec[0].iov_base = data->in + masked_prod;
+		vec[0].iov_len = array_size - masked_prod;
+		vec[1].iov_base = data->in;
+		vec[1].iov_len = wanted - vec[0].iov_len;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 2;
+	}
+
+	atomic_set(&map->read, 0);
+	ret = inet_recvmsg(map->sock, &msg, wanted, MSG_DONTWAIT);
+	WARN_ON(ret > 0 && ret > wanted);
+	if (ret == -EAGAIN) /* shouldn't happen */
+		return;
+	if (!ret)
+		ret = -ENOTCONN;
+	spin_lock_irqsave(&map->sock->sk->sk_receive_queue.lock, flags);
+	if (ret > 0 && !skb_queue_empty(&map->sock->sk->sk_receive_queue))
+		atomic_inc(&map->read);
+	spin_unlock_irqrestore(&map->sock->sk->sk_receive_queue.lock, flags);
+
+	/* write the data, then modify the indexes */
+	virt_wmb();
+	if (ret < 0)
+		intf->in_error = ret;
+	else
+		intf->in_prod = prod + ret;
+	/* update the indexes, then notify the other end */
+	virt_wmb();
+	notify_remote_via_irq(map->irq);
+
+	return;
 }
 
 static int pvcalls_conn_back_write(struct sock_mapping *map)
@@ -174,6 +249,16 @@ static void pvcalls_sk_state_change(struct sock *sock)
 
 static void pvcalls_sk_data_ready(struct sock *sock)
 {
+	struct sock_mapping *map = sock->sk_user_data;
+	struct pvcalls_ioworker *iow;
+
+	if (map == NULL)
+		return;
+
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work_on(iow->cpu, iow->wq, &iow->register_work);
 }
 
 static int pvcalls_back_connect(struct xenbus_device *dev,
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 17/18] xen/pvcalls: implement write
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (20 preceding siblings ...)
@ 2017-05-19 23:22   ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

When the other end notifies us that there is data to be written
(pvcalls_back_conn_event), increment the io and write counters, and
schedule the ioworker.

Implement the write function, called by the ioworker, by reading the
data from the data ring and writing it to the socket with inet_sendmsg.

Set out_error on error.
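
The retry logic in pvcalls_conn_back_write can be modeled in userspace
C. This is a hypothetical sketch (`struct wstate`, `model_write`, and
`MODEL_EAGAIN` are illustrative names): a would-block or short send
re-arms the write and io counters so the ioworker revisits this socket
instead of losing the remaining data.

```c
#include <assert.h>
#include <stdatomic.h>

#define MODEL_EAGAIN (-11)	/* stand-in for -EAGAIN */

struct wstate {
	atomic_int write, io;
};

/* "sent" models the return value of inet_sendmsg for a "size"-byte
 * request: negative error, or the number of bytes actually consumed. */
static int model_write(struct wstate *s, int size, int sent)
{
	atomic_store(&s->write, 0);		/* consume the pending flag */
	if (sent == MODEL_EAGAIN || sent < size) {
		atomic_fetch_add(&s->write, 1);	/* retry later */
		atomic_fetch_add(&s->io, 1);
	}
	return sent;
}
```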

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 74 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 59e0971..2bd246c 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -181,7 +181,66 @@ static void pvcalls_conn_back_read(unsigned long opaque)
 
 static int pvcalls_conn_back_write(struct sock_mapping *map)
 {
-	return 0;
+	struct pvcalls_data_intf *intf = map->ring;
+	struct pvcalls_data *data = &map->data;
+	struct msghdr msg;
+	struct kvec vec[2];
+	RING_IDX cons, prod, size, ring_size;
+	int ret;
+
+	cons = intf->out_cons;
+	prod = intf->out_prod;
+	/* read the indexes before dealing with the data */
+	virt_mb();
+
+	ring_size = XEN_FLEX_RING_SIZE(map->ring_order);
+	size = pvcalls_queued(prod, cons, ring_size);
+	if (size == 0)
+		return 0;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_flags |= MSG_DONTWAIT;
+	msg.msg_iter.type = ITER_KVEC|READ;
+	msg.msg_iter.count = size;
+	if (pvcalls_mask(prod, ring_size) > pvcalls_mask(cons, ring_size)) {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, ring_size);
+		vec[0].iov_len = size;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 1;
+	} else {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, ring_size);
+		vec[0].iov_len = ring_size - pvcalls_mask(cons, ring_size);
+		vec[1].iov_base = data->out;
+		vec[1].iov_len = size - vec[0].iov_len;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 2;
+	}
+
+	atomic_set(&map->write, 0);
+	ret = inet_sendmsg(map->sock, &msg, size);
+	if (ret == -EAGAIN || ret < size) {
+		atomic_inc(&map->write);
+		atomic_inc(&map->io);
+	}
+	if (ret == -EAGAIN)
+		return ret;
+
+	/* write the data, then update the indexes */
+	virt_wmb();
+	if (ret < 0) {
+		intf->out_error = ret;
+	} else {
+		intf->out_error = 0;
+		intf->out_cons = cons + ret;
+		prod = intf->out_prod;
+	}
+	/* update the indexes, then notify the other end */
+	virt_wmb();
+	if (prod != cons + ret)
+		atomic_inc(&map->write);
+	notify_remote_via_irq(map->irq);
+
+	return ret;
 }
 
 static void pvcalls_back_ioworker(struct work_struct *work)
@@ -877,6 +936,19 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 
 static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
 {
+	struct sock_mapping *map = sock_map;
+	struct pvcalls_ioworker *iow;
+
+	if (map == NULL || map->sock == NULL || map->sock->sk == NULL ||
+		map->sock->sk->sk_user_data != map)
+		return IRQ_HANDLED;
+
+	iow = &map->ioworker;
+
+	atomic_inc(&map->write);
+	atomic_inc(&map->io);
+	queue_work_on(iow->cpu, iow->wq, &iow->register_work);
+
 	return IRQ_HANDLED;
 }
 
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 69+ messages in thread

* [PATCH v2 17/18] xen/pvcalls: implement write
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (21 preceding siblings ...)
  (?)
@ 2017-05-19 23:22   ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: jgross, Stefano Stabellini, boris.ostrovsky, sstabellini, linux-kernel

When the other end notifies us that there is data to be written
(pvcalls_back_conn_event), increment the io and write counters, and
schedule the ioworker.

Implement the write function called by ioworker by reading the data from
the data ring, writing it to the socket by calling inet_sendmsg.

Set out_error on error.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 74 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 73 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 59e0971..2bd246c 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -181,7 +181,66 @@ static void pvcalls_conn_back_read(unsigned long opaque)
 
 static int pvcalls_conn_back_write(struct sock_mapping *map)
 {
-	return 0;
+	struct pvcalls_data_intf *intf = map->ring;
+	struct pvcalls_data *data = &map->data;
+	struct msghdr msg;
+	struct kvec vec[2];
+	RING_IDX cons, prod, size, ring_size;
+	int ret;
+
+	cons = intf->out_cons;
+	prod = intf->out_prod;
+	/* read the indexes before dealing with the data */
+	virt_mb();
+
+	ring_size = XEN_FLEX_RING_SIZE(map->ring_order);
+	size = pvcalls_queued(prod, cons, ring_size);
+	if (size == 0)
+		return 0;
+
+	memset(&msg, 0, sizeof(msg));
+	msg.msg_flags |= MSG_DONTWAIT;
+	msg.msg_iter.type = ITER_KVEC|READ;
+	msg.msg_iter.count = size;
+	if (pvcalls_mask(prod, ring_size) > pvcalls_mask(cons, ring_size)) {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, ring_size);
+		vec[0].iov_len = size;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 1;
+	} else {
+		vec[0].iov_base = data->out + pvcalls_mask(cons, ring_size);
+		vec[0].iov_len = ring_size - pvcalls_mask(cons, ring_size);
+		vec[1].iov_base = data->out;
+		vec[1].iov_len = size - vec[0].iov_len;
+		msg.msg_iter.kvec = vec;
+		msg.msg_iter.nr_segs = 2;
+	}
+
+	atomic_set(&map->write, 0);
+	ret = inet_sendmsg(map->sock, &msg, size);
+	if (ret == -EAGAIN || ret < size) {
+		atomic_inc(&map->write);
+		atomic_inc(&map->io);
+	}
+	if (ret == -EAGAIN)
+		return ret;
+
+	/* write the data, then update the indexes */
+	virt_wmb();
+	if (ret < 0) {
+		intf->out_error = ret;
+	} else {
+		intf->out_error = 0;
+		intf->out_cons = cons + ret;
+		prod = intf->out_prod;
+	}
+	/* update the indexes, then notify the other end */
+	virt_wmb();
+	if (prod != cons + ret)
+		atomic_inc(&map->write);
+	notify_remote_via_irq(map->irq);
+
+	return ret;
 }
 
 static void pvcalls_back_ioworker(struct work_struct *work)
@@ -877,6 +936,19 @@ static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
 
 static irqreturn_t pvcalls_back_conn_event(int irq, void *sock_map)
 {
+	struct sock_mapping *map = sock_map;
+	struct pvcalls_ioworker *iow;
+
+	if (map == NULL || map->sock == NULL || map->sock->sk == NULL ||
+		map->sock->sk->sk_user_data != map)
+		return IRQ_HANDLED;
+
+	iow = &map->ioworker;
+
+	atomic_inc(&map->write);
+	atomic_inc(&map->io);
+	queue_work_on(iow->cpu, iow->wq, &iow->register_work);
+
 	return IRQ_HANDLED;
 }
 
-- 
1.9.1
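As an aside for readers, the two-kvec wrap-around logic in pvcalls_conn_back_write() above can be modeled in plain userspace C. The helper bodies below are assumptions reconstructed from the call sites (pvcalls_mask() and pvcalls_queued() are defined elsewhere in the series); this is an illustrative sketch, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 8u  /* stands in for XEN_FLEX_RING_SIZE(order); power of two */

/* assumed equivalent of pvcalls_mask() */
static uint32_t ring_mask(uint32_t idx)
{
	return idx & (RING_SIZE - 1);
}

/* assumed equivalent of pvcalls_queued(): indices are free-running */
static uint32_t ring_queued(uint32_t prod, uint32_t cons)
{
	uint32_t size = prod - cons;

	return size > RING_SIZE ? 0 : size;
}

/*
 * Copy the queued bytes out of the ring the same way the two-kvec setup
 * does: one segment if the data is contiguous, two segments if it wraps
 * past the end of the buffer. Callers check size != 0 first, as the
 * kernel code does.
 */
static void gather(const uint8_t *ring, uint32_t prod, uint32_t cons,
		   uint8_t *out)
{
	uint32_t size = ring_queued(prod, cons);

	if (ring_mask(prod) > ring_mask(cons)) {
		/* contiguous: vec[0] only */
		memcpy(out, ring + ring_mask(cons), size);
	} else {
		/* wraps: vec[0] up to the end, vec[1] from the start */
		uint32_t first = RING_SIZE - ring_mask(cons);

		memcpy(out, ring + ring_mask(cons), first);
		memcpy(out + first, ring, size - first);
	}
}
```

For example, with cons = 6 and prod = 10 on an 8-byte ring, four queued bytes wrap around and are gathered as two segments.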


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 18/18] xen: introduce a Kconfig option to enable the pvcalls backend
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-19 23:22     ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-05-19 23:22 UTC (permalink / raw)
  To: xen-devel
  Cc: linux-kernel, sstabellini, jgross, boris.ostrovsky, Stefano Stabellini

Also add pvcalls-back to the Makefile.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/Kconfig  | 12 ++++++++++++
 drivers/xen/Makefile |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index f15bb3b7..bbdf059 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -196,6 +196,18 @@ config XEN_PCIDEV_BACKEND
 
 	  If in doubt, say m.
 
+config XEN_PVCALLS_BACKEND
+	bool "XEN PV Calls backend driver"
+	depends on INET && XEN
+	default n
+	help
+	  Experimental backend for the Xen PV Calls protocol
+	  (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
+	  allows PV Calls frontends to send POSIX calls to the backend,
+	  which implements them.
+
+	  If in doubt, say n.
+
 config XEN_SCSI_BACKEND
 	tristate "XEN SCSI backend driver"
 	depends on XEN && XEN_BACKEND && TARGET_CORE
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 8feab810..480b928 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -38,6 +38,7 @@ obj-$(CONFIG_XEN_ACPI_PROCESSOR)	+= xen-acpi-processor.o
 obj-$(CONFIG_XEN_EFI)			+= efi.o
 obj-$(CONFIG_XEN_SCSI_BACKEND)		+= xen-scsiback.o
 obj-$(CONFIG_XEN_AUTO_XLATE)		+= xlate_mmu.o
+obj-$(CONFIG_XEN_PVCALLS_BACKEND)	+= pvcalls-back.o
 xen-evtchn-y				:= evtchn.o
 xen-gntdev-y				:= gntdev.o
 xen-gntalloc-y				:= gntalloc.o
-- 
1.9.1


* Re: [PATCH v2 01/18] xen: introduce the pvcalls interface header
  2017-05-19 23:22   ` Stefano Stabellini
                     ` (23 preceding siblings ...)
  (?)
@ 2017-05-25 22:02   ` Boris Ostrovsky
  2017-06-01 20:54     ` Stefano Stabellini
  2017-06-01 20:54     ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-25 22:02 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel
  Cc: linux-kernel, jgross, Stefano Stabellini, konrad.wilk

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Introduce the C header file which defines the PV Calls interface. It is
> imported from xen/include/public/io/pvcalls.h.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: konrad.wilk@oracle.com
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  include/xen/interface/io/pvcalls.h | 120 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 120 insertions(+)
>  create mode 100644 include/xen/interface/io/pvcalls.h
>
> diff --git a/include/xen/interface/io/pvcalls.h b/include/xen/interface/io/pvcalls.h
> new file mode 100644
> index 0000000..0d41959
> --- /dev/null
> +++ b/include/xen/interface/io/pvcalls.h
> @@ -0,0 +1,120 @@
> +#ifndef __XEN_PUBLIC_IO_XEN_PVCALLS_H__
> +#define __XEN_PUBLIC_IO_XEN_PVCALLS_H__
> +
> +#include <linux/net.h>
> +#include "xen/interface/io/ring.h"
> +
> +/* "1" means socket, connect, release, bind, listen, accept and poll */
> +#define XENBUS_FUNCTIONS_CALLS "1"
> +
> +/*
> + * See docs/misc/pvcalls.markdown in xen.git for the full specification:
> + * https://xenbits.xen.org/docs/unstable/misc/pvcalls.html
> + */
> +struct pvcalls_data_intf {
> +    RING_IDX in_cons, in_prod, in_error;
> +
> +    uint8_t pad1[52];
> +
> +    RING_IDX out_cons, out_prod, out_error;
> +
> +    uint8_t pad2[52];
> +
> +    RING_IDX ring_order;
> +    grant_ref_t ref[];

I think you should also include <xen/interface/grant_table.h>.

In fact, ring.h probably needs to do it too.

-boris


* Re: [PATCH v2 02/18] xen/pvcalls: introduce the pvcalls xenbus backend
  2017-05-19 23:22   ` [PATCH v2 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
  2017-05-25 22:04     ` Boris Ostrovsky
@ 2017-05-25 22:04     ` Boris Ostrovsky
  1 sibling, 0 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-25 22:04 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Introduce a xenbus backend for the pvcalls protocol, as defined by
> https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.
>
> This patch only adds the stubs, the code will be added by the following
> patches.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v2 03/18] xen/pvcalls: initialize the module and register the xenbus backend
  2017-05-19 23:22   ` Stefano Stabellini
@ 2017-05-25 22:12       ` Boris Ostrovsky
  0 siblings, 0 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-25 22:12 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Keep a list of connected frontends. Use a semaphore to protect list
> accesses.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v2 04/18] xen/pvcalls: xenbus state handling
  2017-05-19 23:22     ` Stefano Stabellini
@ 2017-05-26  0:12       ` Boris Ostrovsky
  -1 siblings, 0 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26  0:12 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Introduce the code to handle xenbus state changes.
>
> Implement the probe function for the pvcalls backend. Write the
> supported versions, max-page-order and function-calls nodes to xenstore,
> as required by the protocol.
>
> Introduce stub functions for disconnecting/connecting to a frontend.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 135 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 135 insertions(+)
>
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 9044cf2..b4da138 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -25,20 +25,155 @@
>  #include <xen/xenbus.h>
>  #include <xen/interface/io/pvcalls.h>
>  
> +#define PVCALLS_VERSIONS "1"

Shouldn't this be in a header file that will be shared with frontends?

> +#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> +
>  struct pvcalls_back_global {
>  	struct list_head frontends;
>  	struct semaphore frontends_lock;
>  } pvcalls_back_global;
>  
> +static int backend_connect(struct xenbus_device *dev)
> +{
> +	return 0;
> +}
> +
> +static int backend_disconnect(struct xenbus_device *dev)
> +{
> +	return 0;
> +}
> +
>  static int pvcalls_back_probe(struct xenbus_device *dev,
>  			      const struct xenbus_device_id *id)
>  {
> +	int err;
> +
> +	err = xenbus_printf(XBT_NIL, dev->nodename, "versions", "%s",
> +			    PVCALLS_VERSIONS);
> +	if (err) {
> +		pr_warn("%s write out 'version' failed\n", __func__);
> +		return -EINVAL;

err?

> +	}
> +
> +	err = xenbus_printf(XBT_NIL, dev->nodename, "max-page-order", "%u",
> +			    MAX_RING_ORDER);
> +	if (err) {
> +		pr_warn("%s write out 'max-page-order' failed\n", __func__);
> +		return err;
> +	}
> +
> +	/* "1" means socket, connect, release, bind, listen, accept and poll*/

This comment is obsolete, or at least should refer to
XENBUS_FUNCTIONS_CALLS (and is missing a space at the end).

> +	err = xenbus_printf(XBT_NIL, dev->nodename, "function-calls",
> +			    XENBUS_FUNCTIONS_CALLS);
> +	if (err) {
> +		pr_warn("%s write out 'function-calls' failed\n", __func__);
> +		return err;
> +	}


In case of errors we will end up with all previous entries. I think this
should be done as a transaction which would be aborted in case of an error.

> +
> +	err = xenbus_switch_state(dev, XenbusStateInitWait);
> +	if (err)
> +		return err;


Not sure what to do on an error here (wrt xenstore entries).
xenbus_switch_state() itself uses transactions. Are we allowed to have
nested transactions?

OTOH, xenbus_switch_state() never returns an error, at least now. In
fact, in most cases we ignore return value.


-boris


> +
>  	return 0;
>  }
>  
>
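The transaction Boris suggests could look like the sketch below. This is a hypothetical rework of the probe's xenstore writes, not code from the series; it assumes the same PVCALLS_VERSIONS, MAX_RING_ORDER and XENBUS_FUNCTIONS_CALLS definitions and uses the standard xenbus transaction API, so a failed write leaves no partial entries behind:

```c
static int pvcalls_back_probe_xs(struct xenbus_device *dev)
{
	struct xenbus_transaction xbt;
	int err;

again:
	err = xenbus_transaction_start(&xbt);
	if (err)
		return err;

	err = xenbus_printf(xbt, dev->nodename, "versions", "%s",
			    PVCALLS_VERSIONS);
	if (err)
		goto abort;

	err = xenbus_printf(xbt, dev->nodename, "max-page-order", "%u",
			    MAX_RING_ORDER);
	if (err)
		goto abort;

	err = xenbus_printf(xbt, dev->nodename, "function-calls",
			    XENBUS_FUNCTIONS_CALLS);
	if (err)
		goto abort;

	err = xenbus_transaction_end(xbt, 0);
	if (err == -EAGAIN)
		goto again;
	return err;

abort:
	/* abort the transaction: none of the writes above are committed */
	xenbus_transaction_end(xbt, 1);
	return err;
}
```

The -EAGAIN retry is the usual xenstore pattern: the store may ask the caller to replay the whole transaction if it raced with another writer.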


* Re: [PATCH v2 05/18] xen/pvcalls: connect to a frontend
  2017-05-19 23:22     ` Stefano Stabellini
  (?)
  (?)
@ 2017-05-26 15:23     ` Boris Ostrovsky
  2017-06-01 21:06       ` Stefano Stabellini
  2017-06-01 21:06       ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26 15:23 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Introduce a per-frontend data structure named pvcalls_back_priv. It
> contains pointers to the command ring, its event channel, a list of
> active sockets and a tree of passive sockets (passing sockets need to be
> looked up from the id on listen, accept and poll commands, while active
> sockets only on release).
>
> It also has an unbound workqueue to schedule the work of parsing and
> executing commands on the command ring. socket_lock protects the two
> lists. In pvcalls_back_global, keep a list of connected frontends.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 95 insertions(+)
>
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index b4da138..a48b0d9 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -33,9 +33,104 @@ struct pvcalls_back_global {
>  	struct semaphore frontends_lock;
>  } pvcalls_back_global;
>  
> +/*
> + * Per-frontend data structure. It contains pointers to the command
> + * ring, its event channel, a list of active sockets and a tree of
> + * passive sockets.
> + */
> +struct pvcalls_back_priv {

pvcalls_fedata or pvcalls_feinfo maybe (or pvcalls_back_fedata)?

> +	struct list_head list;
> +	struct xenbus_device *dev;
> +	struct xen_pvcalls_sring *sring;
> +	struct xen_pvcalls_back_ring ring;
> +	int irq;
> +	struct list_head socket_mappings;
> +	struct radix_tree_root socketpass_mappings;
> +	struct semaphore socket_lock;
> +	atomic_t work;
> +	struct workqueue_struct *wq;
> +	struct work_struct register_work;
> +};
> +
> +static void pvcalls_back_work(struct work_struct *work)
> +{
> +}
> +
> +static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> +{
> +	return IRQ_HANDLED;
> +}
> +
>  static int backend_connect(struct xenbus_device *dev)
>  {
> +	int err, evtchn;
> +	grant_ref_t ring_ref;
> +	void *addr = NULL;
> +	struct pvcalls_back_priv *priv = NULL;
> +
> +	priv = kzalloc(sizeof(struct pvcalls_back_priv), GFP_KERNEL);
> +	if (!priv)
> +		return -ENOMEM;
> +
> +	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
> +			   &evtchn);
> +	if (err != 1) {
> +		err = -EINVAL;
> +		xenbus_dev_fatal(dev, err, "reading %s/event-channel",
> +				 dev->otherend);
> +		goto error;
> +	}
> +
> +	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
> +	if (err != 1) {
> +		err = -EINVAL;
> +		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
> +				 dev->otherend);
> +		goto error;
> +	}
> +
> +	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> +						    pvcalls_back_event, 0,
> +						    "pvcalls-backend", dev);
> +	if (err < 0)
> +		goto error;
> +	priv->irq = err;
> +
> +	priv->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> +	if (!priv->wq) {
> +		err = -ENOMEM;
> +		goto error;
> +	}
> +
> +	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, &addr);
> +	if (err < 0)
> +		goto error;
> +	priv->sring = addr;

You don't really need addr, since priv is kzalloc'd (and you can deal
with it in error path).

-boris


* Re: [PATCH v2 06/18] xen/pvcalls: handle commands from the frontend
  2017-05-19 23:22     ` Stefano Stabellini
  (?)
  (?)
@ 2017-05-26 15:32     ` Boris Ostrovsky
  2017-06-02 18:21       ` Stefano Stabellini
  2017-06-02 18:21       ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26 15:32 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> +
>  static void pvcalls_back_work(struct work_struct *work)
>  {
> +	struct pvcalls_back_priv *priv = container_of(work,
> +		struct pvcalls_back_priv, register_work);
> +	int notify, notify_all = 0, more = 1;
> +	struct xen_pvcalls_request req;
> +	struct xenbus_device *dev = priv->dev;
> +
> +	atomic_set(&priv->work, 1);
> +
> +	while (more || !atomic_dec_and_test(&priv->work)) {
> +		while (RING_HAS_UNCONSUMED_REQUESTS(&priv->ring)) {
> +			RING_COPY_REQUEST(&priv->ring,
> +					  priv->ring.req_cons++,
> +					  &req);
> +
> +			if (!pvcalls_back_handle_cmd(dev, &req)) {
> +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> +					&priv->ring, notify);
> +				notify_all += notify;
> +			}
> +		}
> +
> +		if (notify_all)
> +			notify_remote_via_irq(priv->irq);
> +
> +		RING_FINAL_CHECK_FOR_REQUESTS(&priv->ring, more);
> +	}
>  }
>  
>  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
>  {
> +	struct xenbus_device *dev = dev_id;
> +	struct pvcalls_back_priv *priv = NULL;
> +
> +	if (dev == NULL)
> +		return IRQ_HANDLED;
> +
> +	priv = dev_get_drvdata(&dev->dev);
> +	if (priv == NULL)
> +		return IRQ_HANDLED;
> +
> +	atomic_inc(&priv->work);

I will paste your response here from v1 --- I thought I understood it and
now I don't anymore.

>>
>> Is this really needed? We have a new entry on the ring, so the outer
loop in
>> pvcalls_back_work() will pick this up (by setting 'more').
>
> This is to avoid race conditions. A notification could be delivered
> after RING_FINAL_CHECK_FOR_REQUESTS is called, returning more == 0, but
> before pvcalls_back_work completes. In that case, without priv->work,
> pvcalls_back_work wouldn't be rescheduled because it is still running
> and the work would be left undone.


How is this different from the case when new work comes after the outer
loop is done but we still haven't returned from pvcalls_back_work()?

-boris

> +	queue_work(priv->wq, &priv->register_work);
> +
>  	return IRQ_HANDLED;
>  }
>  
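For readers following this exchange, the counter handshake under discussion can be modeled sequentially in userspace C. The names mirror the patch, but the flat arrays and the absence of real concurrency are assumptions made purely for illustration:

```c
#include <assert.h>

#define RING_CAP 16

static int work;                    /* models atomic_t priv->work */
static int ring[RING_CAP];
static int ring_cons, ring_prod;
static int done[RING_CAP], ndone;

/* models pvcalls_back_event(): the counter is bumped before queueing */
static void back_event(int req)
{
	ring[ring_prod++ % RING_CAP] = req;
	work++;                     /* atomic_inc(&priv->work) */
	/* real code: queue_work(priv->wq, &priv->register_work); */
}

/*
 * Models pvcalls_back_work(): keep draining until the ring is empty AND
 * the counter reaches zero, so an event that fires between the final
 * ring check and returning still forces another pass of the outer loop.
 */
static void back_work(void)
{
	int more = 1;

	work = 1;                   /* atomic_set(&priv->work, 1) */
	while (more || --work != 0) {            /* !atomic_dec_and_test() */
		while (ring_cons != ring_prod)   /* RING_HAS_UNCONSUMED_REQUESTS */
			done[ndone++] = ring[ring_cons++ % RING_CAP];
		more = (ring_cons != ring_prod); /* RING_FINAL_CHECK_FOR_REQUESTS */
	}
}
```

In the sketch every event increments the counter, so the worker cannot exit while an increment it has not yet decremented is outstanding; this models the rescheduling guarantee the counter is meant to provide, though a single-threaded model cannot exhibit the actual race.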


* Re: [PATCH v2 07/18] xen/pvcalls: implement socket command
  2017-05-19 23:22     ` Stefano Stabellini
  (?)
  (?)
@ 2017-05-26 15:53     ` Boris Ostrovsky
  2017-06-02 18:35       ` Stefano Stabellini
  2017-06-02 18:35       ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26 15:53 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini


>  static int pvcalls_back_socket(struct xenbus_device *dev,
>  		struct xen_pvcalls_request *req)
>  {
> -	return 0;
> +	struct pvcalls_back_priv *priv;
> +	int ret;
> +	struct xen_pvcalls_response *rsp;
> +
> +	priv = dev_get_drvdata(&dev->dev);
> +
> +	if (req->u.socket.domain != AF_INET ||
> +	    req->u.socket.type != SOCK_STREAM ||
> +	    (req->u.socket.protocol != 0 &&
> +	     req->u.socket.protocol != AF_INET))

Shouldn't this be one of IPPROTO_* macros?

-boris

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 11/18] xen/pvcalls: implement accept command
  2017-05-19 23:22     ` Stefano Stabellini
  (?)
  (?)
@ 2017-05-26 18:18     ` Boris Ostrovsky
  2017-06-02 19:19       ` Stefano Stabellini
  2017-06-02 19:19       ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26 18:18 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini


>  static void __pvcalls_back_accept(struct work_struct *work)
>  {
> +	struct sockpass_mapping *mappass = container_of(
> +		work, struct sockpass_mapping, register_work);
> +	struct sock_mapping *map;
> +	struct pvcalls_ioworker *iow;
> +	struct pvcalls_back_priv *priv;
> +	struct xen_pvcalls_response *rsp;
> +	struct xen_pvcalls_request *req;
> +	void *page = NULL;
> +	int notify;
> +	int ret = -EINVAL;
> +	unsigned long flags;
> +
> +	priv = mappass->priv;
> +	/* We only need to check the value of "cmd" atomically on read. */
> +	spin_lock_irqsave(&mappass->copy_lock, flags);
> +	req = &mappass->reqcopy;
> +	if (req->cmd != PVCALLS_ACCEPT) {
> +		spin_unlock_irqrestore(&mappass->copy_lock, flags);
> +		return;
> +	}
> +	spin_unlock_irqrestore(&mappass->copy_lock, flags);
> +
> +	map = kzalloc(sizeof(*map), GFP_KERNEL);

From here on, the code looks almost identical to connect. Can this be
factored out?

-boris

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 07/18] xen/pvcalls: implement socket command
  2017-05-19 23:22     ` Stefano Stabellini
                       ` (3 preceding siblings ...)
  (?)
@ 2017-05-26 18:58     ` Boris Ostrovsky
  2017-06-02 18:41       ` Stefano Stabellini
  2017-06-02 18:41       ` Stefano Stabellini
  -1 siblings, 2 replies; 69+ messages in thread
From: Boris Ostrovsky @ 2017-05-26 18:58 UTC (permalink / raw)
  To: Stefano Stabellini, xen-devel; +Cc: linux-kernel, jgross, Stefano Stabellini

On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> Just reply with success to the other end for now. Delay the allocation
> of the actual socket to bind and/or connect.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> ---
>  drivers/xen/pvcalls-back.c | 29 ++++++++++++++++++++++++++++-
>  1 file changed, 28 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index 9dc8a28..fed54bf 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -12,12 +12,17 @@
>   * GNU General Public License for more details.
>   */
>  
> +#include <linux/inet.h>
>  #include <linux/kthread.h>
>  #include <linux/list.h>
>  #include <linux/radix-tree.h>
>  #include <linux/module.h>
>  #include <linux/semaphore.h>
>  #include <linux/wait.h>
> +#include <net/sock.h>
> +#include <net/inet_common.h>
> +#include <net/inet_connection_sock.h>
> +#include <net/request_sock.h>
>  
>  #include <xen/events.h>
>  #include <xen/grant_table.h>
> @@ -55,7 +60,29 @@ struct pvcalls_back_priv {
>  static int pvcalls_back_socket(struct xenbus_device *dev,
>  		struct xen_pvcalls_request *req)
>  {
> -	return 0;
> +	struct pvcalls_back_priv *priv;
> +	int ret;
> +	struct xen_pvcalls_response *rsp;
> +
> +	priv = dev_get_drvdata(&dev->dev);
> +
> +	if (req->u.socket.domain != AF_INET ||
> +	    req->u.socket.type != SOCK_STREAM ||
> +	    (req->u.socket.protocol != 0 &&
> +	     req->u.socket.protocol != AF_INET))
> +		ret = -EAFNOSUPPORT;
> +	else
> +		ret = 0;
> +
> +	/* leave the actual socket allocation for later */

Why is this allocation deferred (to connect and bind)? Doesn't it in
some way violate semantics of socket call?


-boris

> +
> +	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
> +	rsp->req_id = req->req_id;
> +	rsp->cmd = req->cmd;
> +	rsp->u.socket.id = req->u.socket.id;
> +	rsp->ret = ret;
> +
> +	return ret;
>  }
>  
>  static int pvcalls_back_connect(struct xenbus_device *dev,

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 04/18] xen/pvcalls: xenbus state handling
  2017-05-26  0:12       ` Boris Ostrovsky
  (?)
  (?)
@ 2017-06-01 20:54       ` Stefano Stabellini
  -1 siblings, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-01 20:54 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Thu, 25 May 2017, Boris Ostrovsky wrote:
> On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> > Introduce the code to handle xenbus state changes.
> >
> > Implement the probe function for the pvcalls backend. Write the
> > supported versions, max-page-order and function-calls nodes to xenstore,
> > as required by the protocol.
> >
> > Introduce stub functions for disconnecting/connecting to a frontend.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 135 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 135 insertions(+)
> >
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 9044cf2..b4da138 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -25,20 +25,155 @@
> >  #include <xen/xenbus.h>
> >  #include <xen/interface/io/pvcalls.h>
> >  
> > +#define PVCALLS_VERSIONS "1"
> 
> Shouldn't this be in a header file that will be shared with frontends?

I don't think it should: different backends could implement a different
pvcalls version. There is a handshake mechanism to agree on a version
of the protocol to use. It is probably not a good idea to expose this
specific version in the generic pvcalls header.


> > +#define MAX_RING_ORDER XENBUS_MAX_RING_GRANT_ORDER
> > +
> >  struct pvcalls_back_global {
> >  	struct list_head frontends;
> >  	struct semaphore frontends_lock;
> >  } pvcalls_back_global;
> >  
> > +static int backend_connect(struct xenbus_device *dev)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int backend_disconnect(struct xenbus_device *dev)
> > +{
> > +	return 0;
> > +}
> > +
> >  static int pvcalls_back_probe(struct xenbus_device *dev,
> >  			      const struct xenbus_device_id *id)
> >  {
> > +	int err;
> > +
> > +	err = xenbus_printf(XBT_NIL, dev->nodename, "versions", "%s",
> > +			    PVCALLS_VERSIONS);
> > +	if (err) {
> > +		pr_warn("%s write out 'version' failed\n", __func__);
> > +		return -EINVAL;
> 
> err?

OK


> > +	}
> > +
> > +	err = xenbus_printf(XBT_NIL, dev->nodename, "max-page-order", "%u",
> > +			    MAX_RING_ORDER);
> > +	if (err) {
> > +		pr_warn("%s write out 'max-page-order' failed\n", __func__);
> > +		return err;
> > +	}
> > +
> > +	/* "1" means socket, connect, release, bind, listen, accept and poll*/
> 
> This comment is obsolete, or at least should refer to
> XENBUS_FUNCTIONS_CALLS (and is missing a space at the end).

OK


> > +	err = xenbus_printf(XBT_NIL, dev->nodename, "function-calls",
> > +			    XENBUS_FUNCTIONS_CALLS);
> > +	if (err) {
> > +		pr_warn("%s write out 'function-calls' failed\n", __func__);
> > +		return err;
> > +	}
> 
> 
> In case of errors we will end up with all previous entries. I think this
> should be done as a transaction which would be aborted in case of an error.

OK


> > +
> > +	err = xenbus_switch_state(dev, XenbusStateInitWait);
> > +	if (err)
> > +		return err;
> 
> 
> Not sure what to do on an error here (wrt xenstore entries).
> xenbus_switch_state() itself uses transactions. Are we allowed to have
> nested transactions?
> 
> OTOH, xenbus_switch_state() never returns an error, at least now. In
> fact, in most cases we ignore return value.

I don't think we can have nested transactions. For simplicity, I'll
ignore errors from xenbus_switch_state.

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 01/18] xen: introduce the pvcalls interface header
  2017-05-25 22:02   ` [PATCH v2 01/18] xen: introduce the pvcalls interface header Boris Ostrovsky
  2017-06-01 20:54     ` Stefano Stabellini
@ 2017-06-01 20:54     ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-01 20:54 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross,
	Stefano Stabellini, konrad.wilk

On Thu, 25 May 2017, Boris Ostrovsky wrote:
> On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> > Introduce the C header file which defines the PV Calls interface. It is
> > imported from xen/include/public/io/pvcalls.h.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: konrad.wilk@oracle.com
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  include/xen/interface/io/pvcalls.h | 120 +++++++++++++++++++++++++++++++++++++
> >  1 file changed, 120 insertions(+)
> >  create mode 100644 include/xen/interface/io/pvcalls.h
> >
> > diff --git a/include/xen/interface/io/pvcalls.h b/include/xen/interface/io/pvcalls.h
> > new file mode 100644
> > index 0000000..0d41959
> > --- /dev/null
> > +++ b/include/xen/interface/io/pvcalls.h
> > @@ -0,0 +1,120 @@
> > +#ifndef __XEN_PUBLIC_IO_XEN_PVCALLS_H__
> > +#define __XEN_PUBLIC_IO_XEN_PVCALLS_H__
> > +
> > +#include <linux/net.h>
> > +#include "xen/interface/io/ring.h"
> > +
> > +/* "1" means socket, connect, release, bind, listen, accept and poll */
> > +#define XENBUS_FUNCTIONS_CALLS "1"
> > +
> > +/*
> > + * See docs/misc/pvcalls.markdown in xen.git for the full specification:
> > + * https://xenbits.xen.org/docs/unstable/misc/pvcalls.html
> > + */
> > +struct pvcalls_data_intf {
> > +    RING_IDX in_cons, in_prod, in_error;
> > +
> > +    uint8_t pad1[52];
> > +
> > +    RING_IDX out_cons, out_prod, out_error;
> > +
> > +    uint8_t pad2[52];
> > +
> > +    RING_IDX ring_order;
> > +    grant_ref_t ref[];
> 
> I think you should also include <xen/interface/grant_table.h>.
> 
> In fact, ring.h probably needs to do it too.

I'll do, thanks

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 05/18] xen/pvcalls: connect to a frontend
  2017-05-26 15:23     ` Boris Ostrovsky
@ 2017-06-01 21:06       ` Stefano Stabellini
  2017-06-01 21:06       ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-01 21:06 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Fri, 26 May 2017, Boris Ostrovsky wrote:
> On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> > Introduce a per-frontend data structure named pvcalls_back_priv. It
> > contains pointers to the command ring, its event channel, a list of
> > active sockets and a tree of passive sockets (passing sockets need to be
> > looked up from the id on listen, accept and poll commands, while active
> > sockets only on release).
> >
> > It also has an unbound workqueue to schedule the work of parsing and
> > executing commands on the command ring. socket_lock protects the two
> > lists. In pvcalls_back_global, keep a list of connected frontends.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 95 ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 95 insertions(+)
> >
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index b4da138..a48b0d9 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -33,9 +33,104 @@ struct pvcalls_back_global {
> >  	struct semaphore frontends_lock;
> >  } pvcalls_back_global;
> >  
> > +/*
> > + * Per-frontend data structure. It contains pointers to the command
> > + * ring, its event channel, a list of active sockets and a tree of
> > + * passive sockets.
> > + */
> > +struct pvcalls_back_priv {
> 
> pvcalls_fedata or pvcalls_feinfo maybe (or pvcalls_back_fedata)?

I'll go with pvcalls_fedata.


> > +	struct list_head list;
> > +	struct xenbus_device *dev;
> > +	struct xen_pvcalls_sring *sring;
> > +	struct xen_pvcalls_back_ring ring;
> > +	int irq;
> > +	struct list_head socket_mappings;
> > +	struct radix_tree_root socketpass_mappings;
> > +	struct semaphore socket_lock;
> > +	atomic_t work;
> > +	struct workqueue_struct *wq;
> > +	struct work_struct register_work;
> > +};
> > +
> > +static void pvcalls_back_work(struct work_struct *work)
> > +{
> > +}
> > +
> > +static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> > +{
> > +	return IRQ_HANDLED;
> > +}
> > +
> >  static int backend_connect(struct xenbus_device *dev)
> >  {
> > +	int err, evtchn;
> > +	grant_ref_t ring_ref;
> > +	void *addr = NULL;
> > +	struct pvcalls_back_priv *priv = NULL;
> > +
> > +	priv = kzalloc(sizeof(struct pvcalls_back_priv), GFP_KERNEL);
> > +	if (!priv)
> > +		return -ENOMEM;
> > +
> > +	err = xenbus_scanf(XBT_NIL, dev->otherend, "port", "%u",
> > +			   &evtchn);
> > +	if (err != 1) {
> > +		err = -EINVAL;
> > +		xenbus_dev_fatal(dev, err, "reading %s/event-channel",
> > +				 dev->otherend);
> > +		goto error;
> > +	}
> > +
> > +	err = xenbus_scanf(XBT_NIL, dev->otherend, "ring-ref", "%u", &ring_ref);
> > +	if (err != 1) {
> > +		err = -EINVAL;
> > +		xenbus_dev_fatal(dev, err, "reading %s/ring-ref",
> > +				 dev->otherend);
> > +		goto error;
> > +	}
> > +
> > +	err = bind_interdomain_evtchn_to_irqhandler(dev->otherend_id, evtchn,
> > +						    pvcalls_back_event, 0,
> > +						    "pvcalls-backend", dev);
> > +	if (err < 0)
> > +		goto error;
> > +	priv->irq = err;
> > +
> > +	priv->wq = alloc_workqueue("pvcalls_back_wq", WQ_UNBOUND, 1);
> > +	if (!priv->wq) {
> > +		err = -ENOMEM;
> > +		goto error;
> > +	}
> > +
> > +	err = xenbus_map_ring_valloc(dev, &ring_ref, 1, &addr);
> > +	if (err < 0)
> > +		goto error;
> > +	priv->sring = addr;
> 
> You don't really need addr, since priv is kzalloc'd (and you can deal
> with it in error path).

OK

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 06/18] xen/pvcalls: handle commands from the frontend
  2017-05-26 15:32     ` Boris Ostrovsky
  2017-06-02 18:21       ` Stefano Stabellini
@ 2017-06-02 18:21       ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-02 18:21 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Fri, 26 May 2017, Boris Ostrovsky wrote:
> On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> > +
> >  static void pvcalls_back_work(struct work_struct *work)
> >  {
> > +	struct pvcalls_back_priv *priv = container_of(work,
> > +		struct pvcalls_back_priv, register_work);
> > +	int notify, notify_all = 0, more = 1;
> > +	struct xen_pvcalls_request req;
> > +	struct xenbus_device *dev = priv->dev;
> > +
> > +	atomic_set(&priv->work, 1);
> > +
> > +	while (more || !atomic_dec_and_test(&priv->work)) {
> > +		while (RING_HAS_UNCONSUMED_REQUESTS(&priv->ring)) {
> > +			RING_COPY_REQUEST(&priv->ring,
> > +					  priv->ring.req_cons++,
> > +					  &req);
> > +
> > +			if (!pvcalls_back_handle_cmd(dev, &req)) {
> > +				RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(
> > +					&priv->ring, notify);
> > +				notify_all += notify;
> > +			}
> > +		}
> > +
> > +		if (notify_all)
> > +			notify_remote_via_irq(priv->irq);
> > +
> > +		RING_FINAL_CHECK_FOR_REQUESTS(&priv->ring, more);
> > +	}
> >  }
> >  
> >  static irqreturn_t pvcalls_back_event(int irq, void *dev_id)
> >  {
> > +	struct xenbus_device *dev = dev_id;
> > +	struct pvcalls_back_priv *priv = NULL;
> > +
> > +	if (dev == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	priv = dev_get_drvdata(&dev->dev);
> > +	if (priv == NULL)
> > +		return IRQ_HANDLED;
> > +
> > +	atomic_inc(&priv->work);
> 
> I will paste you response here from v1 --- I thought I understood it and
> now I don't anymore.
>
> >>
> >> Is this really needed? We have a new entry on the ring, so the outer
> >> loop in pvcalls_back_work() will pick this up (by setting 'more').
> >
> > This is to avoid race conditions. A notification could be delivered
> > after RING_FINAL_CHECK_FOR_REQUESTS is called, returning more == 0, but
> > before pvcalls_back_work completes. In that case, without priv->work,
> > pvcalls_back_work wouldn't be rescheduled because it is still running
> > and the work would be left undone.
> 
> 
> How is this different from the case when new work comes after the outer
> loop is done but we still haven't returned from pvcalls_back_work()?

It is the same case. In fact, looking at it more closely, I think that
priv->work in its current form makes the race less likely to happen, but
doesn't prevent it completely :-(

Given that I have been trying to reproduce the race in many ways but have
always failed so far, I think this race is only theoretical. I have
removed the priv->work construct and added an in-code comment about the
race.
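The consume loop under discussion can be modeled in plain, single-threaded C. This is a toy sketch, not the driver: `ring`, `final_check` and `back_work` stand in for the shared ring and the `RING_*` macros, with simple int indices instead of the real ring layout.

```c
#include <assert.h>

/* Toy model of the request ring: 'prod' is advanced by the frontend,
 * 'cons' by the backend worker. */
struct ring {
	int prod, cons;
};

static int handled;	/* number of requests consumed */

/* Stand-in for RING_FINAL_CHECK_FOR_REQUESTS(): after the inner loop
 * drains the ring, re-read the producer index so a request that arrived
 * in the meantime is still noticed. */
static int final_check(struct ring *r)
{
	return r->prod != r->cons;
}

static void back_work(struct ring *r)
{
	int more = 1;

	while (more) {
		/* RING_HAS_UNCONSUMED_REQUESTS() */
		while (r->prod != r->cons) {
			r->cons++;	/* consume one request */
			handled++;
		}
		/* A notification delivered exactly here is the race being
		 * discussed: the final check shrinks the window, and the
		 * queue_work() from the event handler covers what remains. */
		more = final_check(r);
	}
}
```

With three requests pending, `back_work()` drains all of them and the final check terminates the loop only once the ring is truly empty.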


> > +	queue_work(priv->wq, &priv->register_work);
> > +
> >  	return IRQ_HANDLED;
> >  }

^ permalink raw reply	[flat|nested] 69+ messages in thread

* Re: [PATCH v2 07/18] xen/pvcalls: implement socket command
  2017-05-26 15:53     ` Boris Ostrovsky
  2017-06-02 18:35       ` Stefano Stabellini
@ 2017-06-02 18:35       ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-02 18:35 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Fri, 26 May 2017, Boris Ostrovsky wrote:
> >  static int pvcalls_back_socket(struct xenbus_device *dev,
> >  		struct xen_pvcalls_request *req)
> >  {
> > -	return 0;
> > +	struct pvcalls_back_priv *priv;
> > +	int ret;
> > +	struct xen_pvcalls_response *rsp;
> > +
> > +	priv = dev_get_drvdata(&dev->dev);
> > +
> > +	if (req->u.socket.domain != AF_INET ||
> > +	    req->u.socket.type != SOCK_STREAM ||
> > +	    (req->u.socket.protocol != 0 &&
> > +	     req->u.socket.protocol != AF_INET))
> 
> Shouldn't this be one of IPPROTO_* macros?

Ah yes, I could use IPPROTO_IP instead of 0 here.
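A user-space sketch of the check with the literal 0 replaced by IPPROTO_IP, as suggested. Note that accepting IPPROTO_TCP as well is my assumption about what the `AF_INET` comparison in the posted hunk was meant to be; the driver's actual check may differ.

```c
#include <assert.h>
#include <errno.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Sketch only: validate the (domain, type, protocol) triple the way the
 * pvcalls backend does, using IPPROTO_* macros instead of magic numbers. */
static int check_socket_args(int domain, int type, int protocol)
{
	if (domain != AF_INET ||
	    type != SOCK_STREAM ||
	    (protocol != IPPROTO_IP && protocol != IPPROTO_TCP))
		return -EAFNOSUPPORT;
	return 0;
}
```

IPPROTO_IP is 0, so frontends passing a zero protocol (the "pick the default" convention of socket(2)) are still accepted.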


* Re: [PATCH v2 07/18] xen/pvcalls: implement socket command
  2017-05-26 18:58     ` Boris Ostrovsky
  2017-06-02 18:41       ` Stefano Stabellini
@ 2017-06-02 18:41       ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-02 18:41 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Fri, 26 May 2017, Boris Ostrovsky wrote:
> On 05/19/2017 07:22 PM, Stefano Stabellini wrote:
> > Just reply with success to the other end for now. Delay the allocation
> > of the actual socket to bind and/or connect.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > ---
> >  drivers/xen/pvcalls-back.c | 29 ++++++++++++++++++++++++++++-
> >  1 file changed, 28 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> > index 9dc8a28..fed54bf 100644
> > --- a/drivers/xen/pvcalls-back.c
> > +++ b/drivers/xen/pvcalls-back.c
> > @@ -12,12 +12,17 @@
> >   * GNU General Public License for more details.
> >   */
> >  
> > +#include <linux/inet.h>
> >  #include <linux/kthread.h>
> >  #include <linux/list.h>
> >  #include <linux/radix-tree.h>
> >  #include <linux/module.h>
> >  #include <linux/semaphore.h>
> >  #include <linux/wait.h>
> > +#include <net/sock.h>
> > +#include <net/inet_common.h>
> > +#include <net/inet_connection_sock.h>
> > +#include <net/request_sock.h>
> >  
> >  #include <xen/events.h>
> >  #include <xen/grant_table.h>
> > @@ -55,7 +60,29 @@ struct pvcalls_back_priv {
> >  static int pvcalls_back_socket(struct xenbus_device *dev,
> >  		struct xen_pvcalls_request *req)
> >  {
> > -	return 0;
> > +	struct pvcalls_back_priv *priv;
> > +	int ret;
> > +	struct xen_pvcalls_response *rsp;
> > +
> > +	priv = dev_get_drvdata(&dev->dev);
> > +
> > +	if (req->u.socket.domain != AF_INET ||
> > +	    req->u.socket.type != SOCK_STREAM ||
> > +	    (req->u.socket.protocol != 0 &&
> > +	     req->u.socket.protocol != AF_INET))
> > +		ret = -EAFNOSUPPORT;
> > +	else
> > +		ret = 0;
> > +
> > +	/* leave the actual socket allocation for later */
> 
> Why is this allocation deferred (to connect and bind)? Doesn't it in
> some way violate semantics of socket call?

For convenience: this way we can easily distinguish active sockets from
passive sockets when we do the allocation. I don't think it violates the
semantics, as in pvcalls the socket identifier is chosen ("allocated")
by the frontend anyway.
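The deferred-allocation scheme described here can be modeled as follows. This is a toy user-space sketch under my own naming, not driver code: the "socket" command only validates (the frontend already chose the id), and the per-socket state is allocated at connect or bind time, when active vs. passive is known.

```c
#include <assert.h>
#include <stdlib.h>

struct sock_entry {
	unsigned long id;	/* identifier chosen by the frontend */
	int active;		/* 1 = connect (active), 0 = bind (passive) */
	struct sock_entry *next;
};

static struct sock_entry *socks;	/* list of allocated sockets */

/* "socket" command: nothing to allocate yet, just report success. */
static int handle_socket(unsigned long id)
{
	(void)id;
	return 0;
}

static struct sock_entry *alloc_entry(unsigned long id, int active)
{
	struct sock_entry *e = calloc(1, sizeof(*e));

	if (!e)
		return NULL;
	e->id = id;
	e->active = active;
	e->next = socks;
	socks = e;
	return e;
}

/* Allocation happens here, where active vs. passive is unambiguous. */
static struct sock_entry *handle_connect(unsigned long id)
{
	return alloc_entry(id, 1);
}

static struct sock_entry *handle_bind(unsigned long id)
{
	return alloc_entry(id, 0);
}
```

The point of the deferral is visible in the model: after `handle_socket()` no state exists at all, and the first allocation already knows which kind of socket it represents.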


> > +
> > +	rsp = RING_GET_RESPONSE(&priv->ring, priv->ring.rsp_prod_pvt++);
> > +	rsp->req_id = req->req_id;
> > +	rsp->cmd = req->cmd;
> > +	rsp->u.socket.id = req->u.socket.id;
> > +	rsp->ret = ret;
> > +
> > +	return ret;
> >  }
> >  
> >  static int pvcalls_back_connect(struct xenbus_device *dev,


* Re: [PATCH v2 11/18] xen/pvcalls: implement accept command
  2017-05-26 18:18     ` Boris Ostrovsky
@ 2017-06-02 19:19       ` Stefano Stabellini
  2017-06-02 19:19       ` Stefano Stabellini
  1 sibling, 0 replies; 69+ messages in thread
From: Stefano Stabellini @ 2017-06-02 19:19 UTC (permalink / raw)
  To: Boris Ostrovsky
  Cc: Stefano Stabellini, xen-devel, linux-kernel, jgross, Stefano Stabellini

On Fri, 26 May 2017, Boris Ostrovsky wrote:
> >  static void __pvcalls_back_accept(struct work_struct *work)
> >  {
> > +	struct sockpass_mapping *mappass = container_of(
> > +		work, struct sockpass_mapping, register_work);
> > +	struct sock_mapping *map;
> > +	struct pvcalls_ioworker *iow;
> > +	struct pvcalls_back_priv *priv;
> > +	struct xen_pvcalls_response *rsp;
> > +	struct xen_pvcalls_request *req;
> > +	void *page = NULL;
> > +	int notify;
> > +	int ret = -EINVAL;
> > +	unsigned long flags;
> > +
> > +	priv = mappass->priv;
> > +	/* We only need to check the value of "cmd" atomically on read. */
> > +	spin_lock_irqsave(&mappass->copy_lock, flags);
> > +	req = &mappass->reqcopy;
> > +	if (req->cmd != PVCALLS_ACCEPT) {
> > +		spin_unlock_irqrestore(&mappass->copy_lock, flags);
> > +		return;
> > +	}
> > +	spin_unlock_irqrestore(&mappass->copy_lock, flags);
> > +
> > +	map = kzalloc(sizeof(*map), GFP_KERNEL);
> 
> From here on, the code looks almost identical to connect. Can this be
> factored out?

Yes, good idea, I'll do that
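The factoring Boris suggests could take a shape like the sketch below: connect and accept both end up creating an active socket mapping, so the shared steps collapse into one helper. All names and fields here are illustrative placeholders, not the driver's.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical common state for an active (connected) socket. */
struct sock_mapping {
	unsigned long id;
	void *sock;
	int ioworker_started;
};

/* Shared path for connect and accept: allocate the mapping and do the
 * common setup (in the real driver: map the data ring, bind the event
 * channel, start the ioworker). */
static struct sock_mapping *new_active_socket(unsigned long id, void *sock)
{
	struct sock_mapping *map = calloc(1, sizeof(*map));

	if (!map)
		return NULL;
	map->id = id;
	map->sock = sock;
	/* ...ring mapping and event-channel setup would go here... */
	map->ioworker_started = 1;
	return map;
}

/* The two commands then reduce to argument handling plus one call. */
static struct sock_mapping *do_connect(unsigned long id, void *sock)
{
	return new_active_socket(id, sock);
}

static struct sock_mapping *do_accept(unsigned long id, void *sock)
{
	return new_active_socket(id, sock);
}
```

Both paths now share the allocation and setup code, so a fix in the common helper covers connect and accept at once.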


end of thread, other threads:[~2017-06-02 19:19 UTC | newest]

Thread overview: 69+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-19 23:17 [PATCH v2 00/18] introduce the Xen PV Calls backend Stefano Stabellini
2017-05-19 23:22 ` [PATCH v2 01/18] xen: introduce the pvcalls interface header Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 02/18] xen/pvcalls: introduce the pvcalls xenbus backend Stefano Stabellini
2017-05-25 22:04     ` Boris Ostrovsky
2017-05-25 22:04     ` Boris Ostrovsky
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 03/18] xen/pvcalls: initialize the module and register the " Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-25 22:12     ` Boris Ostrovsky
2017-05-25 22:12       ` Boris Ostrovsky
2017-05-19 23:22   ` [PATCH v2 04/18] xen/pvcalls: xenbus state handling Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-26  0:12     ` Boris Ostrovsky
2017-05-26  0:12       ` Boris Ostrovsky
2017-06-01 20:54       ` Stefano Stabellini
2017-06-01 20:54       ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 05/18] xen/pvcalls: connect to a frontend Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-26 15:23     ` Boris Ostrovsky
2017-05-26 15:23     ` Boris Ostrovsky
2017-06-01 21:06       ` Stefano Stabellini
2017-06-01 21:06       ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 06/18] xen/pvcalls: handle commands from the frontend Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-26 15:32     ` Boris Ostrovsky
2017-05-26 15:32     ` Boris Ostrovsky
2017-06-02 18:21       ` Stefano Stabellini
2017-06-02 18:21       ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 07/18] xen/pvcalls: implement socket command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-26 15:53     ` Boris Ostrovsky
2017-05-26 15:53     ` Boris Ostrovsky
2017-06-02 18:35       ` Stefano Stabellini
2017-06-02 18:35       ` Stefano Stabellini
2017-05-26 18:58     ` Boris Ostrovsky
2017-05-26 18:58     ` Boris Ostrovsky
2017-06-02 18:41       ` Stefano Stabellini
2017-06-02 18:41       ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 08/18] xen/pvcalls: implement connect command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 09/18] xen/pvcalls: implement bind command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 10/18] xen/pvcalls: implement listen command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 11/18] xen/pvcalls: implement accept command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-26 18:18     ` Boris Ostrovsky
2017-05-26 18:18     ` Boris Ostrovsky
2017-06-02 19:19       ` Stefano Stabellini
2017-06-02 19:19       ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 12/18] xen/pvcalls: implement poll command Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 13/18] xen/pvcalls: implement release command Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 14/18] xen/pvcalls: disconnect and module_exit Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 15/18] xen/pvcalls: implement the ioworker functions Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 16/18] xen/pvcalls: implement read Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 17/18] xen/pvcalls: implement write Stefano Stabellini
2017-05-19 23:22   ` Stefano Stabellini
2017-05-19 23:22   ` [PATCH v2 18/18] xen: introduce a Kconfig option to enable the pvcalls backend Stefano Stabellini
2017-05-19 23:22     ` Stefano Stabellini
2017-05-25 22:02   ` [PATCH v2 01/18] xen: introduce the pvcalls interface header Boris Ostrovsky
2017-06-01 20:54     ` Stefano Stabellini
2017-06-01 20:54     ` Stefano Stabellini
2017-05-25 22:02   ` Boris Ostrovsky
