* [PATCH v3 2/7] xen: introduce the header file for the Xen 9pfs transport protocol
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver Stefano Stabellini
` (4 subsequent siblings)
5 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
konrad.wilk, jgross
It uses the new ring.h macros to declare rings and interfaces.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: konrad.wilk@oracle.com
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
include/xen/interface/io/9pfs.h | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
create mode 100644 include/xen/interface/io/9pfs.h
diff --git a/include/xen/interface/io/9pfs.h b/include/xen/interface/io/9pfs.h
new file mode 100644
index 0000000..276eda4
--- /dev/null
+++ b/include/xen/interface/io/9pfs.h
@@ -0,0 +1,40 @@
+/*
+ * 9pfs.h -- Xen 9PFS transport
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Copyright (C) 2017 Stefano Stabellini <stefano@aporeto.com>
+ */
+
+#ifndef __XEN_PUBLIC_IO_9PFS_H__
+#define __XEN_PUBLIC_IO_9PFS_H__
+
+#include "xen/interface/io/ring.h"
+
+struct xen_9pfs_header {
+ uint32_t size;
+ uint8_t id;
+ uint16_t tag;
+} __attribute__((packed));
+
+#define XEN_9PFS_RING_ORDER 6
+#define XEN_9PFS_RING_SIZE XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
+DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs);
+
+#endif
--
1.9.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 2/7] xen: introduce the header file for the Xen 9pfs transport protocol Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-14 6:24 ` Juergen Gross
2017-03-13 23:50 ` [PATCH v3 4/7] xen/9pfs: connect to the backend Stefano Stabellini
` (3 subsequent siblings)
5 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
jgross, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
Introduce the Xen 9pfs transport driver: add struct xenbus_driver to
register as a xenbus driver and add struct p9_trans_module to register
as v9fs driver.
All functions are empty stubs for now.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: jgross@suse.com
CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
---
net/9p/trans_xen.c | 125 +++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 125 insertions(+)
create mode 100644 net/9p/trans_xen.c
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
new file mode 100644
index 0000000..f072876
--- /dev/null
+++ b/net/9p/trans_xen.c
@@ -0,0 +1,125 @@
+/*
+ * linux/net/9p/trans_xen.c
+ *
+ * Xen transport layer.
+ *
+ * Copyright (C) 2017 by Stefano Stabellini <stefano@aporeto.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <xen/events.h>
+#include <xen/grant_table.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+#include <xen/interface/io/9pfs.h>
+
+#include <linux/module.h>
+#include <net/9p/9p.h>
+#include <net/9p/client.h>
+#include <net/9p/transport.h>
+
+static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
+{
+ return 0;
+}
+
+static int p9_xen_create(struct p9_client *client, const char *addr, char *args)
+{
+ return 0;
+}
+
+static void p9_xen_close(struct p9_client *client)
+{
+}
+
+static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
+{
+ return 0;
+}
+
+static struct p9_trans_module p9_xen_trans = {
+ .name = "xen",
+ .maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
+ .def = 1,
+ .create = p9_xen_create,
+ .close = p9_xen_close,
+ .request = p9_xen_request,
+ .cancel = p9_xen_cancel,
+ .owner = THIS_MODULE,
+};
+
+static const struct xenbus_device_id xen_9pfs_front_ids[] = {
+ { "9pfs" },
+ { "" }
+};
+
+static int xen_9pfs_front_remove(struct xenbus_device *dev)
+{
+ return 0;
+}
+
+static int xen_9pfs_front_probe(struct xenbus_device *dev,
+ const struct xenbus_device_id *id)
+{
+ return 0;
+}
+
+static int xen_9pfs_front_resume(struct xenbus_device *dev)
+{
+ return 0;
+}
+
+static void xen_9pfs_front_changed(struct xenbus_device *dev,
+ enum xenbus_state backend_state)
+{
+}
+
+static struct xenbus_driver xen_9pfs_front_driver = {
+ .ids = xen_9pfs_front_ids,
+ .probe = xen_9pfs_front_probe,
+ .remove = xen_9pfs_front_remove,
+ .resume = xen_9pfs_front_resume,
+ .otherend_changed = xen_9pfs_front_changed,
+};
+
+int p9_trans_xen_init(void)
+{
+ if (!xen_domain())
+ return -ENODEV;
+
+ pr_info("Initialising Xen transport for 9pfs\n");
+
+ v9fs_register_trans(&p9_xen_trans);
+ return xenbus_register_frontend(&xen_9pfs_front_driver);
+}
+module_init(p9_trans_xen_init);
+
+void p9_trans_xen_exit(void)
+{
+ v9fs_unregister_trans(&p9_xen_trans);
+ return xenbus_unregister_driver(&xen_9pfs_front_driver);
+}
+module_exit(p9_trans_xen_exit);
--
1.9.1
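Once the stubs above are filled in by the rest of the series, the transport is selected at mount time via its `.name` ("xen"), with the xenstore `tag` acting as the device name. A hypothetical invocation (mount point and tag are illustrative, not from the patch):

```
mount -t 9p -o trans=xen <9pfs-tag> /mnt
```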
* Re: [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver
2017-03-13 23:50 ` [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver Stefano Stabellini
@ 2017-03-14 6:24 ` Juergen Gross
2017-03-14 20:26 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-14 6:24 UTC (permalink / raw)
To: Stefano Stabellini, xen-devel
Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 14/03/17 00:50, Stefano Stabellini wrote:
> Introduce the Xen 9pfs transport driver: add struct xenbus_driver to
> register as a xenbus driver and add struct p9_trans_module to register
> as v9fs driver.
>
> All functions are empty stubs for now.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: jgross@suse.com
> CC: Eric Van Hensbergen <ericvh@gmail.com>
> CC: Ron Minnich <rminnich@sandia.gov>
> CC: Latchesar Ionkov <lucho@ionkov.net>
> CC: v9fs-developer@lists.sourceforge.net
> ---
> net/9p/trans_xen.c | 125 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 125 insertions(+)
> create mode 100644 net/9p/trans_xen.c
>
> diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> new file mode 100644
> index 0000000..f072876
> --- /dev/null
> +++ b/net/9p/trans_xen.c
> +static struct p9_trans_module p9_xen_trans = {
> + .name = "xen",
> + .maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
I think you can remove the outer brackets.
Juergen
* Re: [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver
2017-03-14 6:24 ` Juergen Gross
@ 2017-03-14 20:26 ` Stefano Stabellini
0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-14 20:26 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
Stefano Stabellini, Eric Van Hensbergen, Ron Minnich,
Latchesar Ionkov, v9fs-developer
On Tue, 14 Mar 2017, Juergen Gross wrote:
> On 14/03/17 00:50, Stefano Stabellini wrote:
> > Introduce the Xen 9pfs transport driver: add struct xenbus_driver to
> > register as a xenbus driver and add struct p9_trans_module to register
> > as v9fs driver.
> >
> > All functions are empty stubs for now.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > CC: jgross@suse.com
> > CC: Eric Van Hensbergen <ericvh@gmail.com>
> > CC: Ron Minnich <rminnich@sandia.gov>
> > CC: Latchesar Ionkov <lucho@ionkov.net>
> > CC: v9fs-developer@lists.sourceforge.net
> > ---
> > net/9p/trans_xen.c | 125 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 125 insertions(+)
> > create mode 100644 net/9p/trans_xen.c
> >
> > diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> > new file mode 100644
> > index 0000000..f072876
> > --- /dev/null
> > +++ b/net/9p/trans_xen.c
>
> > +static struct p9_trans_module p9_xen_trans = {
> > + .name = "xen",
> > + .maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
>
> I think you can remove the outer brackets.
OK
* [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 2/7] xen: introduce the header file for the Xen 9pfs transport protocol Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 3/7] xen/9pfs: introduce Xen 9pfs transport driver Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-14 6:41 ` Juergen Gross
2017-03-13 23:50 ` [PATCH v3 5/7] xen/9pfs: send requests " Stefano Stabellini
` (2 subsequent siblings)
5 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
jgross, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
Implement functions to handle the xenbus handshake. Upon connection,
allocate the rings according to the protocol specification.
Initialize a work_struct and a wait_queue. The work_struct will be used
to schedule work upon receiving an event channel notification from the
backend. The wait_queue will be used to wait when the ring is full and
we need to send a new request.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
---
net/9p/trans_xen.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 240 insertions(+)
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index f072876..974bac3 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -41,6 +41,35 @@
#include <net/9p/client.h>
#include <net/9p/transport.h>
+#define XEN_9PFS_NUM_RINGS 2
+
+/* One per ring, more than one per 9pfs share */
+struct xen_9pfs_dataring {
+ struct xen_9pfs_front_priv *priv;
+
+ struct xen_9pfs_data_intf *intf;
+ grant_ref_t ref;
+ int evtchn;
+ int irq;
+ spinlock_t lock;
+
+ struct xen_9pfs_data data;
+ wait_queue_head_t wq;
+ struct work_struct work;
+};
+
+/* One per 9pfs share */
+struct xen_9pfs_front_priv {
+ struct list_head list;
+ struct xenbus_device *dev;
+ char *tag;
+ struct p9_client *client;
+
+ int num_rings;
+ struct xen_9pfs_dataring *rings;
+};
+static LIST_HEAD(xen_9pfs_devs);
+
static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
{
return 0;
@@ -60,6 +89,21 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
return 0;
}
+static void p9_xen_response(struct work_struct *work)
+{
+}
+
+static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
+{
+ struct xen_9pfs_dataring *ring = r;
+ BUG_ON(!ring || !ring->priv->client);
+
+ wake_up_interruptible(&ring->wq);
+ schedule_work(&ring->work);
+
+ return IRQ_HANDLED;
+}
+
static struct p9_trans_module p9_xen_trans = {
.name = "xen",
.maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
@@ -76,25 +120,221 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
{ "" }
};
+static int xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+{
+ int i, j;
+
+ list_del(&priv->list);
+
+ for (i = 0; i < priv->num_rings; i++) {
+ if (priv->rings[i].intf == NULL)
+ break;
+ if (priv->rings[i].irq > 0)
+ unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
+ if (priv->rings[i].data.in != NULL) {
+ for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++)
+ gnttab_end_foreign_access(priv->rings[i].intf->ref[j], 0, 0);
+ free_pages((unsigned long)priv->rings[i].data.in,
+ XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+ }
+ gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
+ free_page((unsigned long)priv->rings[i].intf);
+ }
+ kfree(priv->rings);
+ kfree(priv);
+
+ return 0;
+}
+
static int xen_9pfs_front_remove(struct xenbus_device *dev)
{
+ int ret;
+ struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
+
+ dev_set_drvdata(&dev->dev, NULL);
+ ret = xen_9pfs_front_free(priv);
+ return ret;
+}
+
+static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
+ struct xen_9pfs_dataring *ring)
+{
+ int i = 0;
+ int ret = -ENOMEM;
+ void *bytes = NULL;
+
+ init_waitqueue_head(&ring->wq);
+ spin_lock_init(&ring->lock);
+ INIT_WORK(&ring->work, p9_xen_response);
+
+ ring->intf = (struct xen_9pfs_data_intf *) get_zeroed_page(GFP_KERNEL | __GFP_ZERO);
+ if (!ring->intf)
+ return ret;
+ ret = ring->ref = gnttab_grant_foreign_access(dev->otherend_id,
+ virt_to_gfn(ring->intf), 0);
+ if (ret < 0)
+ goto out;
+ bytes = (void*)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+ if (bytes == NULL)
+ goto out;
+ for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
+ ret = ring->intf->ref[i] = gnttab_grant_foreign_access(
+ dev->otherend_id, virt_to_gfn(bytes) + i, 0);
+ if (ret < 0)
+ goto out;
+ }
+ ring->data.in = bytes;
+ ring->data.out = bytes + XEN_9PFS_RING_SIZE;
+
+ ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
+ if (ret)
+ goto out;
+ ring->irq = bind_evtchn_to_irqhandler(ring->evtchn, xen_9pfs_front_event_handler,
+ 0, "xen_9pfs-frontend", ring);
+ if (ring->irq < 0) {
+ xenbus_free_evtchn(dev, ring->evtchn);
+ ret = ring->irq;
+ goto out;
+ }
return 0;
+
+out:
+ if (bytes != NULL) {
+ for (i--; i >= 0; i--)
+ gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
+ free_pages((unsigned long)bytes,
+ XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+ }
+ gnttab_end_foreign_access(ring->ref, 0, 0);
+ free_page((unsigned long)ring->intf);
+ return ret;
}
static int xen_9pfs_front_probe(struct xenbus_device *dev,
const struct xenbus_device_id *id)
{
+ int ret, i;
+ struct xenbus_transaction xbt;
+ struct xen_9pfs_front_priv *priv = NULL;
+ char *versions;
+ unsigned int max_rings, max_ring_order, len;
+
+ versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
+ if (!len || strcmp(versions, "1"))
+ return -EINVAL;
+ kfree(versions);
+ ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-rings", "%u", &max_rings);
+ if (ret < 0 || max_rings < XEN_9PFS_NUM_RINGS)
+ return -EINVAL;
+ ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-ring-page-order", "%u", &max_ring_order);
+ if (ret < 0|| max_ring_order < XEN_9PFS_RING_ORDER)
+ return -EINVAL;
+
+
+ priv = kzalloc(sizeof(struct xen_9pfs_front_priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+ priv->dev = dev;
+ priv->num_rings = XEN_9PFS_NUM_RINGS;
+ priv->rings = kzalloc(sizeof(struct xen_9pfs_dataring) * priv->num_rings,
+ GFP_KERNEL);
+ if (!priv->rings) {
+ kfree(priv);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < priv->num_rings; i++) {
+ priv->rings[i].priv = priv;
+ ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
+ if (ret < 0)
+ goto error;
+ }
+
+ again:
+ ret = xenbus_transaction_start(&xbt);
+ if (ret) {
+ xenbus_dev_fatal(dev, ret, "starting transaction");
+ goto error;
+ }
+ ret = xenbus_printf(xbt, dev->nodename, "version", "%u", 1);
+ if (ret)
+ goto error_xenbus;
+ ret = xenbus_printf(xbt, dev->nodename, "num-rings", "%u", priv->num_rings);
+ if (ret)
+ goto error_xenbus;
+ for (i = 0; i < priv->num_rings; i++) {
+ char str[16];
+
+ sprintf(str, "ring-ref%u", i);
+ ret = xenbus_printf(xbt, dev->nodename, str, "%d", priv->rings[i].ref);
+ if (ret)
+ goto error_xenbus;
+
+ sprintf(str, "event-channel-%u", i);
+ ret = xenbus_printf(xbt, dev->nodename, str, "%u", priv->rings[i].evtchn);
+ if (ret)
+ goto error_xenbus;
+ }
+ priv->tag = xenbus_read(xbt, dev->nodename, "tag", NULL);
+ if (ret)
+ goto error_xenbus;
+ ret = xenbus_transaction_end(xbt, 0);
+ if (ret) {
+ if (ret == -EAGAIN)
+ goto again;
+ xenbus_dev_fatal(dev, ret, "completing transaction");
+ goto error;
+ }
+
+ list_add_tail(&priv->list, &xen_9pfs_devs);
+ dev_set_drvdata(&dev->dev, priv);
+ xenbus_switch_state(dev, XenbusStateInitialised);
+
return 0;
+
+ error_xenbus:
+ xenbus_transaction_end(xbt, 1);
+ xenbus_dev_fatal(dev, ret, "writing xenstore");
+ error:
+ dev_set_drvdata(&dev->dev, NULL);
+ xen_9pfs_front_free(priv);
+ return ret;
}
static int xen_9pfs_front_resume(struct xenbus_device *dev)
{
+ dev_warn(&dev->dev, "suspend/resume unsupported\n");
return 0;
}
static void xen_9pfs_front_changed(struct xenbus_device *dev,
enum xenbus_state backend_state)
{
+ switch (backend_state) {
+ case XenbusStateReconfiguring:
+ case XenbusStateReconfigured:
+ case XenbusStateInitialising:
+ case XenbusStateInitialised:
+ case XenbusStateUnknown:
+ break;
+
+ case XenbusStateInitWait:
+ break;
+
+ case XenbusStateConnected:
+ xenbus_switch_state(dev, XenbusStateConnected);
+ break;
+
+ case XenbusStateClosed:
+ if (dev->state == XenbusStateClosed)
+ break;
+ /* Missed the backend's CLOSING state -- fallthrough */
+ case XenbusStateClosing:
+ xenbus_frontend_closed(dev);
+ break;
+ }
}
static struct xenbus_driver xen_9pfs_front_driver = {
--
1.9.1
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-13 23:50 ` [PATCH v3 4/7] xen/9pfs: connect to the backend Stefano Stabellini
@ 2017-03-14 6:41 ` Juergen Gross
2017-03-14 21:22 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-14 6:41 UTC (permalink / raw)
To: Stefano Stabellini, xen-devel
Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 14/03/17 00:50, Stefano Stabellini wrote:
> Implement functions to handle the xenbus handshake. Upon connection,
> allocate the rings according to the protocol specification.
>
> Initialize a work_struct and a wait_queue. The work_struct will be used
> to schedule work upon receiving an event channel notification from the
> backend. The wait_queue will be used to wait when the ring is full and
> we need to send a new request.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> CC: Eric Van Hensbergen <ericvh@gmail.com>
> CC: Ron Minnich <rminnich@sandia.gov>
> CC: Latchesar Ionkov <lucho@ionkov.net>
> CC: v9fs-developer@lists.sourceforge.net
> ---
> net/9p/trans_xen.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 240 insertions(+)
>
> diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> index f072876..974bac3 100644
> --- a/net/9p/trans_xen.c
> +++ b/net/9p/trans_xen.c
> @@ -41,6 +41,35 @@
> #include <net/9p/client.h>
> #include <net/9p/transport.h>
>
> +#define XEN_9PFS_NUM_RINGS 2
> +
> +/* One per ring, more than one per 9pfs share */
> +struct xen_9pfs_dataring {
> + struct xen_9pfs_front_priv *priv;
> +
> + struct xen_9pfs_data_intf *intf;
> + grant_ref_t ref;
> + int evtchn;
> + int irq;
> + spinlock_t lock;
> +
> + struct xen_9pfs_data data;
> + wait_queue_head_t wq;
> + struct work_struct work;
> +};
> +
> +/* One per 9pfs share */
> +struct xen_9pfs_front_priv {
> + struct list_head list;
> + struct xenbus_device *dev;
> + char *tag;
> + struct p9_client *client;
> +
> + int num_rings;
> + struct xen_9pfs_dataring *rings;
> +};
> +static LIST_HEAD(xen_9pfs_devs);
> +
> static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
> {
> return 0;
> @@ -60,6 +89,21 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> return 0;
> }
>
> +static void p9_xen_response(struct work_struct *work)
> +{
> +}
> +
> +static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
> +{
> + struct xen_9pfs_dataring *ring = r;
> + BUG_ON(!ring || !ring->priv->client);
> +
> + wake_up_interruptible(&ring->wq);
> + schedule_work(&ring->work);
> +
> + return IRQ_HANDLED;
> +}
> +
> static struct p9_trans_module p9_xen_trans = {
> .name = "xen",
> .maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
> @@ -76,25 +120,221 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> { "" }
> };
>
> +static int xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
Return type void? You are always returning 0.
> +{
> + int i, j;
> +
> + list_del(&priv->list);
> +
> + for (i = 0; i < priv->num_rings; i++) {
> + if (priv->rings[i].intf == NULL)
> + break;
> + if (priv->rings[i].irq > 0)
> + unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
> + if (priv->rings[i].data.in != NULL) {
> + for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++)
> + gnttab_end_foreign_access(priv->rings[i].intf->ref[j], 0, 0);
> + free_pages((unsigned long)priv->rings[i].data.in,
> + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> + }
> + gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
> + free_page((unsigned long)priv->rings[i].intf);
> + }
> + kfree(priv->rings);
> + kfree(priv);
> +
> + return 0;
> +}
> +
> static int xen_9pfs_front_remove(struct xenbus_device *dev)
> {
> + int ret;
> + struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
> +
> + dev_set_drvdata(&dev->dev, NULL);
> + ret = xen_9pfs_front_free(priv);
> + return ret;
> +}
> +
> +static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
> + struct xen_9pfs_dataring *ring)
> +{
> + int i = 0;
> + int ret = -ENOMEM;
> + void *bytes = NULL;
> +
> + init_waitqueue_head(&ring->wq);
> + spin_lock_init(&ring->lock);
> + INIT_WORK(&ring->work, p9_xen_response);
> +
> + ring->intf = (struct xen_9pfs_data_intf *) get_zeroed_page(GFP_KERNEL | __GFP_ZERO);
I don't think you need __GFP_ZERO here.
> + if (!ring->intf)
> + return ret;
> + ret = ring->ref = gnttab_grant_foreign_access(dev->otherend_id,
> + virt_to_gfn(ring->intf), 0);
> + if (ret < 0)
> + goto out;
> + bytes = (void*)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
Coding style: (void *)
> + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> + if (bytes == NULL)
> + goto out;
> + for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
> + ret = ring->intf->ref[i] = gnttab_grant_foreign_access(
> + dev->otherend_id, virt_to_gfn(bytes) + i, 0);
> + if (ret < 0)
> + goto out;
> + }
> + ring->data.in = bytes;
> + ring->data.out = bytes + XEN_9PFS_RING_SIZE;
> +
> + ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
> + if (ret)
> + goto out;
> + ring->irq = bind_evtchn_to_irqhandler(ring->evtchn, xen_9pfs_front_event_handler,
> + 0, "xen_9pfs-frontend", ring);
Did you think about using request_threaded_irq() instead of a workqueue?
For an example see e.g. drivers/scsi/xen-scsifront.c
> + if (ring->irq < 0) {
> + xenbus_free_evtchn(dev, ring->evtchn);
> + ret = ring->irq;
> + goto out;
> + }
> return 0;
> +
> +out:
> + if (bytes != NULL) {
> + for (i--; i >= 0; i--)
> + gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
> + free_pages((unsigned long)bytes,
> + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> + }
> + gnttab_end_foreign_access(ring->ref, 0, 0);
> + free_page((unsigned long)ring->intf);
> + return ret;
> }
>
> static int xen_9pfs_front_probe(struct xenbus_device *dev,
> const struct xenbus_device_id *id)
> {
> + int ret, i;
> + struct xenbus_transaction xbt;
> + struct xen_9pfs_front_priv *priv = NULL;
> + char *versions;
> + unsigned int max_rings, max_ring_order, len;
> +
> + versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
> + if (!len || strcmp(versions, "1"))
You are leaking versions in case of not being "1".
Can't you use xenbus_read_unsigned() instead of xenbus_read()?
> + return -EINVAL;
> + kfree(versions);
> + ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-rings", "%u", &max_rings);
xenbus_read_unsigned()?
> + if (ret < 0 || max_rings < XEN_9PFS_NUM_RINGS)
> + return -EINVAL;
> + ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-ring-page-order", "%u", &max_ring_order);
xenbus_read_unsigned()
> + if (ret < 0|| max_ring_order < XEN_9PFS_RING_ORDER)
Coding style (missing space before ||)
> + return -EINVAL;
> +
> +
> + priv = kzalloc(sizeof(struct xen_9pfs_front_priv), GFP_KERNEL);
> + if (!priv)
> + return -ENOMEM;
> +
> + priv->dev = dev;
> + priv->num_rings = XEN_9PFS_NUM_RINGS;
> + priv->rings = kzalloc(sizeof(struct xen_9pfs_dataring) * priv->num_rings,
> + GFP_KERNEL);
Coding style: please align the second line to the first parameter of
kzalloc().
> + if (!priv->rings) {
> + kfree(priv);
> + return -ENOMEM;
> + }
> +
> + for (i = 0; i < priv->num_rings; i++) {
> + priv->rings[i].priv = priv;
> + ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
> + if (ret < 0)
> + goto error;
> + }
> +
> + again:
> + ret = xenbus_transaction_start(&xbt);
> + if (ret) {
> + xenbus_dev_fatal(dev, ret, "starting transaction");
> + goto error;
> + }
> + ret = xenbus_printf(xbt, dev->nodename, "version", "%u", 1);
> + if (ret)
> + goto error_xenbus;
> + ret = xenbus_printf(xbt, dev->nodename, "num-rings", "%u", priv->num_rings);
> + if (ret)
> + goto error_xenbus;
> + for (i = 0; i < priv->num_rings; i++) {
> + char str[16];
> +
> + sprintf(str, "ring-ref%u", i);
> + ret = xenbus_printf(xbt, dev->nodename, str, "%d", priv->rings[i].ref);
> + if (ret)
> + goto error_xenbus;
> +
> + sprintf(str, "event-channel-%u", i);
This is dangerous: you are hard coding num_rings always being less than
10 here. Otherwise str[] isn't large enough. Mind adding
BUILD_BUG_ON(XEN_9PFS_NUM_RINGS > 9)?
> + ret = xenbus_printf(xbt, dev->nodename, str, "%u", priv->rings[i].evtchn);
> + if (ret)
> + goto error_xenbus;
> + }
> + priv->tag = xenbus_read(xbt, dev->nodename, "tag", NULL);
> + if (ret)
> + goto error_xenbus;
> + ret = xenbus_transaction_end(xbt, 0);
> + if (ret) {
> + if (ret == -EAGAIN)
> + goto again;
> + xenbus_dev_fatal(dev, ret, "completing transaction");
> + goto error;
> + }
> +
> + list_add_tail(&priv->list, &xen_9pfs_devs);
Don't you need some kind of lock protecting the list?
> + dev_set_drvdata(&dev->dev, priv);
> + xenbus_switch_state(dev, XenbusStateInitialised);
> +
> return 0;
> +
> + error_xenbus:
> + xenbus_transaction_end(xbt, 1);
> + xenbus_dev_fatal(dev, ret, "writing xenstore");
> + error:
> + dev_set_drvdata(&dev->dev, NULL);
> + xen_9pfs_front_free(priv);
> + return ret;
> }
>
> static int xen_9pfs_front_resume(struct xenbus_device *dev)
> {
> + dev_warn(&dev->dev, "suspend/resume unsupported\n");
> return 0;
> }
>
> static void xen_9pfs_front_changed(struct xenbus_device *dev,
> enum xenbus_state backend_state)
> {
> + switch (backend_state) {
> + case XenbusStateReconfiguring:
> + case XenbusStateReconfigured:
> + case XenbusStateInitialising:
> + case XenbusStateInitialised:
> + case XenbusStateUnknown:
> + break;
> +
> + case XenbusStateInitWait:
> + break;
> +
> + case XenbusStateConnected:
> + xenbus_switch_state(dev, XenbusStateConnected);
> + break;
> +
> + case XenbusStateClosed:
> + if (dev->state == XenbusStateClosed)
> + break;
> + /* Missed the backend's CLOSING state -- fallthrough */
> + case XenbusStateClosing:
> + xenbus_frontend_closed(dev);
> + break;
> + }
> }
>
> static struct xenbus_driver xen_9pfs_front_driver = {
>
Juergen
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-14 6:41 ` Juergen Gross
@ 2017-03-14 21:22 ` Stefano Stabellini
2017-03-15 5:08 ` Juergen Gross
0 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-14 21:22 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
Stefano Stabellini, Eric Van Hensbergen, Ron Minnich,
Latchesar Ionkov, v9fs-developer
Hi Juergen,
thank you for the review!
On Tue, 14 Mar 2017, Juergen Gross wrote:
> On 14/03/17 00:50, Stefano Stabellini wrote:
> > Implement functions to handle the xenbus handshake. Upon connection,
> > allocate the rings according to the protocol specification.
> >
> > Initialize a work_struct and a wait_queue. The work_struct will be used
> > to schedule work upon receiving an event channel notification from the
> > backend. The wait_queue will be used to wait when the ring is full and
> > we need to send a new request.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > CC: boris.ostrovsky@oracle.com
> > CC: jgross@suse.com
> > CC: Eric Van Hensbergen <ericvh@gmail.com>
> > CC: Ron Minnich <rminnich@sandia.gov>
> > CC: Latchesar Ionkov <lucho@ionkov.net>
> > CC: v9fs-developer@lists.sourceforge.net
> > ---
> > net/9p/trans_xen.c | 240 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 240 insertions(+)
> >
> > diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> > index f072876..974bac3 100644
> > --- a/net/9p/trans_xen.c
> > +++ b/net/9p/trans_xen.c
> > @@ -41,6 +41,35 @@
> > #include <net/9p/client.h>
> > #include <net/9p/transport.h>
> >
> > +#define XEN_9PFS_NUM_RINGS 2
> > +
> > +/* One per ring, more than one per 9pfs share */
> > +struct xen_9pfs_dataring {
> > + struct xen_9pfs_front_priv *priv;
> > +
> > + struct xen_9pfs_data_intf *intf;
> > + grant_ref_t ref;
> > + int evtchn;
> > + int irq;
> > + spinlock_t lock;
> > +
> > + struct xen_9pfs_data data;
> > + wait_queue_head_t wq;
> > + struct work_struct work;
> > +};
> > +
> > +/* One per 9pfs share */
> > +struct xen_9pfs_front_priv {
> > + struct list_head list;
> > + struct xenbus_device *dev;
> > + char *tag;
> > + struct p9_client *client;
> > +
> > + int num_rings;
> > + struct xen_9pfs_dataring *rings;
> > +};
> > +static LIST_HEAD(xen_9pfs_devs);
> > +
> > static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
> > {
> > return 0;
> > @@ -60,6 +89,21 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> > return 0;
> > }
> >
> > +static void p9_xen_response(struct work_struct *work)
> > +{
> > +}
> > +
> > +static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
> > +{
> > + struct xen_9pfs_dataring *ring = r;
> > + BUG_ON(!ring || !ring->priv->client);
> > +
> > + wake_up_interruptible(&ring->wq);
> > + schedule_work(&ring->work);
> > +
> > + return IRQ_HANDLED;
> > +}
> > +
> > static struct p9_trans_module p9_xen_trans = {
> > .name = "xen",
> > .maxsize = (1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)),
> > @@ -76,25 +120,221 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> > { "" }
> > };
> >
> > +static int xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
>
> Return type void? You are always returning 0.
Good point
> > +{
> > + int i, j;
> > +
> > + list_del(&priv->list);
> > +
> > + for (i = 0; i < priv->num_rings; i++) {
> > + if (priv->rings[i].intf == NULL)
> > + break;
> > + if (priv->rings[i].irq > 0)
> > + unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
> > + if (priv->rings[i].data.in != NULL) {
> > + for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++)
> > + gnttab_end_foreign_access(priv->rings[i].intf->ref[j], 0, 0);
> > + free_pages((unsigned long)priv->rings[i].data.in,
> > + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> > + }
> > + gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
> > + free_page((unsigned long)priv->rings[i].intf);
> > + }
> > + kfree(priv->rings);
> > + kfree(priv);
> > +
> > + return 0;
> > +}
> > +
> > static int xen_9pfs_front_remove(struct xenbus_device *dev)
> > {
> > + int ret;
> > + struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
> > +
> > + dev_set_drvdata(&dev->dev, NULL);
> > + ret = xen_9pfs_front_free(priv);
> > + return ret;
> > +}
> > +
> > +static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
> > + struct xen_9pfs_dataring *ring)
> > +{
> > + int i = 0;
> > + int ret = -ENOMEM;
> > + void *bytes = NULL;
> > +
> > + init_waitqueue_head(&ring->wq);
> > + spin_lock_init(&ring->lock);
> > + INIT_WORK(&ring->work, p9_xen_response);
> > +
> > + ring->intf = (struct xen_9pfs_data_intf *) get_zeroed_page(GFP_KERNEL | __GFP_ZERO);
>
> I don't think you need __GFP_ZERO here.
You are right
> > + if (!ring->intf)
> > + return ret;
> > + ret = ring->ref = gnttab_grant_foreign_access(dev->otherend_id,
> > + virt_to_gfn(ring->intf), 0);
> > + if (ret < 0)
> > + goto out;
> > + bytes = (void*)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
>
> Coding style: (void *)
OK
> > + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> > + if (bytes == NULL)
> > + goto out;
> > + for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
> > + ret = ring->intf->ref[i] = gnttab_grant_foreign_access(
> > + dev->otherend_id, virt_to_gfn(bytes) + i, 0);
> > + if (ret < 0)
> > + goto out;
> > + }
> > + ring->data.in = bytes;
> > + ring->data.out = bytes + XEN_9PFS_RING_SIZE;
> > +
> > + ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
> > + if (ret)
> > + goto out;
> > + ring->irq = bind_evtchn_to_irqhandler(ring->evtchn, xen_9pfs_front_event_handler,
> > + 0, "xen_9pfs-frontend", ring);
>
> Did you think about using request_threaded_irq() instead of a workqueue?
> For an example see e.g. drivers/scsi/xen-scsifront.c
I like workqueues :-) It might come down to personal preferences, but I
think workqueues are more flexible and a better fit for this use case.
Not only is it easy to schedule work in a workqueue from the interrupt
handler, but also they can be used for sleeping in the request function
if there is not enough room on the ring. Besides, they can easily be
configured to share a single thread or to have multiple independent
threads.
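For instance (a hypothetical sketch in kernel C, not code from this patch; p9_xen_write_todo() is an assumed helper name), the request path can sleep on the ring's wait queue until the interrupt handler wakes it:

```c
/* Hypothetical sketch, not from the patch: the request function sleeps
 * until the assumed helper p9_xen_write_todo() reports enough free space
 * on the ring. The interrupt handler's wake_up_interruptible(&ring->wq)
 * resumes it once the backend has consumed requests. */
static int p9_xen_request_wait(struct xen_9pfs_dataring *ring, u32 size)
{
	if (wait_event_interruptible(ring->wq,
				     p9_xen_write_todo(ring, size)))
		return -ERESTARTSYS;
	/* ... copy the request onto the ring under ring->lock ... */
	return 0;
}
```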
> > + if (ring->irq < 0) {
> > + xenbus_free_evtchn(dev, ring->evtchn);
> > + ret = ring->irq;
> > + goto out;
> > + }
> > return 0;
> > +
> > +out:
> > + if (bytes != NULL) {
> > + for (i--; i >= 0; i--)
> > + gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
> > + free_pages((unsigned long)bytes,
> > + XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
> > + }
> > + gnttab_end_foreign_access(ring->ref, 0, 0);
> > + free_page((unsigned long)ring->intf);
> > + return ret;
> > }
> >
> > static int xen_9pfs_front_probe(struct xenbus_device *dev,
> > const struct xenbus_device_id *id)
> > {
> > + int ret, i;
> > + struct xenbus_transaction xbt;
> > + struct xen_9pfs_front_priv *priv = NULL;
> > + char *versions;
> > + unsigned int max_rings, max_ring_order, len;
> > +
> > + versions = xenbus_read(XBT_NIL, dev->otherend, "versions", &len);
> > + if (!len || strcmp(versions, "1"))
>
> You are leaking versions in case of not being "1".
I'll fix
> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
I can use xenbus_read_unsigned in the other cases below, but not here,
because versions is in the form: "1,3,4"
> > + return -EINVAL;
> > + kfree(versions);
> > + ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-rings", "%u", &max_rings);
>
> xenbus_read_unsigned()?
Good idea, I'll use it
> > + if (ret < 0 || max_rings < XEN_9PFS_NUM_RINGS)
> > + return -EINVAL;
> > + ret = xenbus_scanf(XBT_NIL, dev->otherend, "max-ring-page-order", "%u", &max_ring_order);
>
> xenbus_read_unsigned()
Yep
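For reference, the suggested replacement might look like this (sketch only, kernel C; xenbus_read_unsigned() returns the supplied default on a failed read, so a default of 0 fails both checks):

```c
/* Sketch of the suggested change, not the final patch: */
max_rings = xenbus_read_unsigned(dev->otherend, "max-rings", 0);
if (max_rings < XEN_9PFS_NUM_RINGS)
	return -EINVAL;
max_ring_order = xenbus_read_unsigned(dev->otherend,
				      "max-ring-page-order", 0);
if (max_ring_order < XEN_9PFS_RING_ORDER)
	return -EINVAL;
```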
> > + if (ret < 0|| max_ring_order < XEN_9PFS_RING_ORDER)
>
> Coding style (missing space before ||)
OK
> > + return -EINVAL;
> > +
> > +
> > + priv = kzalloc(sizeof(struct xen_9pfs_front_priv), GFP_KERNEL);
> > + if (!priv)
> > + return -ENOMEM;
> > +
> > + priv->dev = dev;
> > + priv->num_rings = XEN_9PFS_NUM_RINGS;
> > + priv->rings = kzalloc(sizeof(struct xen_9pfs_dataring) * priv->num_rings,
> > + GFP_KERNEL);
>
> Coding style: please align the second line to the first parameter of
> kzalloc().
OK
> > + if (!priv->rings) {
> > + kfree(priv);
> > + return -ENOMEM;
> > + }
> > +
> > + for (i = 0; i < priv->num_rings; i++) {
> > + priv->rings[i].priv = priv;
> > + ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
> > + if (ret < 0)
> > + goto error;
> > + }
> > +
> > + again:
> > + ret = xenbus_transaction_start(&xbt);
> > + if (ret) {
> > + xenbus_dev_fatal(dev, ret, "starting transaction");
> > + goto error;
> > + }
> > + ret = xenbus_printf(xbt, dev->nodename, "version", "%u", 1);
> > + if (ret)
> > + goto error_xenbus;
> > + ret = xenbus_printf(xbt, dev->nodename, "num-rings", "%u", priv->num_rings);
> > + if (ret)
> > + goto error_xenbus;
> > + for (i = 0; i < priv->num_rings; i++) {
> > + char str[16];
> > +
> > + sprintf(str, "ring-ref%u", i);
> > + ret = xenbus_printf(xbt, dev->nodename, str, "%d", priv->rings[i].ref);
> > + if (ret)
> > + goto error_xenbus;
> > +
> > + sprintf(str, "event-channel-%u", i);
>
> This is dangerous: you are hard coding num_rings always being less than
> 10 here. Otherwise str[] isn't large enough. Mind adding
> BUILD_BUG_ON(XEN_9PFS_NUM_RINGS > 9)?
Yes, good idea.
> > + ret = xenbus_printf(xbt, dev->nodename, str, "%u", priv->rings[i].evtchn);
> > + if (ret)
> > + goto error_xenbus;
> > + }
> > + priv->tag = xenbus_read(xbt, dev->nodename, "tag", NULL);
> > + if (ret)
> > + goto error_xenbus;
> > + ret = xenbus_transaction_end(xbt, 0);
> > + if (ret) {
> > + if (ret == -EAGAIN)
> > + goto again;
> > + xenbus_dev_fatal(dev, ret, "completing transaction");
> > + goto error;
> > + }
> > +
> > + list_add_tail(&priv->list, &xen_9pfs_devs);
>
> Don't you need some kind of lock protecting the list?
I think I do, I'll add one.
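For example (sketch only, kernel C; the lock name xen_9pfs_lock is assumed here, not taken from the patch):

```c
/* Sketch of the fix: guard the global device list with a lock so that
 * concurrent probe/remove cannot corrupt it. */
static DEFINE_RWLOCK(xen_9pfs_lock);

/* in xen_9pfs_front_probe(): */
write_lock(&xen_9pfs_lock);
list_add_tail(&priv->list, &xen_9pfs_devs);
write_unlock(&xen_9pfs_lock);

/* in xen_9pfs_front_free(): */
write_lock(&xen_9pfs_lock);
list_del(&priv->list);
write_unlock(&xen_9pfs_lock);
```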
> > + dev_set_drvdata(&dev->dev, priv);
> > + xenbus_switch_state(dev, XenbusStateInitialised);
> > +
> > return 0;
> > +
> > + error_xenbus:
> > + xenbus_transaction_end(xbt, 1);
> > + xenbus_dev_fatal(dev, ret, "writing xenstore");
> > + error:
> > + dev_set_drvdata(&dev->dev, NULL);
> > + xen_9pfs_front_free(priv);
> > + return ret;
> > }
> >
> > static int xen_9pfs_front_resume(struct xenbus_device *dev)
> > {
> > + dev_warn(&dev->dev, "suspend/resume unsupported\n");
> > return 0;
> > }
> >
> > static void xen_9pfs_front_changed(struct xenbus_device *dev,
> > enum xenbus_state backend_state)
> > {
> > + switch (backend_state) {
> > + case XenbusStateReconfiguring:
> > + case XenbusStateReconfigured:
> > + case XenbusStateInitialising:
> > + case XenbusStateInitialised:
> > + case XenbusStateUnknown:
> > + break;
> > +
> > + case XenbusStateInitWait:
> > + break;
> > +
> > + case XenbusStateConnected:
> > + xenbus_switch_state(dev, XenbusStateConnected);
> > + break;
> > +
> > + case XenbusStateClosed:
> > + if (dev->state == XenbusStateClosed)
> > + break;
> > + /* Missed the backend's CLOSING state -- fallthrough */
> > + case XenbusStateClosing:
> > + xenbus_frontend_closed(dev);
> > + break;
> > + }
> > }
> >
> > static struct xenbus_driver xen_9pfs_front_driver = {
> >
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-14 21:22 ` Stefano Stabellini
@ 2017-03-15 5:08 ` Juergen Gross
2017-03-15 18:44 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-15 5:08 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel, linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 14/03/17 22:22, Stefano Stabellini wrote:
> Hi Juergen,
>
> thank you for the review!
>
> On Tue, 14 Mar 2017, Juergen Gross wrote:
>> On 14/03/17 00:50, Stefano Stabellini wrote:
>>> Implement functions to handle the xenbus handshake. Upon connection,
>>> allocate the rings according to the protocol specification.
>>>
>>> Initialize a work_struct and a wait_queue. The work_struct will be used
>>> to schedule work upon receiving an event channel notification from the
>>> backend. The wait_queue will be used to wait when the ring is full and
>>> we need to send a new request.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
>>> CC: boris.ostrovsky@oracle.com
>>> CC: jgross@suse.com
>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
>>> CC: Ron Minnich <rminnich@sandia.gov>
>>> CC: Latchesar Ionkov <lucho@ionkov.net>
>>> CC: v9fs-developer@lists.sourceforge.net
>>> ---
>> Did you think about using request_threaded_irq() instead of a workqueue?
>> For an example see e.g. drivers/scsi/xen-scsifront.c
>
> I like workqueues :-) It might come down to personal preferences, but I
> think workqueues are more flexible and a better fit for this use case.
> Not only is it easy to schedule work in a workqueue from the interrupt
> handler, but also they can be used for sleeping in the request function
> if there is not enough room on the ring. Besides, they can easily be
> configured to share a single thread or to have multiple independent
> threads.
I'm fine with the workqueues as long as you have decided to use them
considering the alternatives. :-)
>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
>
> I can use xenbus_read_unsigned in the other cases below, but not here,
> because versions is in the form: "1,3,4"
Is this documented somewhere?
Hmm, are any of the Xenstore entries documented? Shouldn't this be done
in xen_9pfs.h ?
Juergen
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-15 5:08 ` Juergen Gross
@ 2017-03-15 18:44 ` Stefano Stabellini
2017-03-16 5:54 ` Juergen Gross
0 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-15 18:44 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
Stefano Stabellini, Eric Van Hensbergen, Ron Minnich,
Latchesar Ionkov, v9fs-developer
On Wed, 15 Mar 2017, Juergen Gross wrote:
> On 14/03/17 22:22, Stefano Stabellini wrote:
> > Hi Juergen,
> >
> > thank you for the review!
> >
> > On Tue, 14 Mar 2017, Juergen Gross wrote:
> >> On 14/03/17 00:50, Stefano Stabellini wrote:
> >>> Implement functions to handle the xenbus handshake. Upon connection,
> >>> allocate the rings according to the protocol specification.
> >>>
> >>> Initialize a work_struct and a wait_queue. The work_struct will be used
> >>> to schedule work upon receiving an event channel notification from the
> >>> backend. The wait_queue will be used to wait when the ring is full and
> >>> we need to send a new request.
> >>>
> >>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> >>> CC: boris.ostrovsky@oracle.com
> >>> CC: jgross@suse.com
> >>> CC: Eric Van Hensbergen <ericvh@gmail.com>
> >>> CC: Ron Minnich <rminnich@sandia.gov>
> >>> CC: Latchesar Ionkov <lucho@ionkov.net>
> >>> CC: v9fs-developer@lists.sourceforge.net
> >>> ---
>
> >> Did you think about using request_threaded_irq() instead of a workqueue?
> >> For an example see e.g. drivers/scsi/xen-scsifront.c
> >
> > I like workqueues :-) It might come down to personal preferences, but I
> > think workqueues are more flexible and a better fit for this use case.
> > Not only is it easy to schedule work in a workqueue from the interrupt
> > handler, but also they can be used for sleeping in the request function
> > if there is not enough room on the ring. Besides, they can easily be
> > configured to share a single thread or to have multiple independent
> > threads.
>
> I'm fine with the workqueues as long as you have decided to use them
> considering the alternatives. :-)
>
> >> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
> >
> > I can use xenbus_read_unsigned in the other cases below, but not here,
> > because versions is in the form: "1,3,4"
>
> Is this documented somewhere?
>
> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
> in xen_9pfs.h ?
They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
that it's all written there, especially the semantics, I didn't repeat
it in xen_9pfs.h
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-15 18:44 ` Stefano Stabellini
@ 2017-03-16 5:54 ` Juergen Gross
2017-03-16 18:03 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-16 5:54 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel, linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 15/03/17 19:44, Stefano Stabellini wrote:
> On Wed, 15 Mar 2017, Juergen Gross wrote:
>> On 14/03/17 22:22, Stefano Stabellini wrote:
>>> Hi Juergen,
>>>
>>> thank you for the review!
>>>
>>> On Tue, 14 Mar 2017, Juergen Gross wrote:
>>>> On 14/03/17 00:50, Stefano Stabellini wrote:
>>>>> Implement functions to handle the xenbus handshake. Upon connection,
>>>>> allocate the rings according to the protocol specification.
>>>>>
>>>>> Initialize a work_struct and a wait_queue. The work_struct will be used
>>>>> to schedule work upon receiving an event channel notification from the
>>>>> backend. The wait_queue will be used to wait when the ring is full and
>>>>> we need to send a new request.
>>>>>
>>>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
>>>>> CC: boris.ostrovsky@oracle.com
>>>>> CC: jgross@suse.com
>>>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
>>>>> CC: Ron Minnich <rminnich@sandia.gov>
>>>>> CC: Latchesar Ionkov <lucho@ionkov.net>
>>>>> CC: v9fs-developer@lists.sourceforge.net
>>>>> ---
>>
>>>> Did you think about using request_threaded_irq() instead of a workqueue?
>>>> For an example see e.g. drivers/scsi/xen-scsifront.c
>>>
>>> I like workqueues :-) It might come down to personal preferences, but I
>>> think workqueues are more flexible and a better fit for this use case.
>>> Not only is it easy to schedule work in a workqueue from the interrupt
>>> handler, but also they can be used for sleeping in the request function
>>> if there is not enough room on the ring. Besides, they can easily be
>>> configured to share a single thread or to have multiple independent
>>> threads.
>>
>> I'm fine with the workqueues as long as you have decided to use them
>> considering the alternatives. :-)
>>
>>>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
>>>
>>> I can use xenbus_read_unsigned in the other cases below, but not here,
>>> because versions is in the form: "1,3,4"
>>
>> Is this documented somewhere?
>>
>> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
>> in xen_9pfs.h ?
>
> They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
> that it's all written there, especially the semantics, I didn't repeat
> it in xen_9pfs.h
Looking at it from the Linux kernel perspective this documentation is
not really highly visible. For me it is okay, but there have been
multiple examples in the past where documentation in the Xen repository
wasn't regarded as being sufficient.
I recommend moving the documentation regarding the interface into the
header file like for the other pv interfaces.
Juergen
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-16 5:54 ` Juergen Gross
@ 2017-03-16 18:03 ` Stefano Stabellini
2017-03-17 4:54 ` Juergen Gross
0 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-16 18:03 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
Stefano Stabellini, Eric Van Hensbergen, Ron Minnich,
Latchesar Ionkov, v9fs-developer
On Thu, 16 Mar 2017, Juergen Gross wrote:
> On 15/03/17 19:44, Stefano Stabellini wrote:
> > On Wed, 15 Mar 2017, Juergen Gross wrote:
> >> On 14/03/17 22:22, Stefano Stabellini wrote:
> >>> Hi Juergen,
> >>>
> >>> thank you for the review!
> >>>
> >>> On Tue, 14 Mar 2017, Juergen Gross wrote:
> >>>> On 14/03/17 00:50, Stefano Stabellini wrote:
> >>>>> Implement functions to handle the xenbus handshake. Upon connection,
> >>>>> allocate the rings according to the protocol specification.
> >>>>>
> >>>>> Initialize a work_struct and a wait_queue. The work_struct will be used
> >>>>> to schedule work upon receiving an event channel notification from the
> >>>>> backend. The wait_queue will be used to wait when the ring is full and
> >>>>> we need to send a new request.
> >>>>>
> >>>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> >>>>> CC: boris.ostrovsky@oracle.com
> >>>>> CC: jgross@suse.com
> >>>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
> >>>>> CC: Ron Minnich <rminnich@sandia.gov>
> >>>>> CC: Latchesar Ionkov <lucho@ionkov.net>
> >>>>> CC: v9fs-developer@lists.sourceforge.net
> >>>>> ---
> >>
> >>>> Did you think about using request_threaded_irq() instead of a workqueue?
> >>>> For an example see e.g. drivers/scsi/xen-scsifront.c
> >>>
> >>> I like workqueues :-) It might come down to personal preferences, but I
> >>> think workqueues are more flexible and a better fit for this use case.
> >>> Not only is it easy to schedule work in a workqueue from the interrupt
> >>> handler, but also they can be used for sleeping in the request function
> >>> if there is not enough room on the ring. Besides, they can easily be
> >>> configured to share a single thread or to have multiple independent
> >>> threads.
> >>
> >> I'm fine with the workqueues as long as you have decided to use them
> >> considering the alternatives. :-)
> >>
> >>>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
> >>>
> >>> I can use xenbus_read_unsigned in the other cases below, but not here,
> >>> because versions is in the form: "1,3,4"
> >>
> >> Is this documented somewhere?
> >>
> >> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
> >> in xen_9pfs.h ?
> >
> > They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
> > that it's all written there, especially the semantics, I didn't repeat
> > it in xen_9pfs.h
>
> Looking at it from the Linux kernel perspective this documentation is
> not really highly visible. For me it is okay, but there have been
> multiple examples in the past where documentation in the Xen repository
> wasn't regarded as being sufficient.
>
> I recommend moving the documentation regarding the interface into the
> header file like for the other pv interfaces.
What about adding a link such as:
http://xenbits.xen.org/gitweb/?p=xen.git;a=blob_plain;f=docs/misc/9pfs.markdown;hb=HEAD
that should be easily accessible, right? For other specifications, such
as 9p, only links are provided (see Documentation/filesystems/9p.txt).
I am suggesting a link, because then we are sure the specs don't go out
of sync. I realize that older PV protocols were described in header
files, but that was before Xen Project had a formal process for getting
new specifications accepted, and a formal place where to publish them.
* Re: [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-16 18:03 ` Stefano Stabellini
@ 2017-03-17 4:54 ` Juergen Gross
2017-03-17 15:13 ` [Xen-devel] " Konrad Rzeszutek Wilk
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-17 4:54 UTC (permalink / raw)
To: Stefano Stabellini
Cc: xen-devel, linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 16/03/17 19:03, Stefano Stabellini wrote:
> On Thu, 16 Mar 2017, Juergen Gross wrote:
>> On 15/03/17 19:44, Stefano Stabellini wrote:
>>> On Wed, 15 Mar 2017, Juergen Gross wrote:
>>>> On 14/03/17 22:22, Stefano Stabellini wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> thank you for the review!
>>>>>
>>>>> On Tue, 14 Mar 2017, Juergen Gross wrote:
>>>>>> On 14/03/17 00:50, Stefano Stabellini wrote:
>>>>>>> Implement functions to handle the xenbus handshake. Upon connection,
>>>>>>> allocate the rings according to the protocol specification.
>>>>>>>
>>>>>>> Initialize a work_struct and a wait_queue. The work_struct will be used
>>>>>>> to schedule work upon receiving an event channel notification from the
>>>>>>> backend. The wait_queue will be used to wait when the ring is full and
>>>>>>> we need to send a new request.
>>>>>>>
>>>>>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
>>>>>>> CC: boris.ostrovsky@oracle.com
>>>>>>> CC: jgross@suse.com
>>>>>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
>>>>>>> CC: Ron Minnich <rminnich@sandia.gov>
>>>>>>> CC: Latchesar Ionkov <lucho@ionkov.net>
>>>>>>> CC: v9fs-developer@lists.sourceforge.net
>>>>>>> ---
>>>>
>>>>>> Did you think about using request_threaded_irq() instead of a workqueue?
>>>>>> For an example see e.g. drivers/scsi/xen-scsifront.c
>>>>>
>>>>> I like workqueues :-) It might come down to personal preferences, but I
>>>>> think workqueues are more flexible and a better fit for this use case.
>>>>> Not only is it easy to schedule work in a workqueue from the interrupt
>>>>> handler, but also they can be used for sleeping in the request function
>>>>> if there is not enough room on the ring. Besides, they can easily be
>>>>> configured to share a single thread or to have multiple independent
>>>>> threads.
>>>>
>>>> I'm fine with the workqueues as long as you have decided to use them
>>>> considering the alternatives. :-)
>>>>
>>>>>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
>>>>>
>>>>> I can use xenbus_read_unsigned in the other cases below, but not here,
>>>>> because versions is in the form: "1,3,4"
>>>>
>>>> Is this documented somewhere?
>>>>
>>>> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
>>>> in xen_9pfs.h ?
>>>
>>> They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
>>> that it's all written there, especially the semantics, I didn't repeat
>>> it in xen_9pfs.h
>>
>> Looking at it from the Linux kernel perspective this documentation is
>> not really highly visible. For me it is okay, but there have been
>> multiple examples in the past where documentation in the Xen repository
>> wasn't regarded as being sufficient.
>>
>> I recommend moving the documentation regarding the interface into the
>> header file like for the other pv interfaces.
>
> What about adding a link such as:
>
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob_plain;f=docs/misc/9pfs.markdown;hb=HEAD
>
> that should be easily accessible, right? For other specifications, such
> as 9p, only links are provided (see Documentation/filesystems/9p.txt).
> I am suggesting a link, because then we are sure the specs don't go out
> of sync. I realize that older PV protocols were described in header
> files, but that was before Xen Project had a formal process for getting
> new specifications accepted, and a formal place where to publish them.
Fine with me. Let's see if other maintainers are okay with it, too.
Juergen
* Re: [Xen-devel] [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-17 4:54 ` Juergen Gross
@ 2017-03-17 15:13 ` Konrad Rzeszutek Wilk
2017-03-17 20:55 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Konrad Rzeszutek Wilk @ 2017-03-17 15:13 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, Latchesar Ionkov, Eric Van Hensbergen,
linux-kernel, Stefano Stabellini, v9fs-developer, Ron Minnich,
xen-devel, boris.ostrovsky
On Fri, Mar 17, 2017 at 05:54:47AM +0100, Juergen Gross wrote:
> On 16/03/17 19:03, Stefano Stabellini wrote:
> > On Thu, 16 Mar 2017, Juergen Gross wrote:
> >> On 15/03/17 19:44, Stefano Stabellini wrote:
> >>> On Wed, 15 Mar 2017, Juergen Gross wrote:
> >>>> On 14/03/17 22:22, Stefano Stabellini wrote:
> >>>>> Hi Juergen,
> >>>>>
> >>>>> thank you for the review!
> >>>>>
> >>>>> On Tue, 14 Mar 2017, Juergen Gross wrote:
> >>>>>> On 14/03/17 00:50, Stefano Stabellini wrote:
> >>>>>>> Implement functions to handle the xenbus handshake. Upon connection,
> >>>>>>> allocate the rings according to the protocol specification.
> >>>>>>>
> >>>>>>> Initialize a work_struct and a wait_queue. The work_struct will be used
> >>>>>>> to schedule work upon receiving an event channel notification from the
> >>>>>>> backend. The wait_queue will be used to wait when the ring is full and
> >>>>>>> we need to send a new request.
> >>>>>>>
> >>>>>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> >>>>>>> CC: boris.ostrovsky@oracle.com
> >>>>>>> CC: jgross@suse.com
> >>>>>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
> >>>>>>> CC: Ron Minnich <rminnich@sandia.gov>
> >>>>>>> CC: Latchesar Ionkov <lucho@ionkov.net>
> >>>>>>> CC: v9fs-developer@lists.sourceforge.net
> >>>>>>> ---
> >>>>
> >>>>>> Did you think about using request_threaded_irq() instead of a workqueue?
> >>>>>> For an example see e.g. drivers/scsi/xen-scsifront.c
> >>>>>
> >>>>> I like workqueues :-) It might come down to personal preferences, but I
> >>>>> think workqueues are more flexible and a better fit for this use case.
> >>>>> Not only is it easy to schedule work in a workqueue from the interrupt
> >>>>> handler, but also they can be used for sleeping in the request function
> >>>>> if there is not enough room on the ring. Besides, they can easily be
> >>>>> configured to share a single thread or to have multiple independent
> >>>>> threads.
> >>>>
> >>>> I'm fine with the workqueues as long as you have decided to use them
> >>>> considering the alternatives. :-)
> >>>>
> >>>>>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
> >>>>>
> >>>>> I can use xenbus_read_unsigned in the other cases below, but not here,
> >>>>> because versions is in the form: "1,3,4"
> >>>>
> >>>> Is this documented somewhere?
> >>>>
> >>>> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
> >>>> in xen_9pfs.h ?
> >>>
> >>> They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
> >>> that it's all written there, especially the semantics, I didn't repeat
> >>> it in xen_9pfs.h
> >>
> >> Looking at it from the Linux kernel perspective this documentation is
> >> not really highly visible. For me it is okay, but there have been
> >> multiple examples in the past where documentation in the Xen repository
> >> wasn't regarded as being sufficient.
> >>
> >> I recommend moving the documentation regarding the interface into the
> >> header file like for the other pv interfaces.
> >
> > What about adding a link such as:
> >
> > http://xenbits.xen.org/gitweb/?p=xen.git;a=blob_plain;f=docs/misc/9pfs.markdown;hb=HEAD
Ewww.
How about https://xenbits.xen.org/docs/unstable/misc/9pfs.html
which gets updated daily.
> >
> > that should be easily accessible, right? For other specifications, such
> > as 9p, only links are provided (see Documentation/filesystems/9p.txt).
> > I am suggesting a link, because then we are sure the specs don't go out
> > of sync. I realize that older PV protocols were described in header
> > files, but that was before Xen Project had a formal process for getting
> > new specifications accepted, and a formal place where to publish them.
>
> Fine with me. Let's see if other maintainers are okay with it, too.
>
>
> Juergen
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel
* Re: [Xen-devel] [PATCH v3 4/7] xen/9pfs: connect to the backend
2017-03-17 15:13 ` [Xen-devel] " Konrad Rzeszutek Wilk
@ 2017-03-17 20:55 ` Stefano Stabellini
0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-17 20:55 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: Juergen Gross, Stefano Stabellini, Latchesar Ionkov,
Eric Van Hensbergen, linux-kernel, Stefano Stabellini,
v9fs-developer, Ron Minnich, xen-devel, boris.ostrovsky
On Fri, 17 Mar 2017, Konrad Rzeszutek Wilk wrote:
> On Fri, Mar 17, 2017 at 05:54:47AM +0100, Juergen Gross wrote:
> > On 16/03/17 19:03, Stefano Stabellini wrote:
> > > On Thu, 16 Mar 2017, Juergen Gross wrote:
> > >> On 15/03/17 19:44, Stefano Stabellini wrote:
> > >>> On Wed, 15 Mar 2017, Juergen Gross wrote:
> > >>>> On 14/03/17 22:22, Stefano Stabellini wrote:
> > >>>>> Hi Juergen,
> > >>>>>
> > >>>>> thank you for the review!
> > >>>>>
> > >>>>> On Tue, 14 Mar 2017, Juergen Gross wrote:
> > >>>>>> On 14/03/17 00:50, Stefano Stabellini wrote:
> > >>>>>>> Implement functions to handle the xenbus handshake. Upon connection,
> > >>>>>>> allocate the rings according to the protocol specification.
> > >>>>>>>
> > >>>>>>> Initialize a work_struct and a wait_queue. The work_struct will be used
> > >>>>>>> to schedule work upon receiving an event channel notification from the
> > >>>>>>> backend. The wait_queue will be used to wait when the ring is full and
> > >>>>>>> we need to send a new request.
> > >>>>>>>
> > >>>>>>> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > >>>>>>> CC: boris.ostrovsky@oracle.com
> > >>>>>>> CC: jgross@suse.com
> > >>>>>>> CC: Eric Van Hensbergen <ericvh@gmail.com>
> > >>>>>>> CC: Ron Minnich <rminnich@sandia.gov>
> > >>>>>>> CC: Latchesar Ionkov <lucho@ionkov.net>
> > >>>>>>> CC: v9fs-developer@lists.sourceforge.net
> > >>>>>>> ---
> > >>>>
> > >>>>>> Did you think about using request_threaded_irq() instead of a workqueue?
> > >>>>>> For an example see e.g. drivers/scsi/xen-scsifront.c
> > >>>>>
> > >>>>> I like workqueues :-) It might come down to personal preferences, but I
> > >>>>> think workqueues are more flexible and a better fit for this use case.
> > >>>>> Not only is it easy to schedule work in a workqueue from the interrupt
> > >>>>> handler, but also they can be used for sleeping in the request function
> > >>>>> if there is not enough room on the ring. Besides, they can easily be
> > >>>>> configured to share a single thread or to have multiple independent
> > >>>>> threads.
> > >>>>
> > >>>> I'm fine with the workqueues as long as you have decided to use them
> > >>>> considering the alternatives. :-)
> > >>>>
> > >>>>>> Can't you use xenbus_read_unsigned() instead of xenbus_read()?
> > >>>>>
> > >>>>> I can use xenbus_read_unsigned in the other cases below, but not here,
> > >>>>> because versions is in the form: "1,3,4"
> > >>>>
> > >>>> Is this documented somewhere?
> > >>>>
> > >>>> Hmm, are any of the Xenstore entries documented? Shouldn't this be done
> > >>>> in xen_9pfs.h ?
> > >>>
> > >>> They are documented in docs/misc/9pfs.markdown, under "Xenstore". Given
> > >>> that it's all written there, especially the semantics, I didn't repeat
> > >>> it in xen_9pfs.h
> > >>
> > >> Looking at it from the Linux kernel perspective this documentation is
> > >> not really highly visible. For me it is okay, but there have been
> > >> multiple examples in the past where documentation in the Xen repository
> > >> wasn't regarded as being sufficient.
> > >>
> > >> I recommend moving the documentation regarding the interface into the
> > >> header file like for the other pv interfaces.
> > >
> > > What about adding a link such as:
> > >
> > > http://xenbits.xen.org/gitweb/?p=xen.git;a=blob_plain;f=docs/misc/9pfs.markdown;hb=HEAD
>
> Ewww.
>
>
> How about https://xenbits.xen.org/docs/unstable/misc/9pfs.html
>
> which gets updated daily.
Great idea, I'll do that
> > >
> > > that should be easily accessible, right? For other specifications, such
> > > as 9p, only links are provided (see Documentation/filesystems/9p.txt).
> > > I am suggesting a link, because then we are sure the specs don't go out
> > > of sync. I realize that older PV protocols were described in header
> > > files, but that was before Xen Project had a formal process for getting
> > > new specifications accepted, and a formal place where to publish them.
> >
> > Fine with me. Let's see if other maintainers are okay with it, too.
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 5/7] xen/9pfs: send requests to the backend
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
` (2 preceding siblings ...)
2017-03-13 23:50 ` [PATCH v3 4/7] xen/9pfs: connect to the backend Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 6/7] xen/9pfs: receive responses Stefano Stabellini
2017-03-13 23:50 ` [PATCH v3 7/7] xen/9pfs: build 9pfs Xen transport driver Stefano Stabellini
5 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
jgross, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
Implement struct p9_trans_module create and close functions by looking
at the available Xen 9pfs frontend-backend connections. We don't expect
many frontend-backend connections, thus walking a list is OK.
Send requests to the backend by copying each request to one of the
available rings (each frontend-backend connection comes with multiple
rings). Handle the ring and notifications following the 9pfs
specification. If there are not enough free bytes on the ring for the
request, wait on the wait_queue: the backend will send a notification
after consuming more requests.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: jgross@suse.com
CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
---
net/9p/trans_xen.c | 81 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 79 insertions(+), 2 deletions(-)
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 974bac3..b40bbcb 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -70,22 +70,99 @@ struct xen_9pfs_front_priv {
};
static LIST_HEAD(xen_9pfs_devs);
+/* We don't currently allow canceling of requests */
static int p9_xen_cancel(struct p9_client *client, struct p9_req_t *req)
{
- return 0;
+ return 1;
}
static int p9_xen_create(struct p9_client *client, const char *addr, char *args)
{
- return 0;
+ struct xen_9pfs_front_priv *priv;
+
+ list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ if (!strcmp(priv->tag, addr)) {
+ priv->client = client;
+ return 0;
+ }
+ }
+ return -EINVAL;
}
static void p9_xen_close(struct p9_client *client)
{
+ struct xen_9pfs_front_priv *priv;
+
+ list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ if (priv->client == client) {
+ priv->client = NULL;
+ return;
+ }
+ }
+ return;
+}
+
+static int p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
+{
+ RING_IDX cons, prod;
+
+ cons = ring->intf->out_cons;
+ prod = ring->intf->out_prod;
+ virt_mb();
+
+ if (XEN_9PFS_RING_SIZE - xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) >= size)
+ return 1;
+ else
+ return 0;
}
static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
{
+ struct xen_9pfs_front_priv *priv = NULL;
+ RING_IDX cons, prod, masked_cons, masked_prod;
+ unsigned long flags;
+ uint32_t size = p9_req->tc->size;
+ struct xen_9pfs_dataring *ring;
+ int num;
+
+ list_for_each_entry(priv, &xen_9pfs_devs, list) {
+ if (priv->client == client)
+ break;
+ }
+ if (priv == NULL || priv->client != client)
+ return -EINVAL;
+
+ num = p9_req->tc->tag % priv->num_rings;
+ ring = &priv->rings[num];
+
+again:
+ while (wait_event_interruptible(ring->wq,
+ p9_xen_write_todo(ring, size) > 0) != 0);
+
+ spin_lock_irqsave(&ring->lock, flags);
+ cons = ring->intf->out_cons;
+ prod = ring->intf->out_prod;
+ virt_mb();
+
+ if (XEN_9PFS_RING_SIZE - xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < size) {
+ spin_unlock_irqrestore(&ring->lock, flags);
+ goto again;
+ }
+
+ masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
+ masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+
+ xen_9pfs_write_packet(ring->data.out,
+ &masked_prod, masked_cons,
+ XEN_9PFS_RING_SIZE, p9_req->tc->sdata, size);
+
+ p9_req->status = REQ_STATUS_SENT;
+ virt_wmb(); /* write ring before updating pointer */
+ prod += size;
+ ring->intf->out_prod = prod;
+ spin_unlock_irqrestore(&ring->lock, flags);
+ notify_remote_via_irq(ring->irq);
+
return 0;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
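The free-space check in p9_xen_write_todo() above relies on the ring indices being free-running counters that are only masked down to buffer offsets at access time. A minimal userspace model of that arithmetic (the macro semantics follow the flex-ring helpers imported in patch 1/7; the tiny ring size here is purely for illustration):

```c
#define XEN_9PFS_RING_SIZE 16	/* power of two; real rings are much larger */

typedef unsigned int RING_IDX;

/* Bytes currently queued: indices are free-running, so plain subtraction
 * works even after the counters wrap. */
static RING_IDX xen_9pfs_queued(RING_IDX prod, RING_IDX cons, RING_IDX size)
{
	return prod - cons;
}

/* Mask a free-running index down to an offset inside the ring buffer. */
static RING_IDX xen_9pfs_mask(RING_IDX idx, RING_IDX size)
{
	return idx & (size - 1);
}

/* The write-side test from p9_xen_write_todo(): is there room for a
 * request of `size` bytes between prod and cons? */
static int write_todo(RING_IDX prod, RING_IDX cons, RING_IDX size)
{
	return XEN_9PFS_RING_SIZE -
	       xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) >= size;
}
```

Because the check can race with the backend, p9_xen_request() repeats it under the ring lock after being woken, and goes back to sleep (the `goto again` path) if another writer consumed the space first.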
* [PATCH v3 6/7] xen/9pfs: receive responses
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
` (3 preceding siblings ...)
2017-03-13 23:50 ` [PATCH v3 5/7] xen/9pfs: send requests " Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-14 7:04 ` Juergen Gross
2017-03-13 23:50 ` [PATCH v3 7/7] xen/9pfs: build 9pfs Xen transport driver Stefano Stabellini
5 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
jgross, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
Upon receiving a notification from the backend, schedule the
p9_xen_response work_struct. p9_xen_response checks whether any responses are
available; if so, it reads them one by one, calling p9_client_cb to send
them up to the 9p layer (p9_client_cb completes the request). Handle the
ring following the Xen 9pfs specification.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: jgross@suse.com
CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
---
net/9p/trans_xen.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 55 insertions(+)
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index b40bbcb..1a7eb52 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -168,6 +168,61 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
static void p9_xen_response(struct work_struct *work)
{
+ struct xen_9pfs_front_priv *priv;
+ struct xen_9pfs_dataring *ring;
+ RING_IDX cons, prod, masked_cons, masked_prod;
+ struct xen_9pfs_header h;
+ struct p9_req_t *req;
+ int status;
+
+ ring = container_of(work, struct xen_9pfs_dataring, work);
+ priv = ring->priv;
+
+ while (1) {
+ cons = ring->intf->in_cons;
+ prod = ring->intf->in_prod;
+ virt_rmb();
+
+ if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
+ notify_remote_via_irq(ring->irq);
+ return;
+ }
+
+ masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
+ masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+
+ /* First, read just the header */
+ xen_9pfs_read_packet(ring->data.in,
+ masked_prod, &masked_cons,
+ XEN_9PFS_RING_SIZE, &h, sizeof(h));
+
+ req = p9_tag_lookup(priv->client, h.tag);
+ if (!req || req->status != REQ_STATUS_SENT) {
+ dev_warn(&priv->dev->dev, "Wrong req tag=%x\n", h.tag);
+ cons += h.size;
+ virt_mb();
+ ring->intf->in_cons = cons;
+ continue;
+ }
+
+ memcpy(req->rc, &h, sizeof(h));
+ req->rc->offset = 0;
+
+ masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+ /* Then, read the whole packet (including the header) */
+ xen_9pfs_read_packet(ring->data.in,
+ masked_prod, &masked_cons,
+ XEN_9PFS_RING_SIZE, req->rc->sdata, h.size);
+
+ virt_mb();
+ cons += h.size;
+ ring->intf->in_cons = cons;
+
+ status = (req->status != REQ_STATUS_ERROR) ?
+ REQ_STATUS_RCVD : REQ_STATUS_ERROR;
+
+ p9_client_cb(priv->client, req, status);
+ }
}
static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
--
1.9.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
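The two xen_9pfs_read_packet() calls above (header first, then the whole packet) both have to cope with data that wraps past the end of the circular buffer. A simplified userspace sketch of that copy, with the signature reduced relative to the real helper:

```c
#include <string.h>

#define RING_SIZE 8	/* tiny power-of-two ring, for illustration only */

/* Copy `len` bytes out of a circular buffer starting at *masked_cons,
 * splitting the memcpy in two when the read crosses the end of the
 * ring -- the same job xen_9pfs_read_packet() does for the response
 * header and body. */
static void ring_read(const unsigned char *ring, unsigned int *masked_cons,
		      unsigned char *buf, unsigned int len)
{
	if (*masked_cons + len > RING_SIZE) {
		unsigned int first = RING_SIZE - *masked_cons;

		memcpy(buf, ring + *masked_cons, first);
		memcpy(buf + first, ring, len - first);
	} else {
		memcpy(buf, ring + *masked_cons, len);
	}
	*masked_cons = (*masked_cons + len) & (RING_SIZE - 1);
}
```

Note how p9_xen_response() re-masks `cons` before the second read: the header read advanced the masked index, but the full-packet read must start again from the original consumer position.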
* Re: [PATCH v3 6/7] xen/9pfs: receive responses
2017-03-13 23:50 ` [PATCH v3 6/7] xen/9pfs: receive responses Stefano Stabellini
@ 2017-03-14 7:04 ` Juergen Gross
2017-03-14 20:32 ` Stefano Stabellini
0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-03-14 7:04 UTC (permalink / raw)
To: Stefano Stabellini, xen-devel
Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 14/03/17 00:50, Stefano Stabellini wrote:
> Upon receiving a notification from the backend, schedule the
> p9_xen_response work_struct. p9_xen_response checks if any responses are
> available, if so, it reads them one by one, calling p9_client_cb to send
> them up to the 9p layer (p9_client_cb completes the request). Handle the
> ring following the Xen 9pfs specification.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: jgross@suse.com
> CC: Eric Van Hensbergen <ericvh@gmail.com>
> CC: Ron Minnich <rminnich@sandia.gov>
> CC: Latchesar Ionkov <lucho@ionkov.net>
> CC: v9fs-developer@lists.sourceforge.net
> ---
> net/9p/trans_xen.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 55 insertions(+)
>
> diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> index b40bbcb..1a7eb52 100644
> --- a/net/9p/trans_xen.c
> +++ b/net/9p/trans_xen.c
> @@ -168,6 +168,61 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
>
> static void p9_xen_response(struct work_struct *work)
> {
> + struct xen_9pfs_front_priv *priv;
> + struct xen_9pfs_dataring *ring;
> + RING_IDX cons, prod, masked_cons, masked_prod;
> + struct xen_9pfs_header h;
> + struct p9_req_t *req;
> + int status;
> +
> + ring = container_of(work, struct xen_9pfs_dataring, work);
> + priv = ring->priv;
> +
> + while (1) {
> + cons = ring->intf->in_cons;
> + prod = ring->intf->in_prod;
> + virt_rmb();
> +
> + if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> + notify_remote_via_irq(ring->irq);
> + return;
> + }
> +
> + masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> + masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> +
> + /* First, read just the header */
> + xen_9pfs_read_packet(ring->data.in,
> + masked_prod, &masked_cons,
> + XEN_9PFS_RING_SIZE, &h, sizeof(h));
> +
> + req = p9_tag_lookup(priv->client, h.tag);
> + if (!req || req->status != REQ_STATUS_SENT) {
> + dev_warn(&priv->dev->dev, "Wrong req tag=%x\n", h.tag);
> + cons += h.size;
> + virt_mb();
> + ring->intf->in_cons = cons;
> + continue;
> + }
> +
> + memcpy(req->rc, &h, sizeof(h));
> + req->rc->offset = 0;
> +
> + masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> + /* Then, read the whole packet (including the header) */
> + xen_9pfs_read_packet(ring->data.in,
> + masked_prod, &masked_cons,
> + XEN_9PFS_RING_SIZE, req->rc->sdata, h.size);
Please align the parameters to the same column.
> +
> + virt_mb();
> + cons += h.size;
> + ring->intf->in_cons = cons;
> +
> + status = (req->status != REQ_STATUS_ERROR) ?
> + REQ_STATUS_RCVD : REQ_STATUS_ERROR;
> +
> + p9_client_cb(priv->client, req, status);
> + }
> }
>
> static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
>
Juergen
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v3 6/7] xen/9pfs: receive responses
2017-03-14 7:04 ` Juergen Gross
@ 2017-03-14 20:32 ` Stefano Stabellini
0 siblings, 0 replies; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-14 20:32 UTC (permalink / raw)
To: Juergen Gross
Cc: Stefano Stabellini, xen-devel, linux-kernel, boris.ostrovsky,
Stefano Stabellini, Eric Van Hensbergen, Ron Minnich,
Latchesar Ionkov, v9fs-developer
On Tue, 14 Mar 2017, Juergen Gross wrote:
> On 14/03/17 00:50, Stefano Stabellini wrote:
> > Upon receiving a notification from the backend, schedule the
> > p9_xen_response work_struct. p9_xen_response checks if any responses are
> > available, if so, it reads them one by one, calling p9_client_cb to send
> > them up to the 9p layer (p9_client_cb completes the request). Handle the
> > ring following the Xen 9pfs specification.
> >
> > Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > CC: jgross@suse.com
> > CC: Eric Van Hensbergen <ericvh@gmail.com>
> > CC: Ron Minnich <rminnich@sandia.gov>
> > CC: Latchesar Ionkov <lucho@ionkov.net>
> > CC: v9fs-developer@lists.sourceforge.net
> > ---
> > net/9p/trans_xen.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 55 insertions(+)
> >
> > diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> > index b40bbcb..1a7eb52 100644
> > --- a/net/9p/trans_xen.c
> > +++ b/net/9p/trans_xen.c
> > @@ -168,6 +168,61 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
> >
> > static void p9_xen_response(struct work_struct *work)
> > {
> > + struct xen_9pfs_front_priv *priv;
> > + struct xen_9pfs_dataring *ring;
> > + RING_IDX cons, prod, masked_cons, masked_prod;
> > + struct xen_9pfs_header h;
> > + struct p9_req_t *req;
> > + int status;
> > +
> > + ring = container_of(work, struct xen_9pfs_dataring, work);
> > + priv = ring->priv;
> > +
> > + while (1) {
> > + cons = ring->intf->in_cons;
> > + prod = ring->intf->in_prod;
> > + virt_rmb();
> > +
> > + if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> > + notify_remote_via_irq(ring->irq);
> > + return;
> > + }
> > +
> > + masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> > + masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > +
> > + /* First, read just the header */
> > + xen_9pfs_read_packet(ring->data.in,
> > + masked_prod, &masked_cons,
> > + XEN_9PFS_RING_SIZE, &h, sizeof(h));
> > +
> > + req = p9_tag_lookup(priv->client, h.tag);
> > + if (!req || req->status != REQ_STATUS_SENT) {
> > + dev_warn(&priv->dev->dev, "Wrong req tag=%x\n", h.tag);
> > + cons += h.size;
> > + virt_mb();
> > + ring->intf->in_cons = cons;
> > + continue;
> > + }
> > +
> > + memcpy(req->rc, &h, sizeof(h));
> > + req->rc->offset = 0;
> > +
> > + masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> > + /* Then, read the whole packet (including the header) */
> > + xen_9pfs_read_packet(ring->data.in,
> > + masked_prod, &masked_cons,
> > + XEN_9PFS_RING_SIZE, req->rc->sdata, h.size);
>
> Please align the parameters to the same column.
I am one of those heretics who use tabstop=4 :-)
I'll fix this though.
> > +
> > + virt_mb();
> > + cons += h.size;
> > + ring->intf->in_cons = cons;
> > +
> > + status = (req->status != REQ_STATUS_ERROR) ?
> > + REQ_STATUS_RCVD : REQ_STATUS_ERROR;
> > +
> > + p9_client_cb(priv->client, req, status);
> > + }
> > }
> >
> > static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
> >
>
>
> Juergen
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v3 7/7] xen/9pfs: build 9pfs Xen transport driver
2017-03-13 23:50 ` [PATCH v3 1/7] xen: import new ring macros in ring.h Stefano Stabellini
` (4 preceding siblings ...)
2017-03-13 23:50 ` [PATCH v3 6/7] xen/9pfs: receive responses Stefano Stabellini
@ 2017-03-13 23:50 ` Stefano Stabellini
2017-03-14 7:05 ` Juergen Gross
5 siblings, 1 reply; 24+ messages in thread
From: Stefano Stabellini @ 2017-03-13 23:50 UTC (permalink / raw)
To: xen-devel
Cc: linux-kernel, sstabellini, boris.ostrovsky, Stefano Stabellini,
jgross, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
This patch adds a Kconfig option and Makefile support for building the
9pfs Xen driver.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: Eric Van Hensbergen <ericvh@gmail.com>
CC: Ron Minnich <rminnich@sandia.gov>
CC: Latchesar Ionkov <lucho@ionkov.net>
CC: v9fs-developer@lists.sourceforge.net
---
net/9p/Kconfig | 8 ++++++++
net/9p/Makefile | 4 ++++
2 files changed, 12 insertions(+)
diff --git a/net/9p/Kconfig b/net/9p/Kconfig
index a75174a..5c5649b 100644
--- a/net/9p/Kconfig
+++ b/net/9p/Kconfig
@@ -22,6 +22,14 @@ config NET_9P_VIRTIO
This builds support for a transports between
guest partitions and a host partition.
+config NET_9P_XEN
+ depends on XEN
+ tristate "9P Xen Transport"
+ help
+ This builds support for a transport between
+ two Xen domains.
+
+
config NET_9P_RDMA
depends on INET && INFINIBAND && INFINIBAND_ADDR_TRANS
tristate "9P RDMA Transport (Experimental)"
diff --git a/net/9p/Makefile b/net/9p/Makefile
index a0874cc..697ea7c 100644
--- a/net/9p/Makefile
+++ b/net/9p/Makefile
@@ -1,4 +1,5 @@
obj-$(CONFIG_NET_9P) := 9pnet.o
+obj-$(CONFIG_NET_9P_XEN) += 9pnet_xen.o
obj-$(CONFIG_NET_9P_VIRTIO) += 9pnet_virtio.o
obj-$(CONFIG_NET_9P_RDMA) += 9pnet_rdma.o
@@ -14,5 +15,8 @@ obj-$(CONFIG_NET_9P_RDMA) += 9pnet_rdma.o
9pnet_virtio-objs := \
trans_virtio.o \
+9pnet_xen-objs := \
+ trans_xen.o \
+
9pnet_rdma-objs := \
trans_rdma.o \
--
1.9.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v3 7/7] xen/9pfs: build 9pfs Xen transport driver
2017-03-13 23:50 ` [PATCH v3 7/7] xen/9pfs: build 9pfs Xen transport driver Stefano Stabellini
@ 2017-03-14 7:05 ` Juergen Gross
0 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2017-03-14 7:05 UTC (permalink / raw)
To: Stefano Stabellini, xen-devel
Cc: linux-kernel, boris.ostrovsky, Stefano Stabellini,
Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov,
v9fs-developer
On 14/03/17 00:50, Stefano Stabellini wrote:
> This patch adds a Kconfig option and Makefile support for building the
> 9pfs Xen driver.
>
> Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> CC: Eric Van Hensbergen <ericvh@gmail.com>
> CC: Ron Minnich <rminnich@sandia.gov>
> CC: Latchesar Ionkov <lucho@ionkov.net>
> CC: v9fs-developer@lists.sourceforge.net
Reviewed-by: Juergen Gross <jgross@suse.com>
Juergen
^ permalink raw reply [flat|nested] 24+ messages in thread