From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xen.org
Cc: linux-kernel@vger.kernel.org, sstabellini@kernel.org, jgross@suse.com, boris.ostrovsky@oracle.com, Stefano Stabellini <stefano@aporeto.com>
Subject: [PATCH v5 11/18] xen/pvcalls: implement accept command
Date: Thu, 22 Jun 2017 12:14:20 -0700
Message-ID: <1498158867-25426-11-git-send-email-sstabellini@kernel.org>
In-Reply-To: <1498158867-25426-1-git-send-email-sstabellini@kernel.org>

Implement the accept command by calling inet_accept. To avoid blocking
in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which
gets scheduled on sk_data_ready (for a passive socket, it means that
there are connections to accept).

Use the reqcopy field to store the request. Accept the new socket from
the delayed work function, create a new sock_mapping for it, map the
indexes page and data ring, and reply to the other end. Allocate an
ioworker for the socket.

Only support one outstanding blocking accept request per socket at any
time.

Add a field to sock_mapping to remember the passive socket from which
an active socket was created.

Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
---
 drivers/xen/pvcalls-back.c | 113 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 113 insertions(+)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index 2a47425..62738e4 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -64,6 +64,7 @@ struct pvcalls_ioworker {
 struct sock_mapping {
 	struct list_head list;
 	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *sockpass;
 	struct socket *sock;
 	uint64_t id;
 	grant_ref_t ref;
@@ -279,10 +280,83 @@ static int pvcalls_back_release(struct xenbus_device *dev,
 
 static void __pvcalls_back_accept(struct work_struct *work)
 {
+	struct sockpass_mapping *mappass = container_of(
+		work, struct sockpass_mapping, register_work);
+	struct sock_mapping *map;
+	struct pvcalls_ioworker *iow;
+	struct pvcalls_fedata *fedata;
+	struct socket *sock;
+	struct xen_pvcalls_response *rsp;
+	struct xen_pvcalls_request *req;
+	int notify;
+	int ret = -EINVAL;
+	unsigned long flags;
+
+	fedata = mappass->fedata;
+	/*
+	 * __pvcalls_back_accept can race against pvcalls_back_accept.
+	 * We only need to check the value of "cmd" on read. It could be
+	 * done atomically, but to simplify the code on the write side, we
+	 * use a spinlock.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	req = &mappass->reqcopy;
+	if (req->cmd != PVCALLS_ACCEPT) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+
+	sock = sock_alloc();
+	if (sock == NULL)
+		goto out_error;
+	sock->type = mappass->sock->type;
+	sock->ops = mappass->sock->ops;
+
+	ret = inet_accept(mappass->sock, sock, O_NONBLOCK, true);
+	if (ret == -EAGAIN) {
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map = pvcalls_new_active_socket(fedata,
+					req->u.accept.id_new,
+					req->u.accept.ref,
+					req->u.accept.evtchn,
+					sock);
+	if (!map) {
+		ret = -EFAULT;
+		sock_release(sock);
+		goto out_error;
+	}
+
+	map->sockpass = mappass;
+	iow = &map->ioworker;
+	atomic_inc(&map->read);
+	atomic_inc(&map->io);
+	queue_work(iow->wq, &iow->register_work);
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&fedata->ring, notify);
+	if (notify)
+		notify_remote_via_irq(fedata->irq);
+
+	mappass->reqcopy.cmd = 0;
 }
 
 static void pvcalls_pass_sk_data_ready(struct sock *sock)
 {
+	struct sockpass_mapping *mappass = sock->sk_user_data;
+
+	if (mappass == NULL)
+		return;
+
+	queue_work(mappass->wq, &mappass->register_work);
 }
 
 static int pvcalls_back_bind(struct xenbus_device *dev,
@@ -388,6 +462,45 @@ static int pvcalls_back_listen(struct xenbus_device *dev,
 static int pvcalls_back_accept(struct xenbus_device *dev,
 			       struct xen_pvcalls_request *req)
 {
+	struct pvcalls_fedata *fedata;
+	struct sockpass_mapping *mappass;
+	int ret = -EINVAL;
+	struct xen_pvcalls_response *rsp;
+	unsigned long flags;
+
+	fedata = dev_get_drvdata(&dev->dev);
+
+	down(&fedata->socket_lock);
+	mappass = radix_tree_lookup(&fedata->socketpass_mappings,
+		req->u.accept.id);
+	up(&fedata->socket_lock);
+	if (mappass == NULL)
+		goto out_error;
+
+	/*
+	 * Limitation of the current implementation: only support one
+	 * concurrent accept or poll call on one socket.
+	 */
+	spin_lock_irqsave(&mappass->copy_lock, flags);
+	if (mappass->reqcopy.cmd != 0) {
+		spin_unlock_irqrestore(&mappass->copy_lock, flags);
+		ret = -EINTR;
+		goto out_error;
+	}
+
+	mappass->reqcopy = *req;
+	spin_unlock_irqrestore(&mappass->copy_lock, flags);
+	queue_work(mappass->wq, &mappass->register_work);
+
+	/* Tell the caller we don't need to send back a notification yet */
+	return -1;
+
+out_error:
+	rsp = RING_GET_RESPONSE(&fedata->ring, fedata->ring.rsp_prod_pvt++);
+	rsp->req_id = req->req_id;
+	rsp->cmd = req->cmd;
+	rsp->u.accept.id = req->u.accept.id;
+	rsp->ret = ret;
 	return 0;
 }
 
-- 
1.9.1
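
The deferred-accept machinery above boils down to a small handoff
pattern: pvcalls_back_accept() copies the request into a per-socket
slot under a spinlock, refusing a second request while one is pending;
__pvcalls_back_accept() consumes the copy, and clearing "cmd" at the
end reopens the slot. Below is a minimal standalone userspace sketch
of that pattern using pthreads; all names here (struct reqslot,
submit_accept, worker) are invented for illustration and are not part
of the kernel code. The kernel version differs mainly in using
spin_lock_irqsave() and a workqueue rather than a mutex and a direct
call. Compile with "cc -pthread".

#include <pthread.h>
#include <stdio.h>

#define CMD_NONE   0
#define CMD_ACCEPT 1

struct request {
	int cmd;
	int id;
};

struct reqslot {
	pthread_mutex_t lock;
	struct request reqcopy;    /* cmd == CMD_NONE means the slot is free */
};

/* Like pvcalls_back_accept(): stash a copy of the request, or fail if busy. */
static int submit_accept(struct reqslot *s, const struct request *req)
{
	int ret = 0;

	pthread_mutex_lock(&s->lock);
	if (s->reqcopy.cmd != CMD_NONE)
		ret = -1;          /* only one outstanding accept per socket */
	else
		s->reqcopy = *req; /* copy, so the original slot can be reused */
	pthread_mutex_unlock(&s->lock);
	return ret;
}

/* Like __pvcalls_back_accept(): only the "cmd" check needs the lock. */
static void worker(struct reqslot *s)
{
	struct request *req = &s->reqcopy;

	pthread_mutex_lock(&s->lock);
	if (req->cmd != CMD_ACCEPT) {
		pthread_mutex_unlock(&s->lock);
		return;            /* spurious wakeup: nothing pending */
	}
	pthread_mutex_unlock(&s->lock);

	/*
	 * Safe to read the copy unlocked: the submitter will not
	 * overwrite it while cmd != CMD_NONE.
	 */
	printf("accepting for request id %d\n", req->id);

	s->reqcopy.cmd = CMD_NONE; /* reopen the slot, like reqcopy.cmd = 0 */
}

int main(void)
{
	struct reqslot s = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct request r = { .cmd = CMD_ACCEPT, .id = 42 };

	if (submit_accept(&s, &r) == 0)
		worker(&s);
	return 0;
}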