From: Juergen Gross
To: Stefano Stabellini, xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, Stefano Stabellini, Eric Van Hensbergen, Ron Minnich, Latchesar Ionkov, v9fs-developer@lists.sourceforge.net
Subject: Re: [PATCH v3 6/7] xen/9pfs: receive responses
Date: Tue, 14 Mar 2017 08:04:06 +0100
Message-ID: <5a0e3642-5eb4-e192-7241-39cf78a89fbd@suse.com>
In-Reply-To: <1489449019-13343-6-git-send-email-sstabellini@kernel.org>
References: <1489449019-13343-1-git-send-email-sstabellini@kernel.org> <1489449019-13343-6-git-send-email-sstabellini@kernel.org>

On 14/03/17 00:50, Stefano Stabellini wrote:
> Upon receiving a notification from the backend, schedule the
> p9_xen_response work_struct. p9_xen_response checks whether any
> responses are available; if so, it reads them one by one, calling
> p9_client_cb to send them up to the 9p layer (p9_client_cb completes
> the request). Handle the ring following the Xen 9pfs specification.
>
> Signed-off-by: Stefano Stabellini
> Reviewed-by: Boris Ostrovsky
> CC: jgross@suse.com
> CC: Eric Van Hensbergen
> CC: Ron Minnich
> CC: Latchesar Ionkov
> CC: v9fs-developer@lists.sourceforge.net
> ---
>  net/9p/trans_xen.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 55 insertions(+)
>
> diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
> index b40bbcb..1a7eb52 100644
> --- a/net/9p/trans_xen.c
> +++ b/net/9p/trans_xen.c
> @@ -168,6 +168,61 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
>
>  static void p9_xen_response(struct work_struct *work)
>  {
> +	struct xen_9pfs_front_priv *priv;
> +	struct xen_9pfs_dataring *ring;
> +	RING_IDX cons, prod, masked_cons, masked_prod;
> +	struct xen_9pfs_header h;
> +	struct p9_req_t *req;
> +	int status;
> +
> +	ring = container_of(work, struct xen_9pfs_dataring, work);
> +	priv = ring->priv;
> +
> +	while (1) {
> +		cons = ring->intf->in_cons;
> +		prod = ring->intf->in_prod;
> +		virt_rmb();
> +
> +		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) < sizeof(h)) {
> +			notify_remote_via_irq(ring->irq);
> +			return;
> +		}
> +
> +		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
> +		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> +
> +		/* First, read just the header */
> +		xen_9pfs_read_packet(ring->data.in,
> +				masked_prod, &masked_cons,
> +				XEN_9PFS_RING_SIZE, &h, sizeof(h));
> +
> +		req = p9_tag_lookup(priv->client, h.tag);
> +		if (!req || req->status != REQ_STATUS_SENT) {
> +			dev_warn(&priv->dev->dev, "Wrong req tag=%x\n", h.tag);
> +			cons += h.size;
> +			virt_mb();
> +			ring->intf->in_cons = cons;
> +			continue;
> +		}
> +
> +		memcpy(req->rc, &h, sizeof(h));
> +		req->rc->offset = 0;
> +
> +		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
> +		/* Then, read the whole packet (including the header) */
> +		xen_9pfs_read_packet(ring->data.in,
> +				masked_prod, &masked_cons,
> +				XEN_9PFS_RING_SIZE, req->rc->sdata, h.size);

Please align the parameters to the same column.

> +
> +		virt_mb();
> +		cons += h.size;
> +		ring->intf->in_cons = cons;
> +
> +		status = (req->status != REQ_STATUS_ERROR) ?
> +			REQ_STATUS_RCVD : REQ_STATUS_ERROR;
> +
> +		p9_client_cb(priv->client, req, status);
> +	}
> }
>
> static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
>

Juergen