From: Julien Grall
Subject: Re: [PATCH v3 0/2] block/xen-blkfront: Support non-indirect grant with 64KB page granularity
Date: Tue, 8 Dec 2015 12:25:13 +0000
Message-ID: <5666CC29.5070209@citrix.com>
In-Reply-To: <20151201185245.GB27063@char.us.oracle.com>
References: <1447873045-14663-1-git-send-email-julien.grall@citrix.com>
 <20151201153751.GE19885@char.us.oracle.com>
 <565DDF24.4000104@citrix.com>
 <20151201185245.GB27063@char.us.oracle.com>
To: Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xenproject.org, Boris Ostrovsky, David Vrabel,
 linux-kernel@vger.kernel.org, Roger Pau Monné

Hi Konrad,

The rebase of my patch is not correct: it now contains an unused variable
and is missing one change. I will post a rebase of the two patches.

On 01/12/15 18:52, Konrad Rzeszutek Wilk wrote:
> +static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
> +					     struct request *req,
> +					     struct blkif_request **ring_req)
> +{
> +	unsigned long id;
> +	struct blkfront_info *info = rinfo->dev_info;

This variable is unused within the function (see the sketch at the end of
this mail).

> +
> +	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
> +	rinfo->ring.req_prod_pvt++;
> +
> +	id = get_id_from_freelist(rinfo);
> +	rinfo->shadow[id].request = req;
> +
> +	(*ring_req)->u.rw.id = id;
> +
> +	return id;
> +}
> +
>  static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
>  {
>  	struct blkfront_info *info = rinfo->dev_info;
> @@ -488,9 +506,7 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
>  	unsigned long id;
>
>  	/* Fill out a communications ring structure. */
> -	ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
> -	id = get_id_from_freelist(rinfo);
> -	rinfo->shadow[id].request = req;
> +	id = blkif_ring_get_request(rinfo, req, &ring_req);
>
>  	ring_req->operation = BLKIF_OP_DISCARD;
>  	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
> @@ -501,8 +517,6 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
>  	else
>  		ring_req->u.discard.flag = 0;
>
> -	rinfo->ring.req_prod_pvt++;
> -
>  	/* Keep a private copy so we can reissue requests when recovering. */
>  	rinfo->shadow[id].req = *ring_req;
>
> @@ -635,9 +649,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	}
>
>  	/* Fill out a communications ring structure. */
> -	ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
> -	id = get_id_from_freelist(rinfo);
> -	rinfo->shadow[id].request = req;
> +	id = blkif_ring_get_request(rinfo, req, &ring_req);
>
>  	BUG_ON(info->max_indirect_segments == 0 &&
>  	       GREFS(req->nr_phys_segments) > BLKIF_MAX_SEGMENTS_PER_REQUEST);
> @@ -650,7 +661,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	for_each_sg(rinfo->shadow[id].sg, sg, num_sg, i)
>  		num_grant += gnttab_count_grant(sg->offset, sg->length);
>
> -	ring_req->u.rw.id = id;
>  	rinfo->shadow[id].num_sg = num_sg;
>  	if (num_grant > BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>  		/*
> @@ -716,8 +728,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	if (setup.segments)
>  		kunmap_atomic(setup.segments);
>
> -	rinfo->ring.req_prod_pvt++;
> -
>  	/* Keep a private copy so we can reissue requests when recovering. */
>  	rinfo->shadow[id].req = *ring_req;

Regards,

-- 
Julien Grall
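
For reference, here is a minimal sketch of what blkif_ring_get_request()
could look like once the unused "info" local is dropped; everything else
is taken from the hunk quoted above, so the actual respin may differ.

/*
 * Sketch only: the helper from the quoted hunk, minus the unused
 * "struct blkfront_info *info" local.
 */
static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
					    struct request *req,
					    struct blkif_request **ring_req)
{
	unsigned long id;

	/* Reserve the next free slot on the shared ring. */
	*ring_req = RING_GET_REQUEST(&rinfo->ring, rinfo->ring.req_prod_pvt);
	rinfo->ring.req_prod_pvt++;

	/* Pick a free shadow entry and remember which request it backs. */
	id = get_id_from_freelist(rinfo);
	rinfo->shadow[id].request = req;

	(*ring_req)->u.rw.id = id;

	return id;
}

The point of the helper is that the discard and read/write paths both
reserve the ring slot, advance req_prod_pvt and allocate the shadow id in
a single place, instead of open-coding the same three steps in each caller.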