From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jeremy Fitzhardinge
Subject: Re: 2.6.28.7 domU crashes
Date: Wed, 11 Mar 2009 17:29:50 -0700
Message-ID: <49B8577E.40705@goop.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Sender: xen-devel-bounces@lists.xensource.com
Errors-To: xen-devel-bounces@lists.xensource.com
To: Sven Köhler
Cc: xen-devel@lists.xensource.com, Stable Kernel, Jens Axboe
List-Id: xen-devel@lists.xenproject.org

Sven Köhler wrote:
> Hi,
>
> below is the output of the kernel just before it goes down and
> crashes. I have to use "xm destroy" to kill it.
>
> I have tried enabling and disabling the "Optimize for Size" setting in
> the kernel. I hoped that it's some kind of compiler bug, because of the
> "invalid opcode" thing. But actually I don't have a clue what this
> might be caused by.
>
> Do you have any idea?
>
> The crash is very reproducible.

Looks like the crash is fixed by the change below
(c7241227f61ca6606a7fa3555391360d92bd8d9b in linux-2.6.git).

    J

From c7241227f61ca6606a7fa3555391360d92bd8d9b Mon Sep 17 00:00:00 2001
From: Jens Axboe
Date: Mon, 16 Feb 2009 13:18:28 -0800
Subject: [PATCH] xen/blkfront: use blk_rq_map_sg to generate ring entries

On occasion, the request will apparently have more segments than we
fit into the ring. Jens says:

> The second problem is that the block layer then appears to create one
> too many segments, but from the dump it has rq->nr_phys_segments ==
> BLKIF_MAX_SEGMENTS_PER_REQUEST. I suspect the latter is due to
> xen-blkfront not handling the merging on its own. It should check that
> the new page doesn't form part of the previous page. The
> rq_for_each_segment() iterates all single bits in the request, not dma
> segments.
> The "easiest" way to do this is to call blk_rq_map_sg() and
> then iterate the mapped sg list. That will give you what you are
> looking for.
>
> Here's a test patch, compiles but otherwise untested. I spent more
> time figuring out how to enable XEN than to code it up, so YMMV!
>
> Probably the sg list wants to be put inside the ring and only
> initialized on allocation, then you can get rid of the sg on stack and
> sg_init_table() loop call in the function. I'll leave that, and the
> testing, to you.

[Moved sg array into info structure, and initialize once. -J]

Signed-off-by: Jens Axboe
Signed-off-by: Jeremy Fitzhardinge

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 918ef72..b6c8ce2 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -82,6 +83,7 @@ struct blkfront_info
 	enum blkif_state connected;
 	int ring_ref;
 	struct blkif_front_ring ring;
+	struct scatterlist sg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	unsigned int evtchn, irq;
 	struct request_queue *rq;
 	struct work_struct work;
@@ -204,12 +206,11 @@ static int blkif_queue_request(struct request *req)
 	struct blkfront_info *info = req->rq_disk->private_data;
 	unsigned long buffer_mfn;
 	struct blkif_request *ring_req;
-	struct req_iterator iter;
-	struct bio_vec *bvec;
 	unsigned long id;
 	unsigned int fsect, lsect;
-	int ref;
+	int i, ref;
 	grant_ref_t gref_head;
+	struct scatterlist *sg;

 	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
 		return 1;
@@ -238,12 +239,13 @@ static int blkif_queue_request(struct request *req)
 	if (blk_barrier_rq(req))
 		ring_req->operation = BLKIF_OP_WRITE_BARRIER;

-	ring_req->nr_segments = 0;
-	rq_for_each_segment(bvec, req, iter) {
-		BUG_ON(ring_req->nr_segments == BLKIF_MAX_SEGMENTS_PER_REQUEST);
-		buffer_mfn = pfn_to_mfn(page_to_pfn(bvec->bv_page));
-		fsect = bvec->bv_offset >> 9;
-		lsect = fsect + (bvec->bv_len >> 9) - 1;
+	ring_req->nr_segments = blk_rq_map_sg(req->q, req, info->sg);
+	BUG_ON(ring_req->nr_segments > BLKIF_MAX_SEGMENTS_PER_REQUEST);
+
+	for_each_sg(info->sg, sg, ring_req->nr_segments, i) {
+		buffer_mfn = pfn_to_mfn(page_to_pfn(sg_page(sg)));
+		fsect = sg->offset >> 9;
+		lsect = fsect + (sg->length >> 9) - 1;
 		/* install a grant reference. */
 		ref = gnttab_claim_grant_reference(&gref_head);
 		BUG_ON(ref == -ENOSPC);
@@ -254,16 +256,12 @@ static int blkif_queue_request(struct request *req)
 					buffer_mfn,
 					rq_data_dir(req) );

-		info->shadow[id].frame[ring_req->nr_segments] =
-			mfn_to_pfn(buffer_mfn);
-
-		ring_req->seg[ring_req->nr_segments] =
+		info->shadow[id].frame[i] = mfn_to_pfn(buffer_mfn);
+		ring_req->seg[i] =
 				(struct blkif_request_segment) {
 					.gref       = ref,
 					.first_sect = fsect,
 					.last_sect  = lsect };
-
-		ring_req->nr_segments++;
 	}

 	info->ring.req_prod_pvt++;
@@ -622,6 +620,8 @@ static int setup_blkring(struct xenbus_device *dev,
 	SHARED_RING_INIT(sring);
 	FRONT_RING_INIT(&info->ring, sring, PAGE_SIZE);

+	sg_init_table(info->sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+
 	err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
 	if (err < 0) {
 		free_page((unsigned long)sring);