* [PATCH] rbd: fix the memory leak of bio_chain_clone
@ 2012-06-25 7:37 Guangliang Zhao
2012-07-02 18:39 ` Gregory Farnum
2012-07-02 22:48 ` Yehuda Sadeh
0 siblings, 2 replies; 8+ messages in thread
From: Guangliang Zhao @ 2012-06-25 7:37 UTC (permalink / raw)
To: ceph-devel; +Cc: gzhao
Signed-off-by: Guangliang Zhao <gzhao@suse.com>
---
drivers/block/rbd.c | 10 ++++------
1 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 65665c9..3d6dfc8 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -719,8 +719,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
goto err_out;
if (total + old_chain->bi_size > len) {
- struct bio_pair *bp;
-
/*
* this split can only happen with a single paged bio,
* split_bio will BUG_ON if this is not the case
@@ -732,13 +730,13 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
/* split the bio. We'll release it either in the next
call, or it will have to be released outside */
- bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
- if (!bp)
+ *bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
+ if (!*bp)
goto err_out;
- __bio_clone(tmp, &bp->bio1);
+ __bio_clone(tmp, &(*bp)->bio1);
- *next = &bp->bio2;
+ *next = &(*bp)->bio2;
} else {
__bio_clone(tmp, old_chain);
*next = old_chain->bi_next;
--
1.7.3.4
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-06-25 7:37 [PATCH] rbd: fix the memory leak of bio_chain_clone Guangliang Zhao
@ 2012-07-02 18:39 ` Gregory Farnum
2012-07-02 22:48 ` Yehuda Sadeh
1 sibling, 0 replies; 8+ messages in thread
From: Gregory Farnum @ 2012-07-02 18:39 UTC (permalink / raw)
To: Alex Elder, Sage Weil; +Cc: ceph-devel
Hey guys,
I'm checking through the archives and I didn't see any replies to this
patch, nor does it appear in our tree as far as I can see. We should
probably let him know what we're doing with it. :)
-Greg
On Mon, Jun 25, 2012 at 12:37 AM, Guangliang Zhao <gzhao@suse.com> wrote:
> Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> ---
> drivers/block/rbd.c | 10 ++++------
> 1 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 65665c9..3d6dfc8 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -719,8 +719,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> goto err_out;
>
> if (total + old_chain->bi_size > len) {
> - struct bio_pair *bp;
> -
> /*
> * this split can only happen with a single paged bio,
> * split_bio will BUG_ON if this is not the case
> @@ -732,13 +730,13 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
>
> /* split the bio. We'll release it either in the next
> call, or it will have to be released outside */
> - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> - if (!bp)
> + *bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> + if (!*bp)
> goto err_out;
>
> - __bio_clone(tmp, &bp->bio1);
> + __bio_clone(tmp, &(*bp)->bio1);
>
> - *next = &bp->bio2;
> + *next = &(*bp)->bio2;
> } else {
> __bio_clone(tmp, old_chain);
> *next = old_chain->bi_next;
> --
> 1.7.3.4
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-06-25 7:37 [PATCH] rbd: fix the memory leak of bio_chain_clone Guangliang Zhao
2012-07-02 18:39 ` Gregory Farnum
@ 2012-07-02 22:48 ` Yehuda Sadeh
2012-07-03 15:31 ` Guangliang Zhao
2012-07-04 3:09 ` Guangliang Zhao
1 sibling, 2 replies; 8+ messages in thread
From: Yehuda Sadeh @ 2012-07-02 22:48 UTC (permalink / raw)
To: Guangliang Zhao; +Cc: ceph-devel, Alex Elder
On Mon, Jun 25, 2012 at 12:37 AM, Guangliang Zhao <gzhao@suse.com> wrote:
> Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> ---
> drivers/block/rbd.c | 10 ++++------
> 1 files changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 65665c9..3d6dfc8 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -719,8 +719,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> goto err_out;
>
> if (total + old_chain->bi_size > len) {
> - struct bio_pair *bp;
> -
> /*
> * this split can only happen with a single paged bio,
> * split_bio will BUG_ON if this is not the case
> @@ -732,13 +730,13 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
>
> /* split the bio. We'll release it either in the next
> call, or it will have to be released outside */
> - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> - if (!bp)
> + *bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> + if (!*bp)
> goto err_out;
>
> - __bio_clone(tmp, &bp->bio1);
> + __bio_clone(tmp, &(*bp)->bio1);
>
> - *next = &bp->bio2;
> + *next = &(*bp)->bio2;
> } else {
> __bio_clone(tmp, old_chain);
> *next = old_chain->bi_next;
The above patch looks right; that is, it fixes a leak. However, I'm not
too sure that we're not missing some bigger problem that was hidden
(due to the leak). Effectively we never called bio_pair_release() on
the bio_pair that we created in bio_chain_clone(). However, I think we
should only call that after we're done with sending the data. So we
should keep the bio_pair with the request and clean it up when we're
done with the request. Something along the lines of:
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 013c7a5..7bf3e36 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -131,6 +131,7 @@ struct rbd_req_coll {
*/
struct rbd_request {
struct request *rq; /* blk layer request */
+ struct bio_pair *bp;
struct bio *bio; /* cloned bio */
struct page **pages; /* list of used pages */
u64 len;
@@ -867,6 +868,7 @@ static int rbd_do_request(struct request *rq,
struct ceph_snap_context *snapc,
u64 snapid,
const char *obj, u64 ofs, u64 len,
+ struct bio_pair *bp,
struct bio *bio,
struct page **pages,
int num_pages,
@@ -918,6 +920,7 @@ static int rbd_do_request(struct request *rq,
req->r_callback = rbd_cb;
req_data->rq = rq;
+ req_data->bp = bp;
req_data->bio = bio;
req_data->pages = pages;
req_data->len = len;
@@ -968,6 +971,7 @@ static int rbd_do_request(struct request *rq,
done_err:
bio_chain_put(req_data->bio);
+ bio_pair_release(req_data->bp);
ceph_osdc_put_request(req);
done_pages:
rbd_coll_end_req(req_data, ret, len);
@@ -1010,6 +1014,9 @@ static void rbd_req_cb(struct ceph_osd_request *req, struct ceph_msg *msg)
if (req_data->bio)
bio_chain_put(req_data->bio);
+ if (req_data->bp)
+ bio_pair_release(req_data->bp);
+
ceph_osdc_put_request(req);
kfree(req_data);
}
@@ -1060,7 +1067,7 @@ static int rbd_req_sync_op(struct rbd_device *dev,
}
ret = rbd_do_request(NULL, dev, snapc, snapid,
- obj, ofs, len, NULL,
+ obj, ofs, len, NULL, NULL,
pages, num_pages,
flags,
ops,
@@ -1091,6 +1098,7 @@ static int rbd_do_op(struct request *rq,
u64 snapid,
int opcode, int flags, int num_reply,
u64 ofs, u64 len,
+ struct bio_pair *bp,
struct bio *bio,
struct rbd_req_coll *coll,
int coll_index)
@@ -1124,6 +1132,7 @@ static int rbd_do_op(struct request *rq,
ret = rbd_do_request(rq, rbd_dev, snapc, snapid,
seg_name, seg_ofs, seg_len,
+ bp,
bio,
NULL, 0,
flags,
@@ -1145,6 +1154,7 @@ static int rbd_req_write(struct request *rq,
struct rbd_device *rbd_dev,
struct ceph_snap_context *snapc,
u64 ofs, u64 len,
+ struct bio_pair *bp,
struct bio *bio,
int coll_index)
@@ -1153,7 +1163,7 @@ static int rbd_req_write(struct request *rq,
CEPH_OSD_OP_WRITE,
CEPH_OSD_FLAG_WRITE | CEPH_OSD_FLAG_ONDISK,
2,
- ofs, len, bio, coll, coll_index);
+ ofs, len, bp, bio, coll, coll_index);
}
/*
@@ -1163,6 +1173,7 @@ static int rbd_req_read(struct request *rq,
struct rbd_device *rbd_dev,
u64 snapid,
u64 ofs, u64 len,
+ struct bio_pair *bp,
struct bio *bio,
struct rbd_req_coll *coll,
int coll_index)
@@ -1172,7 +1183,7 @@ static int rbd_req_read(struct request *rq,
CEPH_OSD_OP_READ,
CEPH_OSD_FLAG_READ,
2,
- ofs, len, bio, coll, coll_index);
+ ofs, len, bp, bio, coll, coll_index);
}
/*
@@ -1215,7 +1226,7 @@ static int rbd_req_sync_notify_ack(struct rbd_device *dev,
ops[0].watch.flag = 0;
ret = rbd_do_request(NULL, dev, NULL, CEPH_NOSNAP,
- obj, 0, 0, NULL,
+ obj, 0, 0, NULL, NULL,
pages, 0,
CEPH_OSD_FLAG_READ,
ops,
@@ -1517,13 +1528,13 @@ static void rbd_rq_fn(struct request_queue *q)
rbd_req_write(rq, rbd_dev,
rbd_dev->header.snapc,
ofs,
- op_size, bio,
+ op_size, bp, bio,
coll, cur_seg);
else
rbd_req_read(rq, rbd_dev,
cur_snap_id(rbd_dev),
ofs,
- op_size, bio,
+ op_size, bp, bio,
coll, cur_seg);
next_seg:
@@ -1535,8 +1546,6 @@ next_seg:
} while (size > 0);
kref_put(&coll->kref, rbd_coll_release);
- if (bp)
- bio_pair_release(bp);
spin_lock_irq(q->queue_lock);
}
}
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-07-02 22:48 ` Yehuda Sadeh
@ 2012-07-03 15:31 ` Guangliang Zhao
2012-07-04 3:09 ` Guangliang Zhao
1 sibling, 0 replies; 8+ messages in thread
From: Guangliang Zhao @ 2012-07-03 15:31 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: ceph-devel, Guangliang Zhao
On Mon, Jul 02, 2012 at 03:48:14PM -0700, Yehuda Sadeh wrote:
> On Mon, Jun 25, 2012 at 12:37 AM, Guangliang Zhao <gzhao@suse.com> wrote:
> > Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> > ---
> > drivers/block/rbd.c | 10 ++++------
> > 1 files changed, 4 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> > index 65665c9..3d6dfc8 100644
> > --- a/drivers/block/rbd.c
> > +++ b/drivers/block/rbd.c
> > @@ -719,8 +719,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> > goto err_out;
> >
> > if (total + old_chain->bi_size > len) {
> > - struct bio_pair *bp;
> > -
> > /*
> > * this split can only happen with a single paged bio,
> > * split_bio will BUG_ON if this is not the case
> > @@ -732,13 +730,13 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> >
> > /* split the bio. We'll release it either in the next
> > call, or it will have to be released outside */
> > - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> > - if (!bp)
> > + *bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> > + if (!*bp)
> > goto err_out;
> >
> > - __bio_clone(tmp, &bp->bio1);
> > + __bio_clone(tmp, &(*bp)->bio1);
> >
> > - *next = &bp->bio2;
> > + *next = &(*bp)->bio2;
> > } else {
> > __bio_clone(tmp, old_chain);
> > *next = old_chain->bi_next;
>
> The above patch looks right, that is, fixes a leak. However, I'm not
> too sure that we're not missing some bigger problem that was hidden
> (due to the leak). Effectively we never called bio_pair_release() on
> the bio_pair that we created in bio_chain_clone(). However, I think we
> should only call that after we're done with sending the data.
The bio_pair is only used to split the origin bio and clone it into the tmp
bios; nothing touches it again, so I think it is safe to release it here.

The bio_pair could actually be freed only after three releases, because its
reference count is initialized to 3 by bio_split and bio_pair_release only
drops the count. There are not enough releases for the bio_pair even if
either of these patches is applied.
I think transferring data with bio1 and bio2 of the bio_pair, instead of the
tmp bio chain, is a better way. It would not only save memory (for the tmp
bio chain) but also make the reference count of the bio_pair easier to manage
(via the callbacks of bio1 and bio2). We can solve the above problems more
easily with this approach.

Thanks for your reply. :)
--
Best regards,
Guangliang
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-07-02 22:48 ` Yehuda Sadeh
2012-07-03 15:31 ` Guangliang Zhao
@ 2012-07-04 3:09 ` Guangliang Zhao
1 sibling, 0 replies; 8+ messages in thread
From: Guangliang Zhao @ 2012-07-04 3:09 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: ceph-devel, Guangliang Zhao
On Mon, Jul 02, 2012 at 03:48:14PM -0700, Yehuda Sadeh wrote:
> On Mon, Jun 25, 2012 at 12:37 AM, Guangliang Zhao <gzhao@suse.com> wrote:
> > Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> > ---
> > drivers/block/rbd.c | 10 ++++------
> > 1 files changed, 4 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> > index 65665c9..3d6dfc8 100644
> > --- a/drivers/block/rbd.c
> > +++ b/drivers/block/rbd.c
> > @@ -719,8 +719,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> > goto err_out;
> >
> > if (total + old_chain->bi_size > len) {
> > - struct bio_pair *bp;
> > -
> > /*
> > * this split can only happen with a single paged bio,
> > * split_bio will BUG_ON if this is not the case
> > @@ -732,13 +730,13 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> >
> > /* split the bio. We'll release it either in the next
> > call, or it will have to be released outside */
> > - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> > - if (!bp)
> > + *bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> > + if (!*bp)
> > goto err_out;
> >
> > - __bio_clone(tmp, &bp->bio1);
> > + __bio_clone(tmp, &(*bp)->bio1);
> >
> > - *next = &bp->bio2;
> > + *next = &(*bp)->bio2;
> > } else {
> > __bio_clone(tmp, old_chain);
> > *next = old_chain->bi_next;
>
> The above patch looks right, that is, fixes a leak. However, I'm not
> too sure that we're not missing some bigger problem that was hidden
> (due to the leak). Effectively we never called bio_pair_release() on
> the bio_pair that we created in bio_chain_clone(). However, I think we
> should only call that after we're done with sending the data.
The bio_pair is only used to split the origin bio and clone it into the tmp
bios; nothing touches it again, so I think it is safe to release it here.

The bio_pair could actually be freed only after three releases, because its
reference count is initialized to 3 by bio_split and bio_pair_release only
drops the count. There are not enough releases for the bio_pair even if
either of these patches is applied.

I think transferring data with bio1 and bio2 of the bio_pair, instead of the
tmp bio chain, is a better way. It would not only save memory (for the tmp
bio chain) but also make the reference count of the bio_pair easier to manage
(via the callbacks of bio1 and bio2). We can solve the above problems more
easily with this approach.

Thanks for your reply. :)
--
Best regards,
Guangliang
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-07-17 20:18 ` Yehuda Sadeh
@ 2012-07-19 13:45 ` Guangliang Zhao
0 siblings, 0 replies; 8+ messages in thread
From: Guangliang Zhao @ 2012-07-19 13:45 UTC (permalink / raw)
To: Yehuda Sadeh; +Cc: ceph-devel
On Tue, Jul 17, 2012 at 01:18:50PM -0700, Yehuda Sadeh wrote:
> On Wed, Jul 11, 2012 at 5:34 AM, Guangliang Zhao <gzhao@suse.com> wrote:
> >
> > The bio_pair alloced in bio_chain_clone would not be freed,
> > this will cause a memory leak. It could be freed actually only
> > after 3 times release, because the reference count of bio_pair
> > is initialized to 3 when bio_split and bio_pair_release only
> > drops the reference count.
> >
> > The function bio_pair_release must be called three times for
> > releasing bio_pair, and the callback functions of bios on the
> > requests will be called when the last release time in bio_pair_release,
> > however, these functions will also be called in rbd_req_cb. In
> > other words, they will be called twice, and it may cause serious
> > consequences.
> >
> > This patch clones bio chian from the origin directly, doesn't use
> > bio_split(without bio_pair). The new bio chain can be release
> > whenever we don't need it.
> >
> > Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> > ---
> > drivers/block/rbd.c | 66 ++++++++++++++++++++-------------------------------
> > 1 files changed, 26 insertions(+), 40 deletions(-)
> >
> > diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> > index 013c7a5..6a12040 100644
> > --- a/drivers/block/rbd.c
> > +++ b/drivers/block/rbd.c
> > @@ -712,51 +712,43 @@ static void zero_bio_chain(struct bio *chain, int start_ofs)
> > }
> > }
> >
> > -/*
> > - * bio_chain_clone - clone a chain of bios up to a certain length.
> > - * might return a bio_pair that will need to be released.
> > +/**
> > + * bio_chain_clone - clone a chain of bios up to a certain length.
> > + * @old: bio to clone
> > + * @offset: start point for bio clone
> > + * @len: length of bio chain
> > + * @gfp_mask: allocation priority
> > + *
> > + * RETURNS:
> > + * Pointer to new bio chain on success, NULL on failure.
> > */
> > -static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> > - struct bio_pair **bp,
> > +static struct bio *bio_chain_clone(struct bio **old, int *offset,
> > int len, gfp_t gfpmask)
> > {
> > struct bio *tmp, *old_chain = *old, *new_chain = NULL, *tail = NULL;
> > int total = 0;
> >
> > - if (*bp) {
> > - bio_pair_release(*bp);
> > - *bp = NULL;
> > - }
> > -
> > while (old_chain && (total < len)) {
> > + int need = len - total;
> > +
> > tmp = bio_kmalloc(gfpmask, old_chain->bi_max_vecs);
> > if (!tmp)
> > goto err_out;
> >
> > - if (total + old_chain->bi_size > len) {
> > - struct bio_pair *bp;
> > -
> > + __bio_clone(tmp, old_chain);
> > + if (total + (tmp->bi_size - *offset) > len) {
>
> can change this to:
> if (tmp->bi_size - *offset > need)
>
> I think it'll be slightly more readable
Yes, I will modify it in the next version.
>
> > + tmp->bi_sector += *offset >> SECTOR_SHIFT;
> > + tmp->bi_io_vec->bv_offset += *offset >> SECTOR_SHIFT;
>
> Shouldn't these two lines be outside this if?
Excellent, the else branch needs it too.
>
>
>
> > /*
> > - * this split can only happen with a single paged bio,
> > - * split_bio will BUG_ON if this is not the case
> > + * This can only happen with a single paged bio,
> > + * rbd_merge_bvec would guarantee it.
> > */
> > - dout("bio_chain_clone split! total=%d remaining=%d"
> > - "bi_size=%d\n",
> > - (int)total, (int)len-total,
> > - (int)old_chain->bi_size);
> > -
> > - /* split the bio. We'll release it either in the next
> > - call, or it will have to be released outside */
> > - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> > - if (!bp)
> > - goto err_out;
> > -
> > - __bio_clone(tmp, &bp->bio1);
> > -
> > - *next = &bp->bio2;
> > + tmp->bi_size = need;
> > + tmp->bi_io_vec->bv_len = need;
> > + *offset += need;
> > } else {
> > - __bio_clone(tmp, old_chain);
> > - *next = old_chain->bi_next;
> > + old_chain = old_chain->bi_next;
> > + *offset = 0;
We also need to adjust the length of the tmp bios in the else branch, so the
following two lines should be added.
tmp->bi_size -= *offset;
tmp->bi_io_vec->bv_offset -= *offset;
> > }
> >
> > tmp->bi_bdev = NULL;
> > @@ -769,7 +761,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> > tail->bi_next = tmp;
> > tail = tmp;
> > }
> > - old_chain = old_chain->bi_next;
> >
> > total += tmp->bi_size;
> > }
> > @@ -1447,13 +1438,12 @@ static void rbd_rq_fn(struct request_queue *q)
> > {
> > struct rbd_device *rbd_dev = q->queuedata;
> > struct request *rq;
> > - struct bio_pair *bp = NULL;
> >
> > while ((rq = blk_fetch_request(q))) {
> > struct bio *bio;
> > - struct bio *rq_bio, *next_bio = NULL;
> > + struct bio *rq_bio;
> > bool do_write;
> > - int size, op_size = 0;
> > + int size, op_size = 0, offset = 0;
> > u64 ofs;
> > int num_segs, cur_seg = 0;
> > struct rbd_req_coll *coll;
> > @@ -1503,7 +1493,7 @@ static void rbd_rq_fn(struct request_queue *q)
> > ofs, size,
> > NULL, NULL);
> > kref_get(&coll->kref);
> > - bio = bio_chain_clone(&rq_bio, &next_bio, &bp,
> > + bio = bio_chain_clone(&rq_bio, &offset,
> > op_size, GFP_ATOMIC);
> > if (!bio) {
> > rbd_coll_end_req_index(rq, coll, cur_seg,
> > @@ -1531,12 +1521,8 @@ next_seg:
> > ofs += op_size;
> >
> > cur_seg++;
> > - rq_bio = next_bio;
> > } while (size > 0);
> > kref_put(&coll->kref, rbd_coll_release);
> > -
> > - if (bp)
> > - bio_pair_release(bp);
> > spin_lock_irq(q->queue_lock);
> > }
> > }
>
> Yeah, looks cleaner.
>
> Thanks,
> Yehuda
I will send patch v2 if everything mentioned above is OK.
Thanks for your review.
--
Best regards,
Guangliang
* Re: [PATCH] rbd: fix the memory leak of bio_chain_clone
2012-07-11 12:34 Guangliang Zhao
@ 2012-07-17 20:18 ` Yehuda Sadeh
2012-07-19 13:45 ` Guangliang Zhao
0 siblings, 1 reply; 8+ messages in thread
From: Yehuda Sadeh @ 2012-07-17 20:18 UTC (permalink / raw)
To: Guangliang Zhao; +Cc: ceph-devel, sage
On Wed, Jul 11, 2012 at 5:34 AM, Guangliang Zhao <gzhao@suse.com> wrote:
>
> The bio_pair alloced in bio_chain_clone would not be freed,
> this will cause a memory leak. It could be freed actually only
> after 3 times release, because the reference count of bio_pair
> is initialized to 3 when bio_split and bio_pair_release only
> drops the reference count.
>
> The function bio_pair_release must be called three times for
> releasing bio_pair, and the callback functions of bios on the
> requests will be called when the last release time in bio_pair_release,
> however, these functions will also be called in rbd_req_cb. In
> other words, they will be called twice, and it may cause serious
> consequences.
>
> This patch clones bio chian from the origin directly, doesn't use
> bio_split(without bio_pair). The new bio chain can be release
> whenever we don't need it.
>
> Signed-off-by: Guangliang Zhao <gzhao@suse.com>
> ---
> drivers/block/rbd.c | 66 ++++++++++++++++++++-------------------------------
> 1 files changed, 26 insertions(+), 40 deletions(-)
>
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 013c7a5..6a12040 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -712,51 +712,43 @@ static void zero_bio_chain(struct bio *chain, int start_ofs)
> }
> }
>
> -/*
> - * bio_chain_clone - clone a chain of bios up to a certain length.
> - * might return a bio_pair that will need to be released.
> +/**
> + * bio_chain_clone - clone a chain of bios up to a certain length.
> + * @old: bio to clone
> + * @offset: start point for bio clone
> + * @len: length of bio chain
> + * @gfp_mask: allocation priority
> + *
> + * RETURNS:
> + * Pointer to new bio chain on success, NULL on failure.
> */
> -static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> - struct bio_pair **bp,
> +static struct bio *bio_chain_clone(struct bio **old, int *offset,
> int len, gfp_t gfpmask)
> {
> struct bio *tmp, *old_chain = *old, *new_chain = NULL, *tail = NULL;
> int total = 0;
>
> - if (*bp) {
> - bio_pair_release(*bp);
> - *bp = NULL;
> - }
> -
> while (old_chain && (total < len)) {
> + int need = len - total;
> +
> tmp = bio_kmalloc(gfpmask, old_chain->bi_max_vecs);
> if (!tmp)
> goto err_out;
>
> - if (total + old_chain->bi_size > len) {
> - struct bio_pair *bp;
> -
> + __bio_clone(tmp, old_chain);
> + if (total + (tmp->bi_size - *offset) > len) {
can change this to:
if (tmp->bi_size - *offset > need)
I think it'll be slightly more readable
> + tmp->bi_sector += *offset >> SECTOR_SHIFT;
> + tmp->bi_io_vec->bv_offset += *offset >> SECTOR_SHIFT;
Shouldn't these two lines be outside this if?
> /*
> - * this split can only happen with a single paged bio,
> - * split_bio will BUG_ON if this is not the case
> + * This can only happen with a single paged bio,
> + * rbd_merge_bvec would guarantee it.
> */
> - dout("bio_chain_clone split! total=%d remaining=%d"
> - "bi_size=%d\n",
> - (int)total, (int)len-total,
> - (int)old_chain->bi_size);
> -
> - /* split the bio. We'll release it either in the next
> - call, or it will have to be released outside */
> - bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
> - if (!bp)
> - goto err_out;
> -
> - __bio_clone(tmp, &bp->bio1);
> -
> - *next = &bp->bio2;
> + tmp->bi_size = need;
> + tmp->bi_io_vec->bv_len = need;
> + *offset += need;
> } else {
> - __bio_clone(tmp, old_chain);
> - *next = old_chain->bi_next;
> + old_chain = old_chain->bi_next;
> + *offset = 0;
> }
>
> tmp->bi_bdev = NULL;
> @@ -769,7 +761,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
> tail->bi_next = tmp;
> tail = tmp;
> }
> - old_chain = old_chain->bi_next;
>
> total += tmp->bi_size;
> }
> @@ -1447,13 +1438,12 @@ static void rbd_rq_fn(struct request_queue *q)
> {
> struct rbd_device *rbd_dev = q->queuedata;
> struct request *rq;
> - struct bio_pair *bp = NULL;
>
> while ((rq = blk_fetch_request(q))) {
> struct bio *bio;
> - struct bio *rq_bio, *next_bio = NULL;
> + struct bio *rq_bio;
> bool do_write;
> - int size, op_size = 0;
> + int size, op_size = 0, offset = 0;
> u64 ofs;
> int num_segs, cur_seg = 0;
> struct rbd_req_coll *coll;
> @@ -1503,7 +1493,7 @@ static void rbd_rq_fn(struct request_queue *q)
> ofs, size,
> NULL, NULL);
> kref_get(&coll->kref);
> - bio = bio_chain_clone(&rq_bio, &next_bio, &bp,
> + bio = bio_chain_clone(&rq_bio, &offset,
> op_size, GFP_ATOMIC);
> if (!bio) {
> rbd_coll_end_req_index(rq, coll, cur_seg,
> @@ -1531,12 +1521,8 @@ next_seg:
> ofs += op_size;
>
> cur_seg++;
> - rq_bio = next_bio;
> } while (size > 0);
> kref_put(&coll->kref, rbd_coll_release);
> -
> - if (bp)
> - bio_pair_release(bp);
> spin_lock_irq(q->queue_lock);
> }
> }
Yeah, looks cleaner.
Thanks,
Yehuda
* [PATCH] rbd: fix the memory leak of bio_chain_clone
@ 2012-07-11 12:34 Guangliang Zhao
2012-07-17 20:18 ` Yehuda Sadeh
0 siblings, 1 reply; 8+ messages in thread
From: Guangliang Zhao @ 2012-07-11 12:34 UTC (permalink / raw)
To: ceph-devel; +Cc: yehuda, sage
The bio_pair allocated in bio_chain_clone would not be freed,
which causes a memory leak. It could actually be freed only
after three releases, because the reference count of a bio_pair
is initialized to 3 by bio_split and bio_pair_release only
drops the reference count.

The function bio_pair_release must be called three times to
release a bio_pair, and the callback functions of the bios on
the requests are invoked by the last of those releases; however,
these functions are also called in rbd_req_cb. In other words,
they would be called twice, which may cause serious consequences.

This patch clones the bio chain from the origin directly, without
using bio_split (and thus without a bio_pair). The new bio chain
can be released whenever we no longer need it.
Signed-off-by: Guangliang Zhao <gzhao@suse.com>
---
drivers/block/rbd.c | 66 ++++++++++++++++++++-------------------------------
1 files changed, 26 insertions(+), 40 deletions(-)
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 013c7a5..6a12040 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -712,51 +712,43 @@ static void zero_bio_chain(struct bio *chain, int start_ofs)
}
}
-/*
- * bio_chain_clone - clone a chain of bios up to a certain length.
- * might return a bio_pair that will need to be released.
+/**
+ * bio_chain_clone - clone a chain of bios up to a certain length.
+ * @old: bio to clone
+ * @offset: start point for bio clone
+ * @len: length of bio chain
+ * @gfp_mask: allocation priority
+ *
+ * RETURNS:
+ * Pointer to new bio chain on success, NULL on failure.
*/
-static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
- struct bio_pair **bp,
+static struct bio *bio_chain_clone(struct bio **old, int *offset,
int len, gfp_t gfpmask)
{
struct bio *tmp, *old_chain = *old, *new_chain = NULL, *tail = NULL;
int total = 0;
- if (*bp) {
- bio_pair_release(*bp);
- *bp = NULL;
- }
-
while (old_chain && (total < len)) {
+ int need = len - total;
+
tmp = bio_kmalloc(gfpmask, old_chain->bi_max_vecs);
if (!tmp)
goto err_out;
- if (total + old_chain->bi_size > len) {
- struct bio_pair *bp;
-
+ __bio_clone(tmp, old_chain);
+ if (total + (tmp->bi_size - *offset) > len) {
+ tmp->bi_sector += *offset >> SECTOR_SHIFT;
+ tmp->bi_io_vec->bv_offset += *offset >> SECTOR_SHIFT;
/*
- * this split can only happen with a single paged bio,
- * split_bio will BUG_ON if this is not the case
+ * This can only happen with a single paged bio,
+ * rbd_merge_bvec would guarantee it.
*/
- dout("bio_chain_clone split! total=%d remaining=%d"
- "bi_size=%d\n",
- (int)total, (int)len-total,
- (int)old_chain->bi_size);
-
- /* split the bio. We'll release it either in the next
- call, or it will have to be released outside */
- bp = bio_split(old_chain, (len - total) / SECTOR_SIZE);
- if (!bp)
- goto err_out;
-
- __bio_clone(tmp, &bp->bio1);
-
- *next = &bp->bio2;
+ tmp->bi_size = need;
+ tmp->bi_io_vec->bv_len = need;
+ *offset += need;
} else {
- __bio_clone(tmp, old_chain);
- *next = old_chain->bi_next;
+ old_chain = old_chain->bi_next;
+ *offset = 0;
}
tmp->bi_bdev = NULL;
@@ -769,7 +761,6 @@ static struct bio *bio_chain_clone(struct bio **old, struct bio **next,
tail->bi_next = tmp;
tail = tmp;
}
- old_chain = old_chain->bi_next;
total += tmp->bi_size;
}
@@ -1447,13 +1438,12 @@ static void rbd_rq_fn(struct request_queue *q)
{
struct rbd_device *rbd_dev = q->queuedata;
struct request *rq;
- struct bio_pair *bp = NULL;
while ((rq = blk_fetch_request(q))) {
struct bio *bio;
- struct bio *rq_bio, *next_bio = NULL;
+ struct bio *rq_bio;
bool do_write;
- int size, op_size = 0;
+ int size, op_size = 0, offset = 0;
u64 ofs;
int num_segs, cur_seg = 0;
struct rbd_req_coll *coll;
@@ -1503,7 +1493,7 @@ static void rbd_rq_fn(struct request_queue *q)
ofs, size,
NULL, NULL);
kref_get(&coll->kref);
- bio = bio_chain_clone(&rq_bio, &next_bio, &bp,
+ bio = bio_chain_clone(&rq_bio, &offset,
op_size, GFP_ATOMIC);
if (!bio) {
rbd_coll_end_req_index(rq, coll, cur_seg,
@@ -1531,12 +1521,8 @@ next_seg:
ofs += op_size;
cur_seg++;
- rq_bio = next_bio;
} while (size > 0);
kref_put(&coll->kref, rbd_coll_release);
-
- if (bp)
- bio_pair_release(bp);
spin_lock_irq(q->queue_lock);
}
}
--
1.7.3.4
end of thread, other threads:[~2012-07-19 13:54 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-25 7:37 [PATCH] rbd: fix the memory leak of bio_chain_clone Guangliang Zhao
2012-07-02 18:39 ` Gregory Farnum
2012-07-02 22:48 ` Yehuda Sadeh
2012-07-03 15:31 ` Guangliang Zhao
2012-07-04 3:09 ` Guangliang Zhao
2012-07-11 12:34 Guangliang Zhao
2012-07-17 20:18 ` Yehuda Sadeh
2012-07-19 13:45 ` Guangliang Zhao