* [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code
@ 2021-07-05 16:41 Bob Pearson
2021-07-06 5:48 ` Zhu Yanjun
2021-07-15 17:56 ` Jason Gunthorpe
From: Bob Pearson @ 2021-07-05 16:41 UTC (permalink / raw)
To: jgg, zyjzyj2000, linux-rdma, haakon.brugge, yang.jy; +Cc: Bob Pearson
In rxe_mr_init_user() in rxe_mr.c, the third error path fails to free the
memory at mr->map. This patch adds code to do that.
This error only occurs if page_address() fails to return a non-zero address,
which should never happen on 64-bit architectures.
Fixes: 8700e3e7c485 ("Soft RoCE driver")
Reported-by: Haakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
v2:
Left out whitespace changes.
drivers/infiniband/sw/rxe/rxe_mr.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 6aabcb4de235..be4bcb420fab 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -113,13 +113,14 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
int num_buf;
void *vaddr;
int err;
+ int i;
umem = ib_umem_get(pd->ibpd.device, start, length, access);
if (IS_ERR(umem)) {
- pr_warn("err %d from rxe_umem_get\n",
- (int)PTR_ERR(umem));
+ pr_warn("%s: Unable to pin memory region err = %d\n",
+ __func__, (int)PTR_ERR(umem));
err = PTR_ERR(umem);
- goto err1;
+ goto err_out;
}
mr->umem = umem;
@@ -129,9 +130,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
err = rxe_mr_alloc(mr, num_buf);
if (err) {
- pr_warn("err %d from rxe_mr_alloc\n", err);
- ib_umem_release(umem);
- goto err1;
+ pr_warn("%s: Unable to allocate memory for map\n",
+ __func__);
+ goto err_release_umem;
}
mr->page_shift = PAGE_SHIFT;
@@ -151,10 +152,10 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
vaddr = page_address(sg_page_iter_page(&sg_iter));
if (!vaddr) {
- pr_warn("null vaddr\n");
- ib_umem_release(umem);
+ pr_warn("%s: Unable to get virtual address\n",
+ __func__);
err = -ENOMEM;
- goto err1;
+ goto err_cleanup_map;
}
buf->addr = (uintptr_t)vaddr;
@@ -177,7 +178,13 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
return 0;
-err1:
+err_cleanup_map:
+ for (i = 0; i < mr->num_map; i++)
+ kfree(mr->map[i]);
+ kfree(mr->map);
+err_release_umem:
+ ib_umem_release(umem);
+err_out:
return err;
}
--
2.30.2
* Re: [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code
2021-07-05 16:41 [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code Bob Pearson
@ 2021-07-06 5:48 ` Zhu Yanjun
2021-07-06 10:10 ` Haakon Bugge
2021-07-15 17:56 ` Jason Gunthorpe
From: Zhu Yanjun @ 2021-07-06 5:48 UTC (permalink / raw)
To: Bob Pearson; +Cc: Jason Gunthorpe, RDMA mailing list, haakon.brugge, yang.jy
On Tue, Jul 6, 2021 at 12:42 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>
> In rxe_mr_init_user() in rxe_mr.c, the third error path fails to free the
> memory at mr->map. This patch adds code to do that.
> This error only occurs if page_address() fails to return a non-zero address,
> which should never happen on 64-bit architectures.
>
> Fixes: 8700e3e7c485 ("Soft RoCE driver")
> Reported-by: Haakon Bugge <haakon.bugge@oracle.com>
> Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
Thanks a lot.
Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Zhu Yanjun
> ---
> v2:
> Left out white space changes.
>
> drivers/infiniband/sw/rxe/rxe_mr.c | 27 +++++++++++++++++----------
> 1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
> index 6aabcb4de235..be4bcb420fab 100644
> --- a/drivers/infiniband/sw/rxe/rxe_mr.c
> +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
> @@ -113,13 +113,14 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
> int num_buf;
> void *vaddr;
> int err;
> + int i;
Thanks.
>
> umem = ib_umem_get(pd->ibpd.device, start, length, access);
> if (IS_ERR(umem)) {
> - pr_warn("err %d from rxe_umem_get\n",
> - (int)PTR_ERR(umem));
> + pr_warn("%s: Unable to pin memory region err = %d\n",
> + __func__, (int)PTR_ERR(umem));
> err = PTR_ERR(umem);
> - goto err1;
> + goto err_out;
> }
>
> mr->umem = umem;
> @@ -129,9 +130,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>
> err = rxe_mr_alloc(mr, num_buf);
> if (err) {
> - pr_warn("err %d from rxe_mr_alloc\n", err);
> - ib_umem_release(umem);
> - goto err1;
> + pr_warn("%s: Unable to allocate memory for map\n",
> + __func__);
> + goto err_release_umem;
> }
>
> mr->page_shift = PAGE_SHIFT;
> @@ -151,10 +152,10 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>
> vaddr = page_address(sg_page_iter_page(&sg_iter));
> if (!vaddr) {
> - pr_warn("null vaddr\n");
> - ib_umem_release(umem);
> + pr_warn("%s: Unable to get virtual address\n",
> + __func__);
> err = -ENOMEM;
> - goto err1;
> + goto err_cleanup_map;
> }
>
> buf->addr = (uintptr_t)vaddr;
> @@ -177,7 +178,13 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>
> return 0;
>
> -err1:
> +err_cleanup_map:
> + for (i = 0; i < mr->num_map; i++)
> + kfree(mr->map[i]);
> + kfree(mr->map);
> +err_release_umem:
> + ib_umem_release(umem);
> +err_out:
> return err;
> }
>
> --
> 2.30.2
>
* Re: [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code
2021-07-06 5:48 ` Zhu Yanjun
@ 2021-07-06 10:10 ` Haakon Bugge
From: Haakon Bugge @ 2021-07-06 10:10 UTC (permalink / raw)
To: Bob Pearson
Cc: Jason Gunthorpe, OFED mailing list, yang.jy, Haakon Bugge, Zhu Yanjun
> On 6 Jul 2021, at 07:48, Zhu Yanjun <zyjzyj2000@gmail.com> wrote:
>
> On Tue, Jul 6, 2021 at 12:42 AM Bob Pearson <rpearsonhpe@gmail.com> wrote:
>>
>> In rxe_mr_init_user() in rxe_mr.c, the third error path fails to free the
>> memory at mr->map. This patch adds code to do that.
>> This error only occurs if page_address() fails to return a non-zero address,
>> which should never happen on 64-bit architectures.
>>
>> Fixes: 8700e3e7c485 ("Soft RoCE driver")
>> Reported-by: Haakon Bugge <haakon.bugge@oracle.com>
>> Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
>
> Thanks a lot.
>
> Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Thxs, Håkon
> Zhu Yanjun
>
>> ---
>> v2:
>> Left out white space changes.
>>
>> drivers/infiniband/sw/rxe/rxe_mr.c | 27 +++++++++++++++++----------
>> 1 file changed, 17 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
>> index 6aabcb4de235..be4bcb420fab 100644
>> --- a/drivers/infiniband/sw/rxe/rxe_mr.c
>> +++ b/drivers/infiniband/sw/rxe/rxe_mr.c
>> @@ -113,13 +113,14 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>> int num_buf;
>> void *vaddr;
>> int err;
>> + int i;
>
> Thanks.
>
>>
>> umem = ib_umem_get(pd->ibpd.device, start, length, access);
>> if (IS_ERR(umem)) {
>> - pr_warn("err %d from rxe_umem_get\n",
>> - (int)PTR_ERR(umem));
>> + pr_warn("%s: Unable to pin memory region err = %d\n",
>> + __func__, (int)PTR_ERR(umem));
>> err = PTR_ERR(umem);
>> - goto err1;
>> + goto err_out;
>> }
>>
>> mr->umem = umem;
>> @@ -129,9 +130,9 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>>
>> err = rxe_mr_alloc(mr, num_buf);
>> if (err) {
>> - pr_warn("err %d from rxe_mr_alloc\n", err);
>> - ib_umem_release(umem);
>> - goto err1;
>> + pr_warn("%s: Unable to allocate memory for map\n",
>> + __func__);
>> + goto err_release_umem;
>> }
>>
>> mr->page_shift = PAGE_SHIFT;
>> @@ -151,10 +152,10 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>>
>> vaddr = page_address(sg_page_iter_page(&sg_iter));
>> if (!vaddr) {
>> - pr_warn("null vaddr\n");
>> - ib_umem_release(umem);
>> + pr_warn("%s: Unable to get virtual address\n",
>> + __func__);
>> err = -ENOMEM;
>> - goto err1;
>> + goto err_cleanup_map;
>> }
>>
>> buf->addr = (uintptr_t)vaddr;
>> @@ -177,7 +178,13 @@ int rxe_mr_init_user(struct rxe_pd *pd, u64 start, u64 length, u64 iova,
>>
>> return 0;
>>
>> -err1:
>> +err_cleanup_map:
>> + for (i = 0; i < mr->num_map; i++)
>> + kfree(mr->map[i]);
>> + kfree(mr->map);
>> +err_release_umem:
>> + ib_umem_release(umem);
>> +err_out:
>> return err;
>> }
>>
>> --
>> 2.30.2
* Re: [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code
2021-07-05 16:41 [PATCH for-next v2] RDMA/rxe: Fix memory leak in error path code Bob Pearson
2021-07-06 5:48 ` Zhu Yanjun
@ 2021-07-15 17:56 ` Jason Gunthorpe
From: Jason Gunthorpe @ 2021-07-15 17:56 UTC (permalink / raw)
To: Bob Pearson; +Cc: zyjzyj2000, linux-rdma, haakon.brugge, yang.jy
On Mon, Jul 05, 2021 at 11:41:54AM -0500, Bob Pearson wrote:
> In rxe_mr_init_user() in rxe_mr.c, the third error path fails to free the
> memory at mr->map. This patch adds code to do that.
> This error only occurs if page_address() fails to return a non-zero address,
> which should never happen on 64-bit architectures.
>
> Fixes: 8700e3e7c485 ("Soft RoCE driver")
> Reported-by: Haakon Bugge <haakon.bugge@oracle.com>
> Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
> Reviewed-by: Zhu Yanjun <zyjzyj2000@gmail.com>
> Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
> ---
> v2:
> Left out white space changes.
Applied to for-rc, thanks
Jason