* [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Jeff Layton @ 2020-07-26 12:28 UTC
  To: ceph-devel

Once we've replaced it, we don't want to keep the old one around
anymore.

Signed-off-by: Jeff Layton <jlayton@kernel.org>
---
 fs/ceph/addr.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 01ad09733ac7..01e167efa104 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 			       locked_pages * sizeof(*pages));
 			memset(data_pages + i, 0,
 			       locked_pages * sizeof(*pages));
+			kfree(data_pages);
 		} else {
 			BUG_ON(num_ops != req->r_num_ops);
 			index = pages[i - 1]->index + 1;
-- 
2.26.2
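
For context, the code around this hunk reads roughly as follows (a sketch
reconstructed from the quoted diff context; everything outside the hunk is
paraphrased and may not match the exact tree):

	/* more dirty pages were gathered than fit in this OSD request, so
	 * allocate a fresh array for the remainder and clear the leftover
	 * slots of the old one */
	if (i < locked_pages) {
		data_pages = pages;		/* hold on to the old array */
		pages = kmalloc_array(locked_pages, sizeof(*pages),
				      GFP_NOFS);
		/* ... fallback to the wb_pagevec mempool elided ... */
		memcpy(pages, data_pages + i,
		       locked_pages * sizeof(*pages));
		memset(data_pages + i, 0,
		       locked_pages * sizeof(*pages));
		kfree(data_pages);	/* the fix proposed above, later retracted */
	} else {
		BUG_ON(num_ops != req->r_num_ops);
		index = pages[i - 1]->index + 1;
	}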


* Re: [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Xiubo Li @ 2020-07-27 12:16 UTC
  To: Jeff Layton, ceph-devel

On 2020/7/26 20:28, Jeff Layton wrote:
> Once we've replaced it, we don't want to keep the old one around
> anymore.
>
> Signed-off-by: Jeff Layton <jlayton@kernel.org>
> ---
>   fs/ceph/addr.c | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 01ad09733ac7..01e167efa104 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>   			       locked_pages * sizeof(*pages));
>   			memset(data_pages + i, 0,
>   			       locked_pages * sizeof(*pages));

BTW, do we still need to memset() the data_pages?

> +			kfree(data_pages);
>   		} else {
>   			BUG_ON(num_ops != req->r_num_ops);
>   			index = pages[i - 1]->index + 1;


* Re: [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Jeff Layton @ 2020-07-27 13:18 UTC
  To: Xiubo Li, ceph-devel

On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
> On 2020/7/26 20:28, Jeff Layton wrote:
> > Once we've replaced it, we don't want to keep the old one around
> > anymore.
> > 
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> >   fs/ceph/addr.c | 1 +
> >   1 file changed, 1 insertion(+)
> > 
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index 01ad09733ac7..01e167efa104 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
> >   			       locked_pages * sizeof(*pages));
> >   			memset(data_pages + i, 0,
> >   			       locked_pages * sizeof(*pages));
> 
> BTW, do we still need to memset() the data_pages?
> 

Self-NAK on this patch...

Zheng pointed out that this array is actually freed by the request
handler after the submission. This loop is creating a new pages array
for a second request.

As far as whether we need to memset the end of the original array...I
don't think we do. It looks like the pointers at the end of the array
are ignored once you go past the length of the request. That said, it's
fairly cheap to do so, and I'm not inclined to change it, just in case
there is code that does look at those pointers.
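
The handoff can be sketched as follows (illustrative only; the argument
list is simplified and op_idx, len and pool are placeholders rather than
the exact call in ceph_writepages_start()):

	/* the submission path attaches the array to the request;
	 * req->r_callback is writepages_finish() */
	osd_req_op_extent_osd_data_pages(req, op_idx, data_pages, len,
					 0, !!pool, false);

	/* once the write completes, writepages_finish() frees that same
	 * array (see the code quoted later in this thread), so an extra
	 * kfree(data_pages) at submission time would be a double free */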

> 
> > +			kfree(data_pages);
> >   		} else {
> >   			BUG_ON(num_ops != req->r_num_ops);
> >   			index = pages[i - 1]->index + 1;
> 
> 

-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Xiubo Li @ 2020-07-27 13:27 UTC
  To: Jeff Layton, ceph-devel

On 2020/7/27 21:18, Jeff Layton wrote:
> On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
>> On 2020/7/26 20:28, Jeff Layton wrote:
>>> Once we've replaced it, we don't want to keep the old one around
>>> anymore.
>>>
>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>> ---
>>>    fs/ceph/addr.c | 1 +
>>>    1 file changed, 1 insertion(+)
>>>
>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>> index 01ad09733ac7..01e167efa104 100644
>>> --- a/fs/ceph/addr.c
>>> +++ b/fs/ceph/addr.c
>>> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>>>    			       locked_pages * sizeof(*pages));
>>>    			memset(data_pages + i, 0,
>>>    			       locked_pages * sizeof(*pages));
>> BTW, do we still need to memset() the data_pages?
>>
> Self-NAK on this patch...
>
> Zheng pointed out that this array is actually freed by the request
> handler after the submission. This loop is creating a new pages array
> for a second request.

Do you mean ceph_osd_data_release()?

The request is only freeing the pages in that array, not the array
itself. Did I miss something?


> As far as whether we need to memset the end of the original array...I
> don't think we do. It looks like the pointers at the end of the array
> are ignored once you go past the length of the request. That said, it's
> fairly cheap to do so, and I'm not inclined to change it, just in case
> there is code that does look at those pointers.
>
>>> +			kfree(data_pages);
>>>    		} else {
>>>    			BUG_ON(num_ops != req->r_num_ops);
>>>    			index = pages[i - 1]->index + 1;
>>


* Re: [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Jeff Layton @ 2020-07-27 13:35 UTC
  To: Xiubo Li, ceph-devel

On Mon, 2020-07-27 at 21:27 +0800, Xiubo Li wrote:
> On 2020/7/27 21:18, Jeff Layton wrote:
> > On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
> > > On 2020/7/26 20:28, Jeff Layton wrote:
> > > > Once we've replaced it, we don't want to keep the old one around
> > > > anymore.
> > > > 
> > > > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > > > ---
> > > >    fs/ceph/addr.c | 1 +
> > > >    1 file changed, 1 insertion(+)
> > > > 
> > > > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > > > index 01ad09733ac7..01e167efa104 100644
> > > > --- a/fs/ceph/addr.c
> > > > +++ b/fs/ceph/addr.c
> > > > @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
> > > >    			       locked_pages * sizeof(*pages));
> > > >    			memset(data_pages + i, 0,
> > > >    			       locked_pages * sizeof(*pages));
> > > BTW, do we still need to memset() the data_pages?
> > > 
> > Self-NAK on this patch...
> > 
> > Zheng pointed out that this array is actually freed by the request
> > handler after the submission. This loop is creating a new pages array
> > for a second request.
> 
> Do you mean ceph_osd_data_release()?
> 
> The request is only freeing the pages in that array, not the array
> itself. Did I miss something?
> 
> 

No, I meant in writepages_finish(). It has this:

        if (osd_data->pages_from_pool)
                mempool_free(osd_data->pages,
                             ceph_sb_to_client(inode->i_sb)->wb_pagevec_pool);
        else
                kfree(osd_data->pages);

The pages themselves are freed in the loop above that point, however.
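
That loop looks roughly like the following (a paraphrased sketch, not
verbatim; per-page snap context and accounting handling is elided):

	for (i = 0; i < req->r_num_ops; i++) {
		osd_data = osd_req_op_extent_osd_data(req, i);
		num_pages = calc_pages_for((u64)osd_data->alignment,
					   (u64)osd_data->length);
		for (j = 0; j < num_pages; j++) {
			struct page *page = osd_data->pages[j];

			end_page_writeback(page);
			unlock_page(page);
		}
	}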

> > As far as whether we need to memset the end of the original array...I
> > don't think we do. It looks like the pointers at the end of the array
> > are ignored once you go past the length of the request. That said, it's
> > fairly cheap to do so, and I'm not inclined to change it, just in case
> > there is code that does look at those pointers.
> > 
> > > > +			kfree(data_pages);
> > > >    		} else {
> > > >    			BUG_ON(num_ops != req->r_num_ops);
> > > >    			index = pages[i - 1]->index + 1;

-- 
Jeff Layton <jlayton@kernel.org>


* Re: [PATCH] ceph: fix memory leak when reallocating pages array for writepages
From: Xiubo Li @ 2020-07-27 13:44 UTC
  To: Jeff Layton, ceph-devel

On 2020/7/27 21:35, Jeff Layton wrote:
> On Mon, 2020-07-27 at 21:27 +0800, Xiubo Li wrote:
>> On 2020/7/27 21:18, Jeff Layton wrote:
>>> On Mon, 2020-07-27 at 20:16 +0800, Xiubo Li wrote:
>>>> On 2020/7/26 20:28, Jeff Layton wrote:
>>>>> Once we've replaced it, we don't want to keep the old one around
>>>>> anymore.
>>>>>
>>>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>>>> ---
>>>>>     fs/ceph/addr.c | 1 +
>>>>>     1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>>>> index 01ad09733ac7..01e167efa104 100644
>>>>> --- a/fs/ceph/addr.c
>>>>> +++ b/fs/ceph/addr.c
>>>>> @@ -1212,6 +1212,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>>>>>     			       locked_pages * sizeof(*pages));
>>>>>     			memset(data_pages + i, 0,
>>>>>     			       locked_pages * sizeof(*pages));
>>>> BTW, do we still need to memset() the data_pages?
>>>>
>>> Self-NAK on this patch...
>>>
>>> Zheng pointed out that this array is actually freed by the request
>>> handler after the submission. This loop is creating a new pages array
>>> for a second request.
>> Do you mean ceph_osd_data_release()?
>>
>> The request is only freeing the pages in that array, not the array
>> itself. Did I miss something?
>>
>>
> No, I meant in writepages_finish(). It has this:
>
>          if (osd_data->pages_from_pool)
>                  mempool_free(osd_data->pages,
>                               ceph_sb_to_client(inode->i_sb)->wb_pagevec_pool);
>          else
>                  kfree(osd_data->pages);
>
> The pages themselves are freed in the loop above that point, however.

Okay, that's right.

Thanks.


>>> As far as whether we need to memset the end of the original array...I
>>> don't think we do. It looks like the pointers at the end of the array
>>> are ignored once you go past the length of the request. That said, it's
>>> fairly cheap to do so, and I'm not inclined to change it, just in case
>>> there is code that does look at those pointers.
>>>
>>>>> +			kfree(data_pages);
>>>>>     		} else {
>>>>>     			BUG_ON(num_ops != req->r_num_ops);
>>>>>     			index = pages[i - 1]->index + 1;
