* [Question] A novel case happened when using mempool allocate memory.
@ 2018-08-01 15:31 zhong jiang
From: zhong jiang @ 2018-08-01 15:31 UTC (permalink / raw)
  To: Michal Hocko, Johannes Weiner, mgorman, Joonsoo Kim,
	Laura Abbott, Hugh Dickins, Oleg Nesterov
  Cc: Linux Memory Management List, LKML

Hi everyone,

I ran across the following unusual case, which looks like a memory leak, on
linux-4.1 stable when allocating memory objects with kmem_cache_alloc. It can
only rarely be reproduced.

I create a dedicated mempool, backed by a slab cache with a 24k object size,
so it cannot be merged with any other kmem cache. I track allocations and
frees with atomic_add/sub. After a while, I see this slab cache consume most
of the system's memory. After halting the code execution, the allocation and
free counters are equal, so I am sure the module has released all of its
memory. Yet, according to /proc/slabinfo, the usage of this slab cache
remains very high, although stable.

Strangely, the slab cache does give all of the memory back when the module is
unregistered. I captured the following information from the cache's data
structures while the module's execution was halted.


kmem_cache (the 24k cache):
        object_size = 24576
        min_partial = 7
        cpu_partial = 2

kmem_cache_node:
        nr_partial = 1
        partial = {
                next = 0xffff7c00085cae20,
                prev = 0xffff7c00085cae20
        }
        nr_slabs = {
                counter = 365610
        }
        total_objects = {
                counter = 365610
        }
        full = {
                next = 0xffff8013e44f75f0,
                prev = 0xffff8013e44f75f0
        }
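
In rough numbers, if each slab holds a single 24k object in an order-3
(32 KiB) page, which the equal nr_slabs and total_objects counters suggest,
that would be about 365610 * 32 KiB, roughly 11 GiB of pages still pinned by
this cache.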

From the above limited information we can see that the node's full list is
empty and the partial list holds only a single slab, which contains one
object. I think most of the slabs must be sitting on the per-CPU partial
lists, even though that seems impossible in theory, because the memory held
by the slabs is in fact released when the module is unregistered.

However, I have checked the code (mm/slub.c) carefully and cannot find
anything to support this assumption.
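
One way I plan to test this assumption is to flush the per-CPU partial slabs
back to the node lists with kmem_cache_shrink() while execution is halted and
then re-check /proc/slabinfo; a minimal sketch (the cache pointer name is
only illustrative):

#include <linux/slab.h>

/*
 * Force per-CPU (partial) slabs back to the node lists so that empty
 * slabs can be discarded.  "my_cache" stands for the dedicated 24k
 * kmem_cache that backs the mempool.
 */
static void flush_my_cache(struct kmem_cache *my_cache)
{
        kmem_cache_shrink(my_cache);
}

If the slab usage drops after this, the objects really were parked on the
per-CPU partial lists.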

I would appreciate any ideas about this case.


Thanks
zhong jiang



* Re: [Question] A novel case happened when using mempool allocate memory.
From: Matthew Wilcox @ 2018-08-01 15:37 UTC (permalink / raw)
  To: zhong jiang
  Cc: Michal Hocko, Johannes Weiner, mgorman, Joonsoo Kim,
	Laura Abbott, Hugh Dickins, Oleg Nesterov,
	Linux Memory Management List, LKML

On Wed, Aug 01, 2018 at 11:31:15PM +0800, zhong jiang wrote:
> Hi everyone,
> 
> I ran across the following unusual case, which looks like a memory leak, on
> linux-4.1 stable when allocating memory objects with kmem_cache_alloc. It
> can only rarely be reproduced.
> 
> I create a dedicated mempool, backed by a slab cache with a 24k object size,
> so it cannot be merged with any other kmem cache. I track allocations and
> frees with atomic_add/sub. After a while, I see this slab cache consume most
> of the system's memory. After halting the code execution, the allocation and
> free counters are equal, so I am sure the module has released all of its
> memory. Yet, according to /proc/slabinfo, the usage of this slab cache
> remains very high, although stable.

Please post the code.


* Re: [Question] A novel case happened when using mempool allocate memory.
From: zhong jiang @ 2018-08-02  6:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Michal Hocko, Johannes Weiner, mgorman, Joonsoo Kim,
	Laura Abbott, Hugh Dickins, Oleg Nesterov,
	Linux Memory Management List, LKML

On 2018/8/1 23:37, Matthew Wilcox wrote:
> On Wed, Aug 01, 2018 at 11:31:15PM +0800, zhong jiang wrote:
>> Hi everyone,
>>
>> I ran across the following unusual case, which looks like a memory leak, on
>> linux-4.1 stable when allocating memory objects with kmem_cache_alloc. It
>> can only rarely be reproduced.
>>
>> I create a dedicated mempool, backed by a slab cache with a 24k object size,
>> so it cannot be merged with any other kmem cache. I track allocations and
>> frees with atomic_add/sub. After a while, I see this slab cache consume most
>> of the system's memory. After halting the code execution, the allocation and
>> free counters are equal, so I am sure the module has released all of its
>> memory. Yet, according to /proc/slabinfo, the usage of this slab cache
>> remains very high, although stable.
> Please post the code.

When the module is loaded, we create the dedicated mempool. The code flow is
as follows:

mem_pool_create()
{
        slab_cache = kmem_cache_create(name, item_size, 0, 0, NULL);

        /* min_pool_size is 1024 */
        pool = mempool_create(min_pool_size, mempool_alloc_slab,
                              mempool_free_slab, slab_cache);

        atomic_set(&pool->statistics, 0);
}

We allocate memory from the mempool. The code flow is as follows:

mem_alloc()
{
        obj = mempool_alloc(pool, gfp_flags);

        atomic_inc(&pool->statistics);
}

We release memory back to the mempool. The code flow is as follows:

mem_free()
{
        mempool_free(object_ptr, pool);

        atomic_dec(&pool->statistics);
}

When we unregister the module, the memory that was taken up goes back to the
system. The code flow is as follows:

mem_pool_destroy()
{
        mempool_destroy(pool);
        kmem_cache_destroy(slab_cache);
}

From the above, I would expect this kmem_cache not to take up an excessive
amount of memory once execution is halted and pool->statistics is 0.
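
(For scale: as far as I understand, mempool_free() only hands an element back
to kmem_cache_free() once the pool already holds its min_pool_size reserved
elements, so the pool itself should pin at most roughly 1024 * 24576 bytes,
about 24 MB of 24k objects, which is far less than the usage reported in
/proc/slabinfo.)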

I have no idea about the issue. 

Thanks
zhong jiang





* Re: [Question] A novel case happened when using mempool allocate memory.
From: Matthew Wilcox @ 2018-08-02 13:31 UTC (permalink / raw)
  To: zhong jiang
  Cc: Michal Hocko, Johannes Weiner, mgorman, Joonsoo Kim,
	Laura Abbott, Hugh Dickins, Oleg Nesterov,
	Linux Memory Management List, LKML

On Thu, Aug 02, 2018 at 02:22:03PM +0800, zhong jiang wrote:
> On 2018/8/1 23:37, Matthew Wilcox wrote:
> > Please post the code.
> 
> When the module is loaded, we create the dedicated mempool. The code flow is
> as follows:

I actually meant "post the code you are testing", not "write out some
pseudocode".


* Re: [Question] A novel case happened when using mempool allocate memory.
From: zhong jiang @ 2018-08-02 14:17 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Michal Hocko, Johannes Weiner, mgorman, Joonsoo Kim,
	Laura Abbott, Hugh Dickins, Oleg Nesterov,
	Linux Memory Management List, LKML

On 2018/8/2 21:31, Matthew Wilcox wrote:
> On Thu, Aug 02, 2018 at 02:22:03PM +0800, zhong jiang wrote:
>> On 2018/8/1 23:37, Matthew Wilcox wrote:
>>> Please post the code.
>> When the module is loaded, we create the dedicated mempool. The code flow
>> is as follows:
> I actually meant "post the code you are testing", not "write out some
> pseudocode".
The source code of the mempool utility is as follows.

/* Kernel headers used below; the smio_* / dsw_* types and the SMIO_* macros
 * come from internal headers that are not shown here. */
#include <linux/slab.h>
#include <linux/mempool.h>
#include <linux/module.h>
#include <linux/atomic.h>
#include <linux/string.h>

/**
 * @brief The order of the elements matches the definition order of
 *        @smio_mem_type_t
 */
static smio_mem_mng_t g_smio_mem[] =
{
        {
                .name = "MEDIA_INFO",
                .min_pool_size = 128,
                .item_size = sizeof(smio_media_info_t),
                .slab_cache = NULL,
        },
        {
                .name = "DSW_IO_REQ",
                .min_pool_size = 1024,
                .item_size = sizeof(dsw_io_req_t),
                .slab_cache = NULL,
        },
        {
                .name = "DSW_IO_PAGE",
                .min_pool_size = 1024,
                .item_size = sizeof(dsw_page_t) * DSW_MAX_PAGE_PER_REQ,
                .slab_cache = NULL,
        },
        {
                .name = "32_ARRAY",
                .min_pool_size = 1024,
                .item_size = sizeof(void *) * 32,
                .slab_cache = NULL,
        },
        {
                .name = "SCSI_SENSE_BUF",
                .min_pool_size = 1024,
                .item_size = sizeof(char) * SCSI_SENSE_BUFFERSIZE,
                .slab_cache = NULL,
        },
};

/**
 * @brief Allocate a memory block of the given type
 *
 * @param id   module ID of the caller
 * @param type type of memory to allocate
 *
 * @return address of the memory block on success; NULL on failure
 */
void *smio_mem_alloc(smio_module_id_t id, smio_mem_type_t type)
{
        void *m = NULL;
        smio_mem_mng_t *pool_mng = NULL;

        SMIO_ASSERT_RETURN(id < SMIO_MOD_ID_BUTT, NULL);
        SMIO_ASSERT_RETURN(type < SMIO_MEM_TYPE_BUTT, NULL);

        pool_mng = &g_smio_mem[type];

        SMIO_LOG_DEBUG("alloc %s, size: %d\n", pool_mng->name, pool_mng->item_size);

        m = mempool_alloc(pool_mng->pool, GFP_KERNEL);
        if (NULL == m)
        {
                return NULL;
        }

        memset(m, 0, pool_mng->item_size);

        atomic_inc(&pool_mng->statistics[id]);

        return m;
}
EXPORT_SYMBOL(smio_mem_alloc);


/**
 * @brief Free a memory block
 *
 * @param id   module ID of the caller
 * @param type type of the memory block
 * @param m    address of the memory block
 */
void smio_mem_free(smio_module_id_t id, smio_mem_type_t type, void *m)
{
        smio_mem_mng_t *pool_mng = NULL;

        SMIO_ASSERT(NULL != m);
        SMIO_ASSERT(id < SMIO_MOD_ID_BUTT);
        SMIO_ASSERT(type < SMIO_MEM_TYPE_BUTT);

        pool_mng = &g_smio_mem[type];

        mempool_free(m, pool_mng->pool);

        atomic_dec(&pool_mng->statistics[id]);
}
EXPORT_SYMBOL(smio_mem_free);


/**
 * @brief Create a managed memory pool
 *
 * @param pool_mng memory type management structure
 *
 * @return @SMIO_OK on success; @SMIO_ERR on failure
 */
static int smio_mem_pool_create(smio_mem_mng_t *pool_mng)
{
        int i;

        SMIO_ASSERT_RETURN(NULL != pool_mng, SMIO_ERR);

        pool_mng->slab_cache = kmem_cache_create(pool_mng->name,
                                                 pool_mng->item_size, 0, 0, NULL);
        if (SMIO_IS_ERR_OR_NULL(pool_mng->slab_cache))
        {
                SMIO_LOG_ERR("kmem_cache_create for %s failed\n", pool_mng->name);
                return SMIO_ERR;
        }

        pool_mng->pool = mempool_create(pool_mng->min_pool_size, mempool_alloc_slab,
                                        mempool_free_slab, pool_mng->slab_cache);
        if (NULL == pool_mng->pool)
        {
                SMIO_LOG_ERR("pool create for %s failed\n", pool_mng->name);
                kmem_cache_destroy(pool_mng->slab_cache);
                return SMIO_ERR;
        }

        for (i = 0; i < SMIO_MOD_ID_BUTT; i++)
        {
                atomic_set(&pool_mng->statistics[i], 0);
        }

        return SMIO_OK;
}


/**
 * @brief Destroy a memory pool
 *
 * @param pool_mng the memory pool to destroy
 */
void smio_mem_pool_destroy(smio_mem_mng_t *pool_mng)
{
        SMIO_ASSERT(NULL != pool_mng);

        if (NULL != pool_mng->pool)
        {
                mempool_destroy(pool_mng->pool);
                pool_mng->pool = NULL;
        }

        if (NULL != pool_mng->slab_cache)
        {
                kmem_cache_destroy(pool_mng->slab_cache);
                pool_mng->slab_cache = NULL;
        }
}


/**
 * @brief Initialize the memory management unit
 *
 * @return @SMIO_OK on success; @SMIO_ERR on failure
 */
int smio_mem_init(void)
{
        int i;
        int pool_num = (int) SMIO_ARRAY_SIZE(g_smio_mem);
        int ret = SMIO_OK;
        bool free = SMIO_FALSE;

        for (i = 0; i < pool_num; i++)
        {
                SMIO_LOG_INFO("memory of %s initialize, min_pool_size: %d, item_size: %d\n",
                              g_smio_mem[i].name, g_smio_mem[i].min_pool_size,
                              g_smio_mem[i].item_size);
                if (SMIO_OK != smio_mem_pool_create(&g_smio_mem[i]))
                {
                        SMIO_LOG_ERR("memory of %s initialize failed\n", g_smio_mem[i].name);
                        ret = SMIO_ERR;
                        free = SMIO_TRUE;
                        break;
                }
        }

        /* clean up if smio_mem_pool_create failed */
        while ((SMIO_TRUE == free) && (--i >= 0))
        {
                smio_mem_pool_destroy(&g_smio_mem[i]);
        }

        return ret;
}


/**
 * @brief Clean up the memory management module on exit
 */
void smio_mem_exit(void)
{
        int i;
        int pool_num = (int) SMIO_ARRAY_SIZE(g_smio_mem);

        for (i = 0; i < pool_num; i++)
        {
                smio_mem_pool_destroy(&g_smio_mem[i]);
        }
}
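
For completeness, the module entry points simply wrap the init/exit routines
above; roughly like this (the function names here are illustrative, not the
exact ones in our module):

static int __init smio_module_init(void)
{
        /* create all mempools/caches described in g_smio_mem */
        return (smio_mem_init() == SMIO_OK) ? 0 : -ENOMEM;
}

static void __exit smio_module_exit(void)
{
        /* destroy all mempools/caches; this is when the slab memory comes back */
        smio_mem_exit();
}

module_init(smio_module_init);
module_exit(smio_module_exit);
MODULE_LICENSE("GPL");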




