* [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
@ 2016-05-05 12:42 Zhou Chengming
  2016-05-05 21:07 ` Andrew Morton
  2016-05-05 21:57 ` Andrea Arcangeli
  0 siblings, 2 replies; 8+ messages in thread
From: Zhou Chengming @ 2016-05-05 12:42 UTC (permalink / raw)
  To: akpm, hughd, aarcange, kirill.shutemov, vbabka, geliangtang, minchan
  Cc: linux-mm, linux-kernel, guohanjun, dingtianhong, huawei.libin,
	thunder.leizhen, qiuxishi, zhouchengming1

There is a concurrency issue in KSM, in the function scan_get_next_rmap_item().

task A (ksmd):				|task B (the mm's task):
					|
mm = slot->mm;				|
down_read(&mm->mmap_sem);		|
					|
...					|
					|
spin_lock(&ksm_mmlist_lock);		|
					|
ksm_scan.mm_slot go to the next slot;	|
					|
spin_unlock(&ksm_mmlist_lock);		|
					|mmput() ->
					|	ksm_exit():
					|
					|spin_lock(&ksm_mmlist_lock);
					|if (mm_slot && ksm_scan.mm_slot != mm_slot) {
					|	if (!mm_slot->rmap_list) {
					|		easy_to_free = 1;
					|		...
					|
					|if (easy_to_free) {
					|	mmdrop(mm);
					|	...
					|
					|So this mm_struct will be freed successfully.
					|
up_read(&mm->mmap_sem);			|

As we can see above, the ksmd thread may access a mm_struct that has
already been freed back to the kmem_cache.
Suppose a fork then gets this mm_struct from the kmem_cache; when the
ksmd thread calls up_read(&mm->mmap_sem), it will cause mmap_sem.count
to become -1.
I changed the scan_get_next_rmap_item function, referring to the
khugepaged scan function.
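
With this patch applied, the tail of scan_get_next_rmap_item() would
read roughly as follows (context reconstructed around the diff below;
ksm_scan.address == 0 means the mm is exiting or had no VM_MERGEABLE
vmas):

	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
	up_read(&mm->mmap_sem);

	spin_lock(&ksm_mmlist_lock);
	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
						struct mm_slot, mm_list);
	if (ksm_scan.address == 0) {
		/* remove this mm from all our lists, as __ksm_exit does */
		hash_del(&slot->link);
		list_del(&slot->mm_list);

		free_mm_slot(slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		mmdrop(mm);
	}
	spin_unlock(&ksm_mmlist_lock);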

Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
 mm/ksm.c |    7 ++-----
 1 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 7ee101e..6e4324d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1650,6 +1650,7 @@ next_mm:
 	 * because there were no VM_MERGEABLE vmas with such addresses.
 	 */
 	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
+	up_read(&mm->mmap_sem);
 
 	spin_lock(&ksm_mmlist_lock);
 	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
@@ -1666,16 +1667,12 @@ next_mm:
 		 */
 		hash_del(&slot->link);
 		list_del(&slot->mm_list);
-		spin_unlock(&ksm_mmlist_lock);
 
 		free_mm_slot(slot);
 		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
-		up_read(&mm->mmap_sem);
 		mmdrop(mm);
-	} else {
-		spin_unlock(&ksm_mmlist_lock);
-		up_read(&mm->mmap_sem);
 	}
+	spin_unlock(&ksm_mmlist_lock);
 
 	/* Repeat until we've completed scanning the whole list */
 	slot = ksm_scan.mm_slot;
-- 
1.7.7


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-05 12:42 [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item Zhou Chengming
@ 2016-05-05 21:07 ` Andrew Morton
  2016-05-06  2:50   ` zhouchengming
  2016-05-05 21:57 ` Andrea Arcangeli
  1 sibling, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2016-05-05 21:07 UTC (permalink / raw)
  To: Zhou Chengming
  Cc: hughd, aarcange, kirill.shutemov, vbabka, geliangtang, minchan,
	linux-mm, linux-kernel, guohanjun, dingtianhong, huawei.libin,
	thunder.leizhen, qiuxishi

On Thu, 5 May 2016 20:42:56 +0800 Zhou Chengming <zhouchengming1@huawei.com> wrote:

> There is a concurrency issue in KSM, in the function scan_get_next_rmap_item().
> 
> task A (ksmd):				|task B (the mm's task):
> 					|
> mm = slot->mm;				|
> down_read(&mm->mmap_sem);		|
> 					|
> ...					|
> 					|
> spin_lock(&ksm_mmlist_lock);		|
> 					|
> ksm_scan.mm_slot go to the next slot;	|
> 					|
> spin_unlock(&ksm_mmlist_lock);		|
> 					|mmput() ->
> 					|	ksm_exit():
> 					|
> 					|spin_lock(&ksm_mmlist_lock);
> 					|if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> 					|	if (!mm_slot->rmap_list) {
> 					|		easy_to_free = 1;
> 					|		...
> 					|
> 					|if (easy_to_free) {
> 					|	mmdrop(mm);
> 					|	...
> 					|
> 					|So this mm_struct will be freed successfully.
> 					|
> up_read(&mm->mmap_sem);			|
> 
> As we can see above, the ksmd thread may access a mm_struct that has
> already been freed back to the kmem_cache.
> Suppose a fork then gets this mm_struct from the kmem_cache; when the
> ksmd thread calls up_read(&mm->mmap_sem), it will cause mmap_sem.count
> to become -1.
> I changed the scan_get_next_rmap_item function, referring to the
> khugepaged scan function.

Thanks.

We need to decide whether this fix should be backported into earlier
(-stable) kernels.  Can you tell us how easily this is triggered and
share your thoughts on this?


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-05 12:42 [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item Zhou Chengming
  2016-05-05 21:07 ` Andrew Morton
@ 2016-05-05 21:57 ` Andrea Arcangeli
  2016-05-06  2:54   ` Ding Tianhong
  2016-05-06  3:07   ` zhouchengming
  1 sibling, 2 replies; 8+ messages in thread
From: Andrea Arcangeli @ 2016-05-05 21:57 UTC (permalink / raw)
  To: Zhou Chengming
  Cc: akpm, hughd, kirill.shutemov, vbabka, geliangtang, minchan,
	linux-mm, linux-kernel, guohanjun, dingtianhong, huawei.libin,
	thunder.leizhen, qiuxishi

Hello Zhou,

Great catch.

On Thu, May 05, 2016 at 08:42:56PM +0800, Zhou Chengming wrote:
>  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
> +	up_read(&mm->mmap_sem);
>  
>  	spin_lock(&ksm_mmlist_lock);
>  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
> @@ -1666,16 +1667,12 @@ next_mm:
>  		 */
>  		hash_del(&slot->link);
>  		list_del(&slot->mm_list);
> -		spin_unlock(&ksm_mmlist_lock);
>  
>  		free_mm_slot(slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> -		up_read(&mm->mmap_sem);
>  		mmdrop(mm);

I thought the mmap_sem for reading prevented a race of the above
clear_bit against a concurrent madvise(MADV_MERGEABLE) which takes the
mmap_sem for writing. After this change can't __ksm_enter run
concurrently with the clear_bit above introducing a different SMP race
condition?
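
To make the concern concrete, something like this interleaving seems
possible after the change (hand-written and hypothetical, in the case
where the mm is still alive and we garbage collect the mm_slot only
because no VM_MERGEABLE vma was left):

task A (ksmd):				|task B (the mm's task):
					|
up_read(&mm->mmap_sem);			|
					|down_write(&mm->mmap_sem);
					|madvise(MADV_MERGEABLE) ->
					|	ksm_madvise():
					|	MMF_VM_MERGEABLE still set,
					|	so __ksm_enter() is skipped;
					|up_write(&mm->mmap_sem);
clear_bit(MMF_VM_MERGEABLE, &mm->flags);|
					|
The mm would end up with VM_MERGEABLE vmas but with MMF_VM_MERGEABLE
clear and no mm_slot, so ksmd would never scan those vmas.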

> -	} else {
> -		spin_unlock(&ksm_mmlist_lock);
> -		up_read(&mm->mmap_sem);

The strict obviously safe fix is just to invert the above two,
up_read; spin_unlock.

Then I found another instance of this same SMP race condition in
unmerge_and_remove_all_rmap_items() that you didn't fix.

Actually for the other instance of the bug the implementation above
that releases the mmap_sem early sounds safe, because there it's
ksm_test_exit that takes the clear_bit path, not just the fact that we
didn't find a vma with VM_MERGEABLE set and we garbage collect the
mm_slot while the "mm" may still be alive. In that other case the "mm"
isn't alive anymore, so the race with MADV_MERGEABLE shouldn't be
possible to materialize.

Could you fix it by just inverting the up_read/spin_unlock order, in
the place you patched, and add this comment:

	} else {
		/*
		 * up_read(&mm->mmap_sem) first because after
		 * spin_unlock(&ksm_mmlist_lock) run, the "mm" may
		 * already have been freed under us by __ksm_exit()
		 * because the "mm_slot" is still hashed and
		 * ksm_scan.mm_slot doesn't point to it anymore.
		 */
		up_read(&mm->mmap_sem);
		spin_unlock(&ksm_mmlist_lock);
	}

And in unmerge_and_remove_all_rmap_items() same thing, except there
you can apply your up_read() early and you can just drop the "else"
clause.
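
I.e. something like this (an untested sketch from memory, double-check
the locals against the actual unmerge_and_remove_all_rmap_items()):

	remove_trailing_rmap_items(mm_slot, &mm_slot->rmap_list);
	up_read(&mm->mmap_sem);

	spin_lock(&ksm_mmlist_lock);
	ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
					struct mm_slot, mm_list);
	if (ksm_test_exit(mm)) {
		/* mm is exiting: safe to clear_bit without the mmap_sem */
		hash_del(&mm_slot->link);
		list_del(&mm_slot->mm_list);
		spin_unlock(&ksm_mmlist_lock);

		free_mm_slot(mm_slot);
		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
		mmdrop(mm);
	} else
		spin_unlock(&ksm_mmlist_lock);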


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-05 21:07 ` Andrew Morton
@ 2016-05-06  2:50   ` zhouchengming
  2016-05-07  4:04     ` Hugh Dickins
  0 siblings, 1 reply; 8+ messages in thread
From: zhouchengming @ 2016-05-06  2:50 UTC (permalink / raw)
  To: Andrew Morton
  Cc: hughd, aarcange, kirill.shutemov, vbabka, geliangtang, minchan,
	linux-mm, linux-kernel, guohanjun, dingtianhong, huawei.libin,
	thunder.leizhen, qiuxishi

On 2016/5/6 5:07, Andrew Morton wrote:
> On Thu, 5 May 2016 20:42:56 +0800 Zhou Chengming <zhouchengming1@huawei.com> wrote:
>
>> There is a concurrency issue in KSM, in the function scan_get_next_rmap_item().
>>
>> task A (ksmd):				|task B (the mm's task):
>> 					|
>> mm = slot->mm;				|
>> down_read(&mm->mmap_sem);		|
>> 					|
>> ...					|
>> 					|
>> spin_lock(&ksm_mmlist_lock);		|
>> 					|
>> ksm_scan.mm_slot go to the next slot;	|
>> 					|
>> spin_unlock(&ksm_mmlist_lock);		|
>> 					|mmput() ->
>> 					|	ksm_exit():
>> 					|
>> 					|spin_lock(&ksm_mmlist_lock);
>> 					|if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>> 					|	if (!mm_slot->rmap_list) {
>> 					|		easy_to_free = 1;
>> 					|		...
>> 					|
>> 					|if (easy_to_free) {
>> 					|	mmdrop(mm);
>> 					|	...
>> 					|
>> 					|So this mm_struct will be freed successfully.
>> 					|
>> up_read(&mm->mmap_sem);			|
>>
>> As we can see above, the ksmd thread may access a mm_struct that has
>> already been freed back to the kmem_cache.
>> Suppose a fork then gets this mm_struct from the kmem_cache; when the
>> ksmd thread calls up_read(&mm->mmap_sem), it will cause mmap_sem.count
>> to become -1.
>> I changed the scan_get_next_rmap_item function, referring to the
>> khugepaged scan function.
>
> Thanks.
>
> We need to decide whether this fix should be backported into earlier
> (-stable) kernels.  Can you tell us how easily this is triggered and
> share your thoughts on this?
>
>
> .
>

I wrote a patch that can easily trigger this bug.
When ksmd goes to sleep, if a fork gets this mm_struct, the BUG_ON
will be triggered.

From eedfdd12eb11858f69ff4a4300acad42946ca260 Mon Sep 17 00:00:00 2001
From: Zhou Chengming <zhouchengming1@huawei.com>
Date: Thu, 5 May 2016 17:49:22 +0800
Subject: [PATCH] ksm: trigger a bug

Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
---
  mm/ksm.c |   17 +++++++++++++++++
  1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ca6d2a0..676368c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1519,6 +1519,18 @@ static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
  	return rmap_item;
  }

+static void trigger_a_bug(struct task_struct *p, struct mm_struct *mm)
+{
+	/* send KILL sig to the task, hope the mm_struct will be freed */
+	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
+	/* sleep for 5s, the mm_struct will be freed and another fork
+	 * will use this mm_struct
+	 */
+	schedule_timeout(msecs_to_jiffies(5000));
+	/* the mm_struct owned by another task */
+	BUG_ON(mm->owner != p);
+}
+
  static struct rmap_item *scan_get_next_rmap_item(struct page **page)
  {
  	struct mm_struct *mm;
@@ -1526,6 +1538,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
  	struct vm_area_struct *vma;
  	struct rmap_item *rmap_item;
  	int nid;
+	struct task_struct *taskp;

  	if (list_empty(&ksm_mm_head.mm_list))
  		return NULL;
@@ -1636,6 +1649,8 @@ next_mm:
  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);

  	spin_lock(&ksm_mmlist_lock);
+	/* get the mm's task now in the ksm_mmlist_lock */
+	taskp = mm->owner;
  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
  						struct mm_slot, mm_list);
  	if (ksm_scan.address == 0) {
@@ -1651,6 +1666,7 @@ next_mm:
  		hash_del(&slot->link);
  		list_del(&slot->mm_list);
  		spin_unlock(&ksm_mmlist_lock);
+		trigger_a_bug(taskp, mm);

  		free_mm_slot(slot);
  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
@@ -1658,6 +1674,7 @@ next_mm:
  		mmdrop(mm);
  	} else {
  		spin_unlock(&ksm_mmlist_lock);
+		trigger_a_bug(taskp, mm);
  		up_read(&mm->mmap_sem);
  	}

-- 
1.7.7


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-05 21:57 ` Andrea Arcangeli
@ 2016-05-06  2:54   ` Ding Tianhong
  2016-05-06  3:07   ` zhouchengming
  1 sibling, 0 replies; 8+ messages in thread
From: Ding Tianhong @ 2016-05-06  2:54 UTC (permalink / raw)
  To: Andrea Arcangeli, Zhou Chengming
  Cc: akpm, hughd, kirill.shutemov, vbabka, geliangtang, minchan,
	linux-mm, linux-kernel, guohanjun, huawei.libin, thunder.leizhen,
	qiuxishi

Good catch.

The original code looks quite old. Using the ksm_mmlist_lock to protect
the mm_list looks like it will affect performance.
Should we use RCU to protect the list, and not free the mm until we are
out of the RCU critical section?


On 2016/5/6 5:57, Andrea Arcangeli wrote:
> Hello Zhou,
> 
> Great catch.
> 
> On Thu, May 05, 2016 at 08:42:56PM +0800, Zhou Chengming wrote:
>>  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
>> +	up_read(&mm->mmap_sem);
>>  
>>  	spin_lock(&ksm_mmlist_lock);
>>  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
>> @@ -1666,16 +1667,12 @@ next_mm:
>>  		 */
>>  		hash_del(&slot->link);
>>  		list_del(&slot->mm_list);
>> -		spin_unlock(&ksm_mmlist_lock);
>>  
>>  		free_mm_slot(slot);
>>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> -		up_read(&mm->mmap_sem);
>>  		mmdrop(mm);
> 
> I thought the mmap_sem for reading prevented a race of the above
> clear_bit against a concurrent madvise(MADV_MERGEABLE) which takes the
> mmap_sem for writing. After this change can't __ksm_enter run
> concurrently with the clear_bit above introducing a different SMP race
> condition?
> 
>> -	} else {
>> -		spin_unlock(&ksm_mmlist_lock);
>> -		up_read(&mm->mmap_sem);
> 
> The strict obviously safe fix is just to invert the above two,
> up_read; spin_unlock.
> 
> Then I found another instance of this same SMP race condition in
> unmerge_and_remove_all_rmap_items() that you didn't fix.
> 
> Actually for the other instance of the bug the implementation above
> that releases the mmap_sem early sounds safe, because there it's
> ksm_test_exit that takes the clear_bit path, not just the fact that we
> didn't find a vma with VM_MERGEABLE set and we garbage collect the
> mm_slot while the "mm" may still be alive. In that other case the "mm"
> isn't alive anymore, so the race with MADV_MERGEABLE shouldn't be
> possible to materialize.
> 
> Could you fix it by just inverting the up_read/spin_unlock order, in
> the place you patched, and add this comment:
> 
> 	} else {
> 		/*
> 		 * up_read(&mm->mmap_sem) first because after
> 		 * spin_unlock(&ksm_mmlist_lock) run, the "mm" may
> 		 * already have been freed under us by __ksm_exit()
> 		 * because the "mm_slot" is still hashed and
> 		 * ksm_scan.mm_slot doesn't point to it anymore.
> 		 */
> 		up_read(&mm->mmap_sem);
> 		spin_unlock(&ksm_mmlist_lock);
> 	}
> 
> And in unmerge_and_remove_all_rmap_items() same thing, except there
> you can apply your up_read() early and you can just drop the "else"
> clause.
> 
> .
> 


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-05 21:57 ` Andrea Arcangeli
  2016-05-06  2:54   ` Ding Tianhong
@ 2016-05-06  3:07   ` zhouchengming
  1 sibling, 0 replies; 8+ messages in thread
From: zhouchengming @ 2016-05-06  3:07 UTC (permalink / raw)
  To: Andrea Arcangeli
  Cc: akpm, hughd, kirill.shutemov, vbabka, geliangtang, minchan,
	linux-mm, linux-kernel, guohanjun, dingtianhong, huawei.libin,
	thunder.leizhen, qiuxishi

On 2016/5/6 5:57, Andrea Arcangeli wrote:
> Hello Zhou,
>
> Great catch.
>
> On Thu, May 05, 2016 at 08:42:56PM +0800, Zhou Chengming wrote:
>>   	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
>> +	up_read(&mm->mmap_sem);
>>
>>   	spin_lock(&ksm_mmlist_lock);
>>   	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
>> @@ -1666,16 +1667,12 @@ next_mm:
>>   		 */
>>   		hash_del(&slot->link);
>>   		list_del(&slot->mm_list);
>> -		spin_unlock(&ksm_mmlist_lock);
>>
>>   		free_mm_slot(slot);
>>   		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> -		up_read(&mm->mmap_sem);
>>   		mmdrop(mm);
>
> I thought the mmap_sem for reading prevented a race of the above
> clear_bit against a concurrent madvise(MADV_MERGEABLE) which takes the
> mmap_sem for writing. After this change can't __ksm_enter run
> concurrently with the clear_bit above introducing a different SMP race
> condition?
>

Yes, I didn't notice this problem... Thanks.
>> -	} else {
>> -		spin_unlock(&ksm_mmlist_lock);
>> -		up_read(&mm->mmap_sem);
>
> The strict obviously safe fix is just to invert the above two,
> up_read; spin_unlock.
>
> Then I found another instance of this same SMP race condition in
> unmerge_and_remove_all_rmap_items() that you didn't fix.
>
> Actually for the other instance of the bug the implementation above
> that releases the mmap_sem early sounds safe, because there it's
> ksm_test_exit that takes the clear_bit path, not just the fact that we
> didn't find a vma with VM_MERGEABLE set and we garbage collect the
> mm_slot while the "mm" may still be alive. In that other case the "mm"
> isn't alive anymore, so the race with MADV_MERGEABLE shouldn't be
> possible to materialize.
>
> Could you fix it by just inverting the up_read/spin_unlock order, in
> the place you patched, and add this comment:
>
> 	} else {
> 		/*
> 		 * up_read(&mm->mmap_sem) first because after
> 		 * spin_unlock(&ksm_mmlist_lock) run, the "mm" may
> 		 * already have been freed under us by __ksm_exit()
> 		 * because the "mm_slot" is still hashed and
> 		 * ksm_scan.mm_slot doesn't point to it anymore.
> 		 */
> 		up_read(&mm->mmap_sem);
> 		spin_unlock(&ksm_mmlist_lock);
> 	}
>
> And in unmerge_and_remove_all_rmap_items() same thing, except there
> you can apply your up_read() early and you can just drop the "else"
> clause.
>
> .
>

Your change is better and the comment is good and clear.
So I will send a PATCH v2 based on your suggestion. Thanks.


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-06  2:50   ` zhouchengming
@ 2016-05-07  4:04     ` Hugh Dickins
  2016-05-08  6:46       ` zhouchengming
  0 siblings, 1 reply; 8+ messages in thread
From: Hugh Dickins @ 2016-05-07  4:04 UTC (permalink / raw)
  To: zhouchengming
  Cc: Andrew Morton, hughd, aarcange, kirill.shutemov, vbabka,
	geliangtang, minchan, linux-mm, linux-kernel, guohanjun,
	dingtianhong, huawei.libin, thunder.leizhen, qiuxishi

On Fri, 6 May 2016, zhouchengming wrote:
> On 2016/5/6 5:07, Andrew Morton wrote:
> > On Thu, 5 May 2016 20:42:56 +0800 Zhou Chengming <zhouchengming1@huawei.com> wrote:
> > 
> > > There is a concurrency issue in KSM, in the function scan_get_next_rmap_item().
> > > 
> > > task A (ksmd):				|task B (the mm's task):
> > > 					|
> > > mm = slot->mm;				|
> > > down_read(&mm->mmap_sem);		|
> > > 					|
> > > ...					|
> > > 					|
> > > spin_lock(&ksm_mmlist_lock);		|
> > > 					|
> > > ksm_scan.mm_slot go to the next slot;	|
> > > 					|
> > > spin_unlock(&ksm_mmlist_lock);		|
> > > 					|mmput() ->
> > > 					|	ksm_exit():
> > > 					|
> > > 					|spin_lock(&ksm_mmlist_lock);
> > > 					|if (mm_slot && ksm_scan.mm_slot != mm_slot) {
> > > 					|	if (!mm_slot->rmap_list) {
> > > 					|		easy_to_free = 1;
> > > 					|		...
> > > 					|
> > > 					|if (easy_to_free) {
> > > 					|	mmdrop(mm);
> > > 					|	...
> > > 					|
> > > 					|So this mm_struct will be freed successfully.

Good catch, yes.  Note that the mmdrop(mm) shown above is not the one that
frees the mm_struct: the whole address space has to be torn down before
we reach the mmdrop(mm) which actually frees the mm_struct.  But you're
right that there's no serialization against ksmd in that interval, so if
ksmd is rescheduled or interrupted for a long time, yes that mm_struct
might be freed by the time of its up_read() below.
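
(Roughly, from kernel/fork.c:

	mmput(mm):
		if (atomic_dec_and_test(&mm->mm_users)) {
			...
			ksm_exit(mm);	/* the easy_to_free path above */
			...
			exit_mmap(mm);	/* tear down the address space */
			...
			mmdrop(mm);
		}

	mmdrop(mm):
		if (atomic_dec_and_test(&mm->mm_count))
			__mmdrop(mm);	/* this frees the mm_struct */

so the mm_struct only goes back to the kmem_cache once mm_count also
reaches zero.)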

> > > 					|
> > > up_read(&mm->mmap_sem);			|
> > > 
> > > As we can see above, the ksmd thread may access a mm_struct that has
> > > already been freed back to the kmem_cache.
> > > Suppose a fork then gets this mm_struct from the kmem_cache; when the
> > > ksmd thread calls up_read(&mm->mmap_sem), it will cause mmap_sem.count
> > > to become -1.
> > > I changed the scan_get_next_rmap_item function, referring to the
> > > khugepaged scan function.
> > 
> > Thanks.
> > 
> > We need to decide whether this fix should be backported into earlier
> > (-stable) kernels.  Can you tell us how easily this is triggered and
> > share your thoughts on this?

Not easy to trigger at all, I think, and I've never seen it or heard
a report of it; but possible.  It can only happen when there are one or
more VM_MERGEABLE areas in the process, but they're all empty or swapped
out when it exits (the easy_to_free route which presents this problem is
only taken in that !mm_slot->rmap_list case - intended to minimize the
drag on quick processes which exit before ksmd even reaches them).

But if ksmd is preempted for a long time in between its spin_unlock
and its up_read, then yes it can happen.  Fix should go back to
2.6.32, I don't think there's been much change here since it went in.

> > 
> > 
> > .
> > 
> 
> I wrote a patch that can easily trigger this bug.
> When ksmd goes to sleep, if a fork gets this mm_struct, the BUG_ON
> will be triggered.

Please don't use the patch below to test the final version of your fix
(including latest suggestions from Andrea): mm->owner is updated even
before the final mmput() which calls ksm_exit(), so BUGging on a
change of mm->owner says nothing about how likely it would be to
up_read on a freed mm_struct.

Hugh

> 
> From eedfdd12eb11858f69ff4a4300acad42946ca260 Mon Sep 17 00:00:00 2001
> From: Zhou Chengming <zhouchengming1@huawei.com>
> Date: Thu, 5 May 2016 17:49:22 +0800
> Subject: [PATCH] ksm: trigger a bug
> 
> Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
> ---
>  mm/ksm.c |   17 +++++++++++++++++
>  1 files changed, 17 insertions(+), 0 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index ca6d2a0..676368c 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1519,6 +1519,18 @@ static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
>  	return rmap_item;
>  }
> 
> +static void trigger_a_bug(struct task_struct *p, struct mm_struct *mm)
> +{
> +	/* send KILL sig to the task, hope the mm_struct will be freed */
> +	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
> +	/* sleep for 5s, the mm_struct will be freed and another fork
> +	 * will use this mm_struct
> +	 */
> +	schedule_timeout(msecs_to_jiffies(5000));
> +	/* the mm_struct owned by another task */
> +	BUG_ON(mm->owner != p);
> +}
> +
>  static struct rmap_item *scan_get_next_rmap_item(struct page **page)
>  {
>  	struct mm_struct *mm;
> @@ -1526,6 +1538,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
>  	struct vm_area_struct *vma;
>  	struct rmap_item *rmap_item;
>  	int nid;
> +	struct task_struct *taskp;
> 
>  	if (list_empty(&ksm_mm_head.mm_list))
>  		return NULL;
> @@ -1636,6 +1649,8 @@ next_mm:
>  	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
> 
>  	spin_lock(&ksm_mmlist_lock);
> +	/* get the mm's task now in the ksm_mmlist_lock */
> +	taskp = mm->owner;
>  	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
>  						struct mm_slot, mm_list);
>  	if (ksm_scan.address == 0) {
> @@ -1651,6 +1666,7 @@ next_mm:
>  		hash_del(&slot->link);
>  		list_del(&slot->mm_list);
>  		spin_unlock(&ksm_mmlist_lock);
> +		trigger_a_bug(taskp, mm);
> 
>  		free_mm_slot(slot);
>  		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> @@ -1658,6 +1674,7 @@ next_mm:
>  		mmdrop(mm);
>  	} else {
>  		spin_unlock(&ksm_mmlist_lock);
> +		trigger_a_bug(taskp, mm);
>  		up_read(&mm->mmap_sem);
>  	}
> 
> -- 
> 1.7.7


* Re: [PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item
  2016-05-07  4:04     ` Hugh Dickins
@ 2016-05-08  6:46       ` zhouchengming
  0 siblings, 0 replies; 8+ messages in thread
From: zhouchengming @ 2016-05-08  6:46 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, aarcange, kirill.shutemov, vbabka, geliangtang,
	minchan, linux-mm, linux-kernel, guohanjun, dingtianhong,
	huawei.libin, thunder.leizhen, qiuxishi

On 2016/5/7 12:04, Hugh Dickins wrote:
> On Fri, 6 May 2016, zhouchengming wrote:
>> On 2016/5/6 5:07, Andrew Morton wrote:
>>> On Thu, 5 May 2016 20:42:56 +0800 Zhou Chengming <zhouchengming1@huawei.com> wrote:
>>>
>>>> There is a concurrency issue in KSM, in the function scan_get_next_rmap_item().
>>>>
>>>> task A (ksmd):				|task B (the mm's task):
>>>> 					|
>>>> mm = slot->mm;				|
>>>> down_read(&mm->mmap_sem);		|
>>>> 					|
>>>> ...					|
>>>> 					|
>>>> spin_lock(&ksm_mmlist_lock);		|
>>>> 					|
>>>> ksm_scan.mm_slot go to the next slot;	|
>>>> 					|
>>>> spin_unlock(&ksm_mmlist_lock);		|
>>>> 					|mmput() ->
>>>> 					|	ksm_exit():
>>>> 					|
>>>> 					|spin_lock(&ksm_mmlist_lock);
>>>> 					|if (mm_slot && ksm_scan.mm_slot != mm_slot) {
>>>> 					|	if (!mm_slot->rmap_list) {
>>>> 					|		easy_to_free = 1;
>>>> 					|		...
>>>> 					|
>>>> 					|if (easy_to_free) {
>>>> 					|	mmdrop(mm);
>>>> 					|	...
>>>> 					|
>>>> 					|So this mm_struct will be freed successfully.
>
> Good catch, yes.  Note that the mmdrop(mm) shown above is not the one that
> frees the mm_struct: the whole address space has to be torn down before
> we reach the mmdrop(mm) which actually frees the mm_struct.  But you're
> right that there's no serialization against ksmd in that interval, so if
> ksmd is rescheduled or interrupted for a long time, yes that mm_struct
> might be freed by the time of its up_read() below.
>

Yes, my description above is a little misleading. I will amend it. Thanks.
>>>> 					|
>>>> up_read(&mm->mmap_sem);			|
>>>>
>>>> As we can see above, the ksmd thread may access a mm_struct that has
>>>> already been freed back to the kmem_cache.
>>>> Suppose a fork then gets this mm_struct from the kmem_cache; when the
>>>> ksmd thread calls up_read(&mm->mmap_sem), it will cause mmap_sem.count
>>>> to become -1.
>>>> I changed the scan_get_next_rmap_item function, referring to the
>>>> khugepaged scan function.
>>>
>>> Thanks.
>>>
>>> We need to decide whether this fix should be backported into earlier
>>> (-stable) kernels.  Can you tell us how easily this is triggered and
>>> share your thoughts on this?
>
> Not easy to trigger at all, I think, and I've never seen it or heard
> a report of it; but possible.  It can only happen when there are one or
> more VM_MERGEABLE areas in the process, but they're all empty or swapped
> out when it exits (the easy_to_free route which presents this problem is
> only taken in that !mm_slot->rmap_list case - intended to minimize the
> drag on quick processes which exit before ksmd even reaches them).
>
> But if ksmd is preempted for a long time in between its spin_unlock
> and its up_read, then yes it can happen.  Fix should go back to
> 2.6.32, I don't think there's been much change here since it went in.
>
>>>
>>>
>>> .
>>>
>>
>> I wrote a patch that can easily trigger this bug.
>> When ksmd goes to sleep, if a fork gets this mm_struct, the BUG_ON
>> will be triggered.
>
> Please don't use the patch below to test the final version of your fix
> (including latest suggestions from Andrea): mm->owner is updated even
> before the final mmput() which calls ksm_exit(), so BUGging on a
> change of mm->owner says nothing about how likely it would be to
> up_read on a freed mm_struct.
>
> Hugh
>

Thanks, you are right. mm->owner may change before the final mmput()
which calls ksm_exit(). So I wonder if there is a way to check whether
the bug has happened?
>>
>> From eedfdd12eb11858f69ff4a4300acad42946ca260 Mon Sep 17 00:00:00 2001
>> From: Zhou Chengming<zhouchengming1@huawei.com>
>> Date: Thu, 5 May 2016 17:49:22 +0800
>> Subject: [PATCH] ksm: trigger a bug
>>
>> Signed-off-by: Zhou Chengming<zhouchengming1@huawei.com>
>> ---
>>   mm/ksm.c |   17 +++++++++++++++++
>>   1 files changed, 17 insertions(+), 0 deletions(-)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index ca6d2a0..676368c 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -1519,6 +1519,18 @@ static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
>>   	return rmap_item;
>>   }
>>
>> +static void trigger_a_bug(struct task_struct *p, struct mm_struct *mm)
>> +{
>> +	/* send KILL sig to the task, hope the mm_struct will be freed */
>> +	do_send_sig_info(SIGKILL, SEND_SIG_FORCED, p, true);
>> +	/* sleep for 5s, the mm_struct will be freed and another fork
>> +	 * will use this mm_struct
>> +	 */
>> +	schedule_timeout(msecs_to_jiffies(5000));
>> +	/* the mm_struct owned by another task */
>> +	BUG_ON(mm->owner != p);
>> +}
>> +
>>   static struct rmap_item *scan_get_next_rmap_item(struct page **page)
>>   {
>>   	struct mm_struct *mm;
>> @@ -1526,6 +1538,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
>>   	struct vm_area_struct *vma;
>>   	struct rmap_item *rmap_item;
>>   	int nid;
>> +	struct task_struct *taskp;
>>
>>   	if (list_empty(&ksm_mm_head.mm_list))
>>   		return NULL;
>> @@ -1636,6 +1649,8 @@ next_mm:
>>   	remove_trailing_rmap_items(slot, ksm_scan.rmap_list);
>>
>>   	spin_lock(&ksm_mmlist_lock);
>> +	/* get the mm's task now in the ksm_mmlist_lock */
>> +	taskp = mm->owner;
>>   	ksm_scan.mm_slot = list_entry(slot->mm_list.next,
>>   						struct mm_slot, mm_list);
>>   	if (ksm_scan.address == 0) {
>> @@ -1651,6 +1666,7 @@ next_mm:
>>   		hash_del(&slot->link);
>>   		list_del(&slot->mm_list);
>>   		spin_unlock(&ksm_mmlist_lock);
>> +		trigger_a_bug(taskp, mm);
>>
>>   		free_mm_slot(slot);
>>   		clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> @@ -1658,6 +1674,7 @@ next_mm:
>>   		mmdrop(mm);
>>   	} else {
>>   		spin_unlock(&ksm_mmlist_lock);
>> +		trigger_a_bug(taskp, mm);
>>   		up_read(&mm->mmap_sem);
>>   	}
>>
>> --
>> 1.7.7
>
> .
>
