linux-mips.vger.kernel.org archive mirror
* [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
@ 2017-03-05  3:24 Jiwei Sun
  2017-03-05  3:24 ` Jiwei Sun
  2017-03-05  9:38 ` Sergei Shtylyov
  0 siblings, 2 replies; 12+ messages in thread
From: Jiwei Sun @ 2017-03-05  3:24 UTC (permalink / raw)
  To: ralf, paul.burton, james.hogan; +Cc: linux-mips, linux-kernel, jiwei.sun.bj

If asid_cache(cpu) overflows, two tasks may end up with the same
asid. The risk is that those two different tasks may then share the
same address space.

A process updates its asid to the newer version only when switch_mm()
is called and the following condition is met:
    if ((cpu_context(cpu, next) ^ asid_cache(cpu))
                    & asid_version_mask(cpu))
            get_new_mmu_context(next, cpu);
If asid_cache(cpu) overflows, cpu_context(cpu, next) and asid_cache(cpu)
are reset to asid_first_version(cpu) and a new cycle starts. This can
result in two tasks in the process list having the same ASID.

For example, with CONFIG_CPU_MIPS32_R2, suppose task A's asid on CPU1
is 0x100, and A has been sleeping, not scheduled, for a long time.
Meanwhile a running task B's asid on CPU1 is 0xffffffff, and the asid
cached on CPU1 is 0xffffffff too. Then a task C is forked; when CPU1
schedules from B to C, asid_cache(cpu) overflows, so C's asid on CPU1
becomes 0x100 according to get_new_mmu_context(). A's asid is now the
same as C's; if A is rescheduled on CPU1, its asid is not renewed by
the 'if' clause above, the local TLB entries are not flushed either,
and A's address space becomes the same as C's.

If asid_cache(cpu) overflows, set every user-space task's asid on this
CPU to an invalid value (such as 0); this avoids the risk.

Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com>
---
 arch/mips/include/asm/mmu_context.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index ddd57ad..1f60efc 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -108,8 +108,15 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 #else
 		local_flush_tlb_all();	/* start new asid cycle */
 #endif
-		if (!asid)		/* fix version if needed */
+		if (!asid) {		/* fix version if needed */
+			struct task_struct *p;
+
+			for_each_process(p) {
+				if ((p->mm))
+					cpu_context(cpu, p->mm) = 0;
+			}
 			asid = asid_first_version(cpu);
+		}
 	}
 
 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
  2017-03-05  3:24 [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows Jiwei Sun
  2017-03-05  3:24 ` Jiwei Sun
@ 2017-03-05  9:38 ` Sergei Shtylyov
  2017-03-06  7:21   ` jsun4
  1 sibling, 1 reply; 12+ messages in thread
From: Sergei Shtylyov @ 2017-03-05  9:38 UTC (permalink / raw)
  To: Jiwei Sun, ralf, paul.burton, james.hogan
  Cc: linux-mips, linux-kernel, jiwei.sun.bj

Hello!

On 3/5/2017 6:24 AM, Jiwei Sun wrote:

> If asid_cache(cpu) overflows, there may be two tasks with the same
> asid. It is a risk that the two different tasks may have the same
> address space.
>
> A process will update its asid to newer version only when switch_mm()
> is called and matches the following condition:
>     if ((cpu_context(cpu, next) ^ asid_cache(cpu))
>                     & asid_version_mask(cpu))
>             get_new_mmu_context(next, cpu);
> If asid_cache(cpu) overflows, cpu_context(cpu,next) and asid_cache(cpu)
> will be reset to asid_first_version(cpu), and start a new cycle. It
> can result in two tasks that have the same ASID in the process list.
>
> For example, in CONFIG_CPU_MIPS32_R2, task named A's asid on CPU1 is
> 0x100, and has been sleeping and been not scheduled. After a long period
> of time, another running task named B's asid on CPU1 is 0xffffffff, and
> asid cached in the CPU1 is 0xffffffff too, next task named C is forked,
> when schedule from B to C on CPU1, asid_cache(cpu) will overflow, so C's
> asid on CPU1 will be 0x100 according to get_new_mmu_context(). A's asid
> is the same as C, if now A is rescheduled on CPU1, A's asid is not able
> to renew according to 'if' clause, and the local TLB entry can't be
> flushed too, A's address space will be the same as C.
>
> If asid_cache(cpu) overflows, all of user space task's asid on this CPU
> are able to set a invalid value (such as 0), it will avoid the risk.
>
> Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com>
> ---
>  arch/mips/include/asm/mmu_context.h | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
> index ddd57ad..1f60efc 100644
> --- a/arch/mips/include/asm/mmu_context.h
> +++ b/arch/mips/include/asm/mmu_context.h
> @@ -108,8 +108,15 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>  #else
>  		local_flush_tlb_all();	/* start new asid cycle */
>  #endif
> -		if (!asid)		/* fix version if needed */
> +		if (!asid) {		/* fix version if needed */
> +			struct task_struct *p;
> +
> +			for_each_process(p) {
> +				if ((p->mm))

    Why double parens?

> +					cpu_context(cpu, p->mm) = 0;
> +			}
>  			asid = asid_first_version(cpu);
> +		}
>  	}
>
>  	cpu_context(cpu, mm) = asid_cache(cpu) = asid;

MBR, Sergei


* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
  2017-03-05  9:38 ` Sergei Shtylyov
@ 2017-03-06  7:21   ` jsun4
  2017-03-06  7:21     ` jsun4
  2017-03-06  8:34     ` Sergei Shtylyov
  0 siblings, 2 replies; 12+ messages in thread
From: jsun4 @ 2017-03-06  7:21 UTC (permalink / raw)
  To: Sergei Shtylyov, ralf, paul.burton, james.hogan
  Cc: linux-mips, linux-kernel, jiwei.sun.bj

Hello Sergei,

Thanks for your reply.

On 03/05/2017 05:38 PM, Sergei Shtylyov wrote:
> Hello!
> 
> On 3/5/2017 6:24 AM, Jiwei Sun wrote:
> 
>> If asid_cache(cpu) overflows, there may be two tasks with the same
>> asid. It is a risk that the two different tasks may have the same
>> address space.
>>
>> A process will update its asid to newer version only when switch_mm()
>> is called and matches the following condition:
>>     if ((cpu_context(cpu, next) ^ asid_cache(cpu))
>>                     & asid_version_mask(cpu))
>>             get_new_mmu_context(next, cpu);
>> If asid_cache(cpu) overflows, cpu_context(cpu,next) and asid_cache(cpu)
>> will be reset to asid_first_version(cpu), and start a new cycle. It
>> can result in two tasks that have the same ASID in the process list.
>>
>> For example, in CONFIG_CPU_MIPS32_R2, task named A's asid on CPU1 is
>> 0x100, and has been sleeping and been not scheduled. After a long period
>> of time, another running task named B's asid on CPU1 is 0xffffffff, and
>> asid cached in the CPU1 is 0xffffffff too, next task named C is forked,
>> when schedule from B to C on CPU1, asid_cache(cpu) will overflow, so C's
>> asid on CPU1 will be 0x100 according to get_new_mmu_context(). A's asid
>> is the same as C, if now A is rescheduled on CPU1, A's asid is not able
>> to renew according to 'if' clause, and the local TLB entry can't be
>> flushed too, A's address space will be the same as C.
>>
>> If asid_cache(cpu) overflows, all of user space task's asid on this CPU
>> are able to set a invalid value (such as 0), it will avoid the risk.
>>
>> Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com>
>> ---
>>  arch/mips/include/asm/mmu_context.h | 9 ++++++++-
>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
>> index ddd57ad..1f60efc 100644
>> --- a/arch/mips/include/asm/mmu_context.h
>> +++ b/arch/mips/include/asm/mmu_context.h
>> @@ -108,8 +108,15 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>>  #else
>>          local_flush_tlb_all();    /* start new asid cycle */
>>  #endif
>> -        if (!asid)        /* fix version if needed */
>> +        if (!asid) {        /* fix version if needed */
>> +            struct task_struct *p;
>> +
>> +            for_each_process(p) {
>> +                if ((p->mm))
> 
>    Why double parens?

At the beginning, the code was written as follows:
	if ((p->mm) && (p->mm != mm))
		cpu_context(cpu, p->mm) = 0;

Because cpu_context(cpu, mm) will be changed to asid_first_version(cpu) after the 'for' loop,
and in order to improve the efficiency of the loop I deleted "&& (p->mm != mm)",
but I forgot to delete the redundant parentheses.

Thanks,
Best regards,
Jiwei

> 
>> +                    cpu_context(cpu, p->mm) = 0;
>> +            }
>>              asid = asid_first_version(cpu);
>> +        }
>>      }
>>
>>      cpu_context(cpu, mm) = asid_cache(cpu) = asid;
> 
> MBR, Sergei
> 


* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
  2017-03-06  7:21   ` jsun4
  2017-03-06  7:21     ` jsun4
@ 2017-03-06  8:34     ` Sergei Shtylyov
  2017-03-07 12:06       ` Jiwei Sun
  1 sibling, 1 reply; 12+ messages in thread
From: Sergei Shtylyov @ 2017-03-06  8:34 UTC (permalink / raw)
  To: jsun4, ralf, paul.burton, james.hogan
  Cc: linux-mips, linux-kernel, jiwei.sun.bj

On 3/6/2017 10:21 AM, jsun4 wrote:

>>> If asid_cache(cpu) overflows, there may be two tasks with the same
>>> asid. It is a risk that the two different tasks may have the same
>>> address space.
>>>
>>> A process will update its asid to newer version only when switch_mm()
>>> is called and matches the following condition:
>>>     if ((cpu_context(cpu, next) ^ asid_cache(cpu))
>>>                     & asid_version_mask(cpu))
>>>             get_new_mmu_context(next, cpu);
>>> If asid_cache(cpu) overflows, cpu_context(cpu,next) and asid_cache(cpu)
>>> will be reset to asid_first_version(cpu), and start a new cycle. It
>>> can result in two tasks that have the same ASID in the process list.
>>>
>>> For example, in CONFIG_CPU_MIPS32_R2, task named A's asid on CPU1 is
>>> 0x100, and has been sleeping and been not scheduled. After a long period
>>> of time, another running task named B's asid on CPU1 is 0xffffffff, and
>>> asid cached in the CPU1 is 0xffffffff too, next task named C is forked,
>>> when schedule from B to C on CPU1, asid_cache(cpu) will overflow, so C's
>>> asid on CPU1 will be 0x100 according to get_new_mmu_context(). A's asid
>>> is the same as C, if now A is rescheduled on CPU1, A's asid is not able
>>> to renew according to 'if' clause, and the local TLB entry can't be
>>> flushed too, A's address space will be the same as C.
>>>
>>> If asid_cache(cpu) overflows, all of user space task's asid on this CPU
>>> are able to set a invalid value (such as 0), it will avoid the risk.
>>>
>>> Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com>
>>> ---
>>>  arch/mips/include/asm/mmu_context.h | 9 ++++++++-
>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
>>> index ddd57ad..1f60efc 100644
>>> --- a/arch/mips/include/asm/mmu_context.h
>>> +++ b/arch/mips/include/asm/mmu_context.h
>>> @@ -108,8 +108,15 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>>>  #else
>>>          local_flush_tlb_all();    /* start new asid cycle */
>>>  #endif
>>> -        if (!asid)        /* fix version if needed */
>>> +        if (!asid) {        /* fix version if needed */
>>> +            struct task_struct *p;
>>> +
>>> +            for_each_process(p) {
>>> +                if ((p->mm))
>>
>>    Why double parens?
>
> At the beginning, the code was written as following
> 	if ((p->mm) && (p->mm != mm))
> 		cpu_context(cpu, p->mm) = 0;
>
> Because cpu_context(cpu,mm) will be changed to asid_first_version(cpu) after 'for' loop,
> and in order to improve the efficiency of the loop, I deleted "&& (p->mm != mm)",
> but I forgot to delete the redundant parentheses.

    Note that the parens around 'p->mm' were never needed, and neither were those
around the right operand of &&.

> Thanks,
> Best regards,
> Jiwei

MBR, Sergei


* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
  2017-03-06  8:34     ` Sergei Shtylyov
@ 2017-03-07 12:06       ` Jiwei Sun
  2017-03-07 12:06         ` Jiwei Sun
  0 siblings, 1 reply; 12+ messages in thread
From: Jiwei Sun @ 2017-03-07 12:06 UTC (permalink / raw)
  To: Sergei Shtylyov; +Cc: linux-mips, linux-kernel, jiwei.sun.bj



On 03/06/2017 04:34 PM, Sergei Shtylyov wrote:
> On 3/6/2017 10:21 AM, jsun4 wrote:
> 
>>>> If asid_cache(cpu) overflows, there may be two tasks with the same
>>>> asid. It is a risk that the two different tasks may have the same
>>>> address space.
>>>>
>>>> A process will update its asid to newer version only when switch_mm()
>>>> is called and matches the following condition:
>>>>     if ((cpu_context(cpu, next) ^ asid_cache(cpu))
>>>>                     & asid_version_mask(cpu))
>>>>             get_new_mmu_context(next, cpu);
>>>> If asid_cache(cpu) overflows, cpu_context(cpu,next) and asid_cache(cpu)
>>>> will be reset to asid_first_version(cpu), and start a new cycle. It
>>>> can result in two tasks that have the same ASID in the process list.
>>>>
>>>> For example, in CONFIG_CPU_MIPS32_R2, task named A's asid on CPU1 is
>>>> 0x100, and has been sleeping and been not scheduled. After a long period
>>>> of time, another running task named B's asid on CPU1 is 0xffffffff, and
>>>> asid cached in the CPU1 is 0xffffffff too, next task named C is forked,
>>>> when schedule from B to C on CPU1, asid_cache(cpu) will overflow, so C's
>>>> asid on CPU1 will be 0x100 according to get_new_mmu_context(). A's asid
>>>> is the same as C, if now A is rescheduled on CPU1, A's asid is not able
>>>> to renew according to 'if' clause, and the local TLB entry can't be
>>>> flushed too, A's address space will be the same as C.
>>>>
>>>> If asid_cache(cpu) overflows, all of user space task's asid on this CPU
>>>> are able to set a invalid value (such as 0), it will avoid the risk.
>>>>
>>>> Signed-off-by: Jiwei Sun <jiwei.sun@windriver.com>
>>>> ---
>>>>  arch/mips/include/asm/mmu_context.h | 9 ++++++++-
>>>>  1 file changed, 8 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
>>>> index ddd57ad..1f60efc 100644
>>>> --- a/arch/mips/include/asm/mmu_context.h
>>>> +++ b/arch/mips/include/asm/mmu_context.h
>>>> @@ -108,8 +108,15 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
>>>>  #else
>>>>          local_flush_tlb_all();    /* start new asid cycle */
>>>>  #endif
>>>> -        if (!asid)        /* fix version if needed */
>>>> +        if (!asid) {        /* fix version if needed */
>>>> +            struct task_struct *p;
>>>> +
>>>> +            for_each_process(p) {
>>>> +                if ((p->mm))
>>>
>>>    Why double parens?
>>
>> At the beginning, the code was written as following
>>     if ((p->mm) && (p->mm != mm))
>>         cpu_context(cpu, p->mm) = 0;
>>
>> Because cpu_context(cpu,mm) will be changed to asid_first_version(cpu) after 'for' loop,
>> and in order to improve the efficiency of the loop, I deleted "&& (p->mm != mm)",
>> but I forgot to delete the redundant parentheses.
> 
>    Note that parens around 'p->mm' were never needed. And neither around the right operand of &&.

You are right, I will pay attention to similar problems next time.
Thanks for your reminder.

Best regards,
Jiwei

> 
>> Thanks,
>> Best regards,
>> Jiwei
> 
> MBR, Sergei
> 


* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
  2017-03-06  2:44 yhb
  2017-03-06  2:44 ` yhb
@ 2017-03-06  8:00 ` jsun4
  2017-03-06  8:00   ` jsun4
  1 sibling, 1 reply; 12+ messages in thread
From: jsun4 @ 2017-03-06  8:00 UTC (permalink / raw)
  To: yhb; +Cc: linux-mips

Hello yhb,

Thanks for your reply and review.

On 03/06/2017 10:44 AM, yhb@ruijie.com.cn wrote:
> +		if (!asid) {		/* fix version if needed */
> +			struct task_struct *p;
> +
> +			for_each_process(p) {
> +				if ((p->mm))
> +					cpu_context(cpu, p->mm) = 0;
> +			}
>   It is not safe. When the processor is executing these codes, another processor is freeing task_struct, setting p->mm to NULL, and freeing mm_struct.

Yes, I overlooked this point. Here is alternative code that resolves the problem:

diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 2abf94f..b1c0911 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -105,8 +105,20 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
		if (cpu_has_vtag_icache)
			flush_icache_all();
		local_flush_tlb_all(); /* start new asid cycle */
-	 	if (!asid) /* fix version if needed */
+		if (!asid) { /* fix version if needed */
+	 		struct task_struct *p;
+
+		 	read_lock(&tasklist_lock);
+ 			for_each_process(p) {
+ 				task_lock(p);
+ 				if (p->mm)
+ 					cpu_context(cpu, p->mm) = 0;
+ 				task_unlock(p);
+ 			}
+	 		read_unlock(&tasklist_lock);
+
+			asid = asid_first_version(cpu);
+ 		}
	}

This is safe because, before another processor frees the mm_struct, it first takes
the p->alloc_lock lock, as in the exit path:
	/* more a memory barrier than a real lock */
	task_lock(current);
	current->mm = NULL;
	up_read(&mm->mmap_sem);
	enter_lazy_tlb(mm, current);
	task_unlock(current);

>   I committed a patch to solve this problem.Please see https://patchwork.linux-mips.org/patch/13789/.
> 
I saw the patch you linked.
Why add a list and so much additional code (other arches, mm/, kernel/) to resolve a
risk that is so difficult to hit? I don't think this is a good idea.
And in clear_other_mmu_contexts()
+	static inline void clear_other_mmu_contexts(struct mm_struct *mm,
+ 	unsigned long cpu)
+	{
+ 		unsigned long flags;
+ 		struct mm_struct *p;
+
+ 		spin_lock_irqsave(&mmlink_lock, flags);
+ 		list_for_each_entry(p, &mmlist, mmlink) {
+ 			if ((p != mm) && cpu_context(cpu, p))

The "(p != mm)" check is not essential, because cpu_context(cpu, mm) will be changed to
asid_first_version(cpu) in get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
anyway, so the check is only an inefficiency. And I think the "cpu_context(cpu, p)"
check is not essential either.

+ 				cpu_context(cpu, p) = 0;
+ 		}
+ 		spin_unlock_irqrestore(&mmlink_lock, flags);
+	}
+

Thanks,
Best regards,
Jiwei


* Re: [PATCH] MIPS: reset all task's asid to 0 after asid_cache(cpu) overflows
@ 2017-03-06  2:44 yhb
  2017-03-06  2:44 ` yhb
  2017-03-06  8:00 ` jsun4
  0 siblings, 2 replies; 12+ messages in thread
From: yhb @ 2017-03-06  2:44 UTC (permalink / raw)
  To: jiwei.sun; +Cc: linux-mips

+		if (!asid) {		/* fix version if needed */
+			struct task_struct *p;
+
+			for_each_process(p) {
+				if ((p->mm))
+					cpu_context(cpu, p->mm) = 0;
+			}
  It is not safe. While one processor is executing this code, another processor may be freeing a task_struct, setting p->mm to NULL, and freeing the mm_struct.
  I committed a patch to solve this problem. Please see https://patchwork.linux-mips.org/patch/13789/.


