* [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus
2017-02-14 16:45 [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
@ 2017-02-14 16:45 ` Laurent Dufour
2017-03-21 8:39 ` Aneesh Kumar K.V
2017-03-21 11:36 ` [1/3] " Michael Ellerman
2017-02-14 16:45 ` [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier Laurent Dufour
` (2 subsequent siblings)
3 siblings, 2 replies; 11+ messages in thread
From: Laurent Dufour @ 2017-02-14 16:45 UTC (permalink / raw)
To: mpe, benh, paulus, aneesh.kumar, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Move the mmap_sem release into do_sigbus()'s unique caller: mm_fault_error().
No functional changes.
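As a rough user-space sketch of what this patch does (an int counter stands in for the real rw_semaphore, and all functions here are hypothetical stubs that mirror the kernel call chain in name only), the unlock simply moves one level up:

```c
#include <assert.h>

static int readers;                      /* stand-in for the mmap_sem read-hold count */
static void down_read(void) { readers++; }
static void up_read(void)   { readers--; }

#define MM_FAULT_SIGBUS 2

/* After the patch, do_sigbus() no longer drops the lock itself. */
static int do_sigbus(void)
{
    return MM_FAULT_SIGBUS;
}

/* Its unique caller releases mmap_sem just before delegating. */
static int mm_fault_error(void)
{
    up_read();                           /* unlock moved up from do_sigbus() */
    return do_sigbus();
}

static int fault_path(void)
{
    down_read();
    return mm_fault_error();             /* returns with the lock balanced */
}
```

The lock/unlock pairing is now visible within a single function, which is what makes the later patches in the series possible.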
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 62a50d6d1053..ee09604bbe12 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -119,8 +119,6 @@ static int do_sigbus(struct pt_regs *regs, unsigned long address,
siginfo_t info;
unsigned int lsb = 0;
- up_read(&current->mm->mmap_sem);
-
if (!user_mode(regs))
return MM_FAULT_ERR(SIGBUS);
@@ -184,8 +182,10 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
return MM_FAULT_RETURN;
}
- if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
+ if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
+ up_read(&current->mm->mmap_sem);
return do_sigbus(regs, addr, fault);
+ }
/* We don't understand the fault code, this is fatal */
BUG();
--
2.7.4
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus
2017-02-14 16:45 ` [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus Laurent Dufour
@ 2017-03-21 8:39 ` Aneesh Kumar K.V
2017-03-21 11:36 ` [1/3] " Michael Ellerman
1 sibling, 0 replies; 11+ messages in thread
From: Aneesh Kumar K.V @ 2017-03-21 8:39 UTC (permalink / raw)
To: Laurent Dufour, mpe, benh, paulus, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
> Move the mmap_sem release into do_sigbus()'s unique caller: mm_fault_error().
>
> No functional changes.
>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> arch/powerpc/mm/fault.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 62a50d6d1053..ee09604bbe12 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -119,8 +119,6 @@ static int do_sigbus(struct pt_regs *regs, unsigned long address,
> siginfo_t info;
> unsigned int lsb = 0;
>
> - up_read(&current->mm->mmap_sem);
> -
> if (!user_mode(regs))
> return MM_FAULT_ERR(SIGBUS);
>
> @@ -184,8 +182,10 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
> return MM_FAULT_RETURN;
> }
>
> - if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
> + if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
> + up_read(&current->mm->mmap_sem);
> return do_sigbus(regs, addr, fault);
> + }
>
> /* We don't understand the fault code, this is fatal */
> BUG();
> --
> 2.7.4
* Re: [1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus
2017-02-14 16:45 ` [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus Laurent Dufour
2017-03-21 8:39 ` Aneesh Kumar K.V
@ 2017-03-21 11:36 ` Michael Ellerman
1 sibling, 0 replies; 11+ messages in thread
From: Michael Ellerman @ 2017-03-21 11:36 UTC (permalink / raw)
To: Laurent Dufour, benh, paulus, aneesh.kumar, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
On Tue, 2017-02-14 at 16:45:10 UTC, Laurent Dufour wrote:
> Move the mmap_sem release into do_sigbus()'s unique caller: mm_fault_error().
>
> No functional changes.
>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Series applied to powerpc next, thanks.
https://git.kernel.org/powerpc/c/c2294e0ffe741c8b34c630a71c7dc4
cheers
* [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier
2017-02-14 16:45 [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
2017-02-14 16:45 ` [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus Laurent Dufour
@ 2017-02-14 16:45 ` Laurent Dufour
2017-03-21 9:12 ` Aneesh Kumar K.V
2017-02-14 16:45 ` [PATCH 3/3] powerpc/mm: move mmap_sem unlocking in do_page_fault() Laurent Dufour
2017-03-02 12:30 ` [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
3 siblings, 1 reply; 11+ messages in thread
From: Laurent Dufour @ 2017-02-14 16:45 UTC (permalink / raw)
To: mpe, benh, paulus, aneesh.kumar, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
In do_page_fault(), if handle_mm_fault() returns VM_FAULT_RETRY, retry
the page fault handling before anything else.
This simplifies the handling of the mmap_sem lock in this part of
the code.
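The retry logic this patch hoists can be sketched as a minimal user-space model (the flag values and the handle_mm_fault() stub below are hypothetical; the stub always asks for a retry, as a thrashing fault would, to show the starvation guard):

```c
#define VM_FAULT_RETRY          0x0004
#define FAULT_FLAG_ALLOW_RETRY  0x0001
#define FAULT_FLAG_TRIED        0x0002

static int attempts;

/* Hypothetical stub: always reports a retry. In the real kernel,
 * handle_mm_fault() has already dropped mmap_sem in this case. */
static int handle_mm_fault_stub(unsigned int flags)
{
    (void)flags;
    attempts++;
    return VM_FAULT_RETRY;
}

/* Core of the patch: act on VM_FAULT_RETRY right after the fault call,
 * clearing FAULT_FLAG_ALLOW_RETRY so at most one retry ever happens. */
static int do_page_fault_model(void)
{
    unsigned int flags = FAULT_FLAG_ALLOW_RETRY;
    int fault;

retry:
    fault = handle_mm_fault_stub(flags);
    if (fault & VM_FAULT_RETRY) {
        if (flags & FAULT_FLAG_ALLOW_RETRY) {
            flags &= ~FAULT_FLAG_ALLOW_RETRY;   /* avoid starvation */
            flags |= FAULT_FLAG_TRIED;
            goto retry;                         /* we retry only once */
        }
        /* a second RETRY falls through to the error path */
    }
    return fault;
}
```

Because the retry is resolved up front, everything after this point runs knowing the lock state is settled, which is what lets the accounting code below drop its FAULT_FLAG_ALLOW_RETRY guard.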
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 67 ++++++++++++++++++++++++++++---------------------
1 file changed, 38 insertions(+), 29 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index ee09604bbe12..2a6bc7e6e69a 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -434,6 +434,26 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
* the fault.
*/
fault = handle_mm_fault(vma, address, flags);
+
+ /*
+ * Handle the retry right now, the mmap_sem has been released in that
+ * case.
+ */
+ if (unlikely(fault & VM_FAULT_RETRY)) {
+ /* We retry only once */
+ if (flags & FAULT_FLAG_ALLOW_RETRY) {
+ /*
+ * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
+ * of starvation.
+ */
+ flags &= ~FAULT_FLAG_ALLOW_RETRY;
+ flags |= FAULT_FLAG_TRIED;
+ if (!fatal_signal_pending(current))
+ goto retry;
+ }
+ /* We will enter mm_fault_error() below */
+ }
+
if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
if (fault & VM_FAULT_SIGSEGV)
goto bad_area;
@@ -445,38 +465,27 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
}
/*
- * Major/minor page fault accounting is only done on the
- * initial attempt. If we go through a retry, it is extremely
- * likely that the page will be found in page cache at that point.
+ * Major/minor page fault accounting.
*/
- if (flags & FAULT_FLAG_ALLOW_RETRY) {
- if (fault & VM_FAULT_MAJOR) {
- current->maj_flt++;
- perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
- regs, address);
+ if (fault & VM_FAULT_MAJOR) {
+ current->maj_flt++;
+ perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
+ regs, address);
#ifdef CONFIG_PPC_SMLPAR
- if (firmware_has_feature(FW_FEATURE_CMO)) {
- u32 page_ins;
-
- preempt_disable();
- page_ins = be32_to_cpu(get_lppaca()->page_ins);
- page_ins += 1 << PAGE_FACTOR;
- get_lppaca()->page_ins = cpu_to_be32(page_ins);
- preempt_enable();
- }
-#endif /* CONFIG_PPC_SMLPAR */
- } else {
- current->min_flt++;
- perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
- regs, address);
- }
- if (fault & VM_FAULT_RETRY) {
- /* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
- * of starvation. */
- flags &= ~FAULT_FLAG_ALLOW_RETRY;
- flags |= FAULT_FLAG_TRIED;
- goto retry;
+ if (firmware_has_feature(FW_FEATURE_CMO)) {
+ u32 page_ins;
+
+ preempt_disable();
+ page_ins = be32_to_cpu(get_lppaca()->page_ins);
+ page_ins += 1 << PAGE_FACTOR;
+ get_lppaca()->page_ins = cpu_to_be32(page_ins);
+ preempt_enable();
}
+#endif /* CONFIG_PPC_SMLPAR */
+ } else {
+ current->min_flt++;
+ perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
+ regs, address);
}
up_read(&mm->mmap_sem);
--
2.7.4
* Re: [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier
2017-02-14 16:45 ` [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier Laurent Dufour
@ 2017-03-21 9:12 ` Aneesh Kumar K.V
2017-03-21 9:57 ` Laurent Dufour
0 siblings, 1 reply; 11+ messages in thread
From: Aneesh Kumar K.V @ 2017-03-21 9:12 UTC (permalink / raw)
To: Laurent Dufour, mpe, benh, paulus, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
> In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
> the page fault handling before anything else.
>
> This would simplify the handling of the mmap_sem lock in this part of
> the code.
>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> arch/powerpc/mm/fault.c | 67 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 38 insertions(+), 29 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index ee09604bbe12..2a6bc7e6e69a 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -434,6 +434,26 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> * the fault.
> */
> fault = handle_mm_fault(vma, address, flags);
> +
> + /*
> + * Handle the retry right now, the mmap_sem has been released in that
> + * case.
> + */
> + if (unlikely(fault & VM_FAULT_RETRY)) {
> + /* We retry only once */
> + if (flags & FAULT_FLAG_ALLOW_RETRY) {
> + /*
> + * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> + * of starvation.
> + */
> + flags &= ~FAULT_FLAG_ALLOW_RETRY;
> + flags |= FAULT_FLAG_TRIED;
> + if (!fatal_signal_pending(current))
> + goto retry;
> + }
> + /* We will enter mm_fault_error() below */
> + }
> +
> if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
> if (fault & VM_FAULT_SIGSEGV)
> goto bad_area;
> @@ -445,38 +465,27 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> }
We could make it simpler still by also handling a VM_FAULT_RETRY
without FAULT_FLAG_ALLOW_RETRY set earlier. But I guess that can be
done later?
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>
> /*
> - * Major/minor page fault accounting is only done on the
> - * initial attempt. If we go through a retry, it is extremely
> - * likely that the page will be found in page cache at that point.
> + * Major/minor page fault accounting.
> */
> - if (flags & FAULT_FLAG_ALLOW_RETRY) {
> - if (fault & VM_FAULT_MAJOR) {
> - current->maj_flt++;
> - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> - regs, address);
> + if (fault & VM_FAULT_MAJOR) {
> + current->maj_flt++;
> + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
> + regs, address);
> #ifdef CONFIG_PPC_SMLPAR
> - if (firmware_has_feature(FW_FEATURE_CMO)) {
> - u32 page_ins;
> -
> - preempt_disable();
> - page_ins = be32_to_cpu(get_lppaca()->page_ins);
> - page_ins += 1 << PAGE_FACTOR;
> - get_lppaca()->page_ins = cpu_to_be32(page_ins);
> - preempt_enable();
> - }
> -#endif /* CONFIG_PPC_SMLPAR */
> - } else {
> - current->min_flt++;
> - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> - regs, address);
> - }
> - if (fault & VM_FAULT_RETRY) {
> - /* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
> - * of starvation. */
> - flags &= ~FAULT_FLAG_ALLOW_RETRY;
> - flags |= FAULT_FLAG_TRIED;
> - goto retry;
> + if (firmware_has_feature(FW_FEATURE_CMO)) {
> + u32 page_ins;
> +
> + preempt_disable();
> + page_ins = be32_to_cpu(get_lppaca()->page_ins);
> + page_ins += 1 << PAGE_FACTOR;
> + get_lppaca()->page_ins = cpu_to_be32(page_ins);
> + preempt_enable();
> }
> +#endif /* CONFIG_PPC_SMLPAR */
> + } else {
> + current->min_flt++;
> + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
> + regs, address);
> }
>
> up_read(&mm->mmap_sem);
> --
> 2.7.4
* Re: [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier
2017-03-21 9:12 ` Aneesh Kumar K.V
@ 2017-03-21 9:57 ` Laurent Dufour
0 siblings, 0 replies; 11+ messages in thread
From: Laurent Dufour @ 2017-03-21 9:57 UTC (permalink / raw)
To: Aneesh Kumar K.V, mpe, benh, paulus, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
On 21/03/2017 10:12, Aneesh Kumar K.V wrote:
> Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
>
>> In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
>> the page fault handling before anything else.
>>
>> This would simplify the handling of the mmap_sem lock in this part of
>> the code.
>>
>> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
>> ---
>> arch/powerpc/mm/fault.c | 67 ++++++++++++++++++++++++++++---------------------
>> 1 file changed, 38 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
>> index ee09604bbe12..2a6bc7e6e69a 100644
>> --- a/arch/powerpc/mm/fault.c
>> +++ b/arch/powerpc/mm/fault.c
>> @@ -434,6 +434,26 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>> * the fault.
>> */
>> fault = handle_mm_fault(vma, address, flags);
>> +
>> + /*
>> + * Handle the retry right now, the mmap_sem has been released in that
>> + * case.
>> + */
>> + if (unlikely(fault & VM_FAULT_RETRY)) {
>> + /* We retry only once */
>> + if (flags & FAULT_FLAG_ALLOW_RETRY) {
>> + /*
>> + * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
>> + * of starvation.
>> + */
>> + flags &= ~FAULT_FLAG_ALLOW_RETRY;
>> + flags |= FAULT_FLAG_TRIED;
>> + if (!fatal_signal_pending(current))
>> + goto retry;
>> + }
>> + /* We will enter mm_fault_error() below */
>> + }
>> +
>> if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
>> if (fault & VM_FAULT_SIGSEGV)
>> goto bad_area;
>> @@ -445,38 +465,27 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
>> }
>
> We could make it simpler still by also handling a VM_FAULT_RETRY
> without FAULT_FLAG_ALLOW_RETRY set earlier. But I guess that can be
> done later?
Thanks for the review,
I agree that double checking against VM_FAULT_RETRY is confusing here.
But handling the whole retry path in the first if() statement means
that we would have to handle part of the mm_fault_error() code and the
SIGSEGV case there too... unless we can identify what is really
relevant in that retry path.
It would take time to review all of that tricky part, but I agree it
should be simplified later.
>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>
>
>>
>> /*
>> - * Major/minor page fault accounting is only done on the
>> - * initial attempt. If we go through a retry, it is extremely
>> - * likely that the page will be found in page cache at that point.
>> + * Major/minor page fault accounting.
>> */
>> - if (flags & FAULT_FLAG_ALLOW_RETRY) {
>> - if (fault & VM_FAULT_MAJOR) {
>> - current->maj_flt++;
>> - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
>> - regs, address);
>> + if (fault & VM_FAULT_MAJOR) {
>> + current->maj_flt++;
>> + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1,
>> + regs, address);
>> #ifdef CONFIG_PPC_SMLPAR
>> - if (firmware_has_feature(FW_FEATURE_CMO)) {
>> - u32 page_ins;
>> -
>> - preempt_disable();
>> - page_ins = be32_to_cpu(get_lppaca()->page_ins);
>> - page_ins += 1 << PAGE_FACTOR;
>> - get_lppaca()->page_ins = cpu_to_be32(page_ins);
>> - preempt_enable();
>> - }
>> -#endif /* CONFIG_PPC_SMLPAR */
>> - } else {
>> - current->min_flt++;
>> - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
>> - regs, address);
>> - }
>> - if (fault & VM_FAULT_RETRY) {
>> - /* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
>> - * of starvation. */
>> - flags &= ~FAULT_FLAG_ALLOW_RETRY;
>> - flags |= FAULT_FLAG_TRIED;
>> - goto retry;
>> + if (firmware_has_feature(FW_FEATURE_CMO)) {
>> + u32 page_ins;
>> +
>> + preempt_disable();
>> + page_ins = be32_to_cpu(get_lppaca()->page_ins);
>> + page_ins += 1 << PAGE_FACTOR;
>> + get_lppaca()->page_ins = cpu_to_be32(page_ins);
>> + preempt_enable();
>> }
>> +#endif /* CONFIG_PPC_SMLPAR */
>> + } else {
>> + current->min_flt++;
>> + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1,
>> + regs, address);
>> }
>>
>> up_read(&mm->mmap_sem);
>> --
>> 2.7.4
* [PATCH 3/3] powerpc/mm: move mmap_sem unlocking in do_page_fault()
2017-02-14 16:45 [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
2017-02-14 16:45 ` [PATCH 1/3] powerpc/mm: move mmap_sem unlock up from do_sigbus Laurent Dufour
2017-02-14 16:45 ` [PATCH 2/3] powerpc/mm: handle VM_FAULT_RETRY earlier Laurent Dufour
@ 2017-02-14 16:45 ` Laurent Dufour
2017-03-21 9:12 ` Aneesh Kumar K.V
2017-03-02 12:30 ` [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
3 siblings, 1 reply; 11+ messages in thread
From: Laurent Dufour @ 2017-02-14 16:45 UTC (permalink / raw)
To: mpe, benh, paulus, aneesh.kumar, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Since the fault retry is now handled earlier, we can release the
mmap_sem lock earlier too, and remove the unlocking that was previously
done later in mm_fault_error().
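The resulting locking rule can be sketched as a user-space model (a plain counter stands in for mmap_sem, and handle_mm_fault_stub() is a hypothetical stand-in honoring __lock_page_or_retry()'s contract that the lock is already dropped whenever VM_FAULT_RETRY is returned):

```c
#define VM_FAULT_RETRY  0x0004
#define VM_FAULT_SIGBUS 0x0002

static int readers;                      /* mmap_sem read-hold count */
static void down_read(void) { readers++; }
static void up_read(void)   { readers--; }

/* Hypothetical stub: the mm layer releases mmap_sem itself whenever it
 * returns VM_FAULT_RETRY, mirroring __lock_page_or_retry(). */
static int handle_mm_fault_stub(int injected_fault)
{
    if (injected_fault & VM_FAULT_RETRY)
        up_read();
    return injected_fault;
}

/* After this patch, do_page_fault() owns the only unlock site; the
 * mm_fault_error()/do_sigbus() error path never touches the lock. */
static int do_page_fault_model(int injected_fault)
{
    int fault;

    down_read();
    fault = handle_mm_fault_stub(injected_fault);
    if (fault & VM_FAULT_RETRY) {
        /* lock already dropped by the mm layer */
    } else {
        up_read();                       /* the single unlock point */
    }
    return fault;                        /* error path runs unlocked */
}
```

Either way the lock is balanced before any error handling starts, which is why the up_read() calls scattered through mm_fault_error() can go away.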
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 2a6bc7e6e69a..21e06cce8984 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -151,13 +151,6 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
* continue the pagefault.
*/
if (fatal_signal_pending(current)) {
- /*
- * If we have retry set, the mmap semaphore will have
- * alrady been released in __lock_page_or_retry(). Else
- * we release it now.
- */
- if (!(fault & VM_FAULT_RETRY))
- up_read(&current->mm->mmap_sem);
/* Coming from kernel, we need to deal with uaccess fixups */
if (user_mode(regs))
return MM_FAULT_RETURN;
@@ -170,8 +163,6 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
/* Out of memory */
if (fault & VM_FAULT_OOM) {
- up_read(&current->mm->mmap_sem);
-
/*
* We ran out of memory, or some other thing happened to us that
* made us unable to handle the page fault gracefully.
@@ -182,10 +173,8 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
return MM_FAULT_RETURN;
}
- if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
- up_read(&current->mm->mmap_sem);
+ if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
return do_sigbus(regs, addr, fault);
- }
/* We don't understand the fault code, this is fatal */
BUG();
@@ -452,11 +441,12 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
goto retry;
}
/* We will enter mm_fault_error() below */
- }
+ } else
+ up_read(&current->mm->mmap_sem);
if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
if (fault & VM_FAULT_SIGSEGV)
- goto bad_area;
+ goto bad_area_nosemaphore;
rc = mm_fault_error(regs, address, fault);
if (rc >= MM_FAULT_RETURN)
goto bail;
@@ -488,7 +478,6 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
regs, address);
}
- up_read(&mm->mmap_sem);
goto bail;
bad_area:
--
2.7.4
* Re: [PATCH 3/3] powerpc/mm: move mmap_sem unlocking in do_page_fault()
2017-02-14 16:45 ` [PATCH 3/3] powerpc/mm: move mmap_sem unlocking in do_page_fault() Laurent Dufour
@ 2017-03-21 9:12 ` Aneesh Kumar K.V
0 siblings, 0 replies; 11+ messages in thread
From: Aneesh Kumar K.V @ 2017-03-21 9:12 UTC (permalink / raw)
To: Laurent Dufour, mpe, benh, paulus, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Laurent Dufour <ldufour@linux.vnet.ibm.com> writes:
> Since the fault retry is now handled earlier, we can release the
> mmap_sem lock earlier too, and remove the unlocking that was previously
> done later in mm_fault_error().
>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
> ---
> arch/powerpc/mm/fault.c | 19 ++++---------------
> 1 file changed, 4 insertions(+), 15 deletions(-)
>
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 2a6bc7e6e69a..21e06cce8984 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -151,13 +151,6 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
> * continue the pagefault.
> */
> if (fatal_signal_pending(current)) {
> - /*
> - * If we have retry set, the mmap semaphore will have
> - * alrady been released in __lock_page_or_retry(). Else
> - * we release it now.
> - */
> - if (!(fault & VM_FAULT_RETRY))
> - up_read(&current->mm->mmap_sem);
> /* Coming from kernel, we need to deal with uaccess fixups */
> if (user_mode(regs))
> return MM_FAULT_RETURN;
> @@ -170,8 +163,6 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
>
> /* Out of memory */
> if (fault & VM_FAULT_OOM) {
> - up_read(&current->mm->mmap_sem);
> -
> /*
> * We ran out of memory, or some other thing happened to us that
> * made us unable to handle the page fault gracefully.
> @@ -182,10 +173,8 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr, int fault)
> return MM_FAULT_RETURN;
> }
>
> - if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) {
> - up_read(&current->mm->mmap_sem);
> + if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE))
> return do_sigbus(regs, addr, fault);
> - }
>
> /* We don't understand the fault code, this is fatal */
> BUG();
> @@ -452,11 +441,12 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> goto retry;
> }
> /* We will enter mm_fault_error() below */
> - }
> + } else
> + up_read(&current->mm->mmap_sem);
>
> if (unlikely(fault & (VM_FAULT_RETRY|VM_FAULT_ERROR))) {
> if (fault & VM_FAULT_SIGSEGV)
> - goto bad_area;
> + goto bad_area_nosemaphore;
> rc = mm_fault_error(regs, address, fault);
> if (rc >= MM_FAULT_RETURN)
> goto bail;
> @@ -488,7 +478,6 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
> regs, address);
> }
>
> - up_read(&mm->mmap_sem);
> goto bail;
>
> bad_area:
> --
> 2.7.4
* Re: [PATCH 0/3] powerpc/mm: page fault handler cleaning
2017-02-14 16:45 [PATCH 0/3] powerpc/mm: page fault handler cleaning Laurent Dufour
` (2 preceding siblings ...)
2017-02-14 16:45 ` [PATCH 3/3] powerpc/mm: move mmap_sem unlocking in do_page_fault() Laurent Dufour
@ 2017-03-02 12:30 ` Laurent Dufour
2017-03-03 12:17 ` Michael Ellerman
3 siblings, 1 reply; 11+ messages in thread
From: Laurent Dufour @ 2017-03-02 12:30 UTC (permalink / raw)
To: mpe, benh, paulus, aneesh.kumar, bsingharora, npiggin
Cc: linuxppc-dev, linux-kernel
Kindly ping...
On 14/02/2017 17:45, Laurent Dufour wrote:
> This series attempts to clean up the page fault handler in the way it has
> been done previously for the x86 architecture [1].
>
> The goal is to manage the mmap_sem earlier and only in
> do_page_fault(). This is done by handling the retry case earlier, before
> handling the error case. This way the semaphore can be released
> earlier and the error path processed without holding it.
>
> The first patch just moves an unlock to the caller of the service,
> which has no functional impact.
>
> The second patch is handling the retry case earlier in
> do_page_fault(). This is where most of the changes are done, but I was
> conservative here, not changing the use of mm_fault_error() in the
> case of the second retry. It may be smarter to handle that case
> separately but this may create duplicate code.
>
> The last patch moves the semaphore release up from mm_fault_error()
> to do_page_fault().
>
> [1] see commits from Linus Torvalds
> 26178ec11ef3 ("x86: mm: consolidate VM_FAULT_RETRY handling")
> 7fb08eca4527 ("x86: mm: move mmap_sem unlock from mm_fault_error() to
> caller")
>
> Laurent Dufour (3):
> powerpc/mm: move mmap_sem unlock up from do_sigbus
> powerpc/mm: handle VM_FAULT_RETRY earlier
> powerpc/mm: move mmap_sem unlocking in do_page_fault()
>
> arch/powerpc/mm/fault.c | 82 ++++++++++++++++++++++++-------------------------
> 1 file changed, 40 insertions(+), 42 deletions(-)
>