* [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry @ 2020-05-25 2:52 Bibo Mao 2020-05-25 2:52 ` [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists Bibo Mao ` (4 more replies) 0 siblings, 5 replies; 13+ messages in thread From: Bibo Mao @ 2020-05-25 2:52 UTC (permalink / raw) To: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand It is not necessary to flush tlb page on all CPUs if suitable PTE entry exists already during page fault handling, just updating TLB is fine. Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. V6: - Add update_mmu_tlb function as empty on all platform except mips system, we use this function to update local tlb for page fault smp-race handling V5: - define update_mmu_cache function specified on MIPS platform, and add page fault smp-race stats info V4: - add pte_sw_mkyoung function to implement readable privilege, and this function is only in effect on MIPS system. 
- add page valid bit judgement in function pte_modify V3: - add detailed changelog, modify typo issue in patch V2 v2: - split flush_tlb_fix_spurious_fault and tlb update into two patches - comments typo modification - separate tlb update and add pte readable privilege into two patches Signed-off-by: Bibo Mao <maobibo@loongson.cn> --- arch/mips/include/asm/pgtable.h | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 9b01d2d..0d625c2 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -478,6 +478,8 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot) return __pgprot(prot); } +#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0) + /* * Conversion functions: convert a page and protection to a page entry, * and a page entry and page directory to the page they refer to. -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists 2020-05-25 2:52 [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Bibo Mao @ 2020-05-25 2:52 ` Bibo Mao 2020-05-25 21:43 ` Andrew Morton 2020-05-25 2:52 ` [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling Bibo Mao ` (3 subsequent siblings) 4 siblings, 1 reply; 13+ messages in thread From: Bibo Mao @ 2020-05-25 2:52 UTC (permalink / raw) To: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand If two threads concurrently fault at the same page, the thread that won the race updates the PTE and its local TLB. For now, the other thread gives up, simply does nothing, and continues. It could happen that this second thread triggers another fault, whereby it only updates its local TLB while handling the fault. Instead of triggering another fault, let's directly update the local TLB of the second thread. Function update_mmu_tlb is used here to update local TLB on the second thread, and it is defined as empty on other arches. 
Signed-off-by: Bibo Mao <maobibo@loongson.cn> --- arch/mips/include/asm/pgtable.h | 23 +++++++++++++++++++++++ include/asm-generic/pgtable.h | 17 +++++++++++++++++ mm/memory.c | 27 +++++++++++++++++++-------- 3 files changed, 59 insertions(+), 8 deletions(-) diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 0d625c2..d2004b5 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -480,6 +480,26 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot) #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0) +#define __HAVE_ARCH_PTE_SAME +static inline int pte_same(pte_t pte_a, pte_t pte_b) +{ + return pte_val(pte_a) == pte_val(pte_b); +} + +#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS +static inline int ptep_set_access_flags(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep, + pte_t entry, int dirty) +{ + if (!pte_same(*ptep, entry)) + set_pte_at(vma->vm_mm, address, ptep, entry); + /* + * update_mmu_cache will unconditionally execute, handling both + * the case that the PTE changed and the spurious fault case. + */ + return true; +} + /* * Conversion functions: convert a page and protection to a page entry, * and a page entry and page directory to the page they refer to. @@ -523,6 +543,9 @@ static inline void update_mmu_cache(struct vm_area_struct *vma, __update_tlb(vma, address, pte); } +#define __HAVE_ARCH_UPDATE_MMU_TLB +#define update_mmu_tlb update_mmu_cache + static inline void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index 329b8c8..fa5c73f 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -188,6 +188,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm, } #endif + +/* + * If two threads concurrently fault at the same page, the thread that + * won the race updates the PTE and its local TLB/Cache. 
The other thread + * gives up, simply does nothing, and continues; on architectures where + * software can update TLB, local TLB can be updated here to avoid next page + * fault. This function updates TLB only, do nothing with cache or others. + * It is the difference with function update_mmu_cache. + */ +#ifndef __HAVE_ARCH_UPDATE_MMU_TLB +static inline void update_mmu_tlb(struct vm_area_struct *vma, + unsigned long address, pte_t *ptep) +{ +} +#define __HAVE_ARCH_UPDATE_MMU_TLB +#endif + /* * Some architectures may be able to avoid expensive synchronization * primitives when modifications are made to PTE's which are already diff --git a/mm/memory.c b/mm/memory.c index f703fe8..8bb31c4 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2436,10 +2436,9 @@ static inline bool cow_user_page(struct page *dst, struct page *src, if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) { /* * Other thread has already handled the fault - * and we don't need to do anything. If it's - * not the case, the fault will be triggered - * again on the same address. + * and update local tlb only */ + update_mmu_tlb(vma, addr, vmf->pte); ret = false; goto pte_unlock; } @@ -2463,7 +2462,8 @@ static inline bool cow_user_page(struct page *dst, struct page *src, vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl); locked = true; if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) { - /* The PTE changed under us. Retry page fault. */ + /* The PTE changed under us, update local tlb */ + update_mmu_tlb(vma, addr, vmf->pte); ret = false; goto pte_unlock; } @@ -2752,6 +2752,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) new_page = old_page; page_copied = 1; } else { + update_mmu_tlb(vma, vmf->address, vmf->pte); mem_cgroup_cancel_charge(new_page, memcg, false); } @@ -2812,6 +2813,7 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf) * pte_offset_map_lock. 
*/ if (!pte_same(*vmf->pte, vmf->orig_pte)) { + update_mmu_tlb(vmf->vma, vmf->address, vmf->pte); pte_unmap_unlock(vmf->pte, vmf->ptl); return VM_FAULT_NOPAGE; } @@ -2936,6 +2938,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); if (!pte_same(*vmf->pte, vmf->orig_pte)) { + update_mmu_tlb(vma, vmf->address, vmf->pte); unlock_page(vmf->page); pte_unmap_unlock(vmf->pte, vmf->ptl); put_page(vmf->page); @@ -3341,8 +3344,10 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) vma->vm_page_prot)); vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); - if (!pte_none(*vmf->pte)) + if (!pte_none(*vmf->pte)) { + update_mmu_tlb(vma, vmf->address, vmf->pte); goto unlock; + } ret = check_stable_address_space(vma->vm_mm); if (ret) goto unlock; @@ -3378,8 +3383,10 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); - if (!pte_none(*vmf->pte)) + if (!pte_none(*vmf->pte)) { + update_mmu_tlb(vma, vmf->address, vmf->pte); goto release; + } ret = check_stable_address_space(vma->vm_mm); if (ret) @@ -3646,8 +3653,10 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, } /* Re-check under ptl */ - if (unlikely(!pte_none(*vmf->pte))) + if (unlikely(!pte_none(*vmf->pte))) { + update_mmu_tlb(vma, vmf->address, vmf->pte); return VM_FAULT_NOPAGE; + } flush_icache_page(vma, page); entry = mk_pte(page, vma->vm_page_prot); @@ -4224,8 +4233,10 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf) vmf->ptl = pte_lockptr(vmf->vma->vm_mm, vmf->pmd); spin_lock(vmf->ptl); entry = vmf->orig_pte; - if (unlikely(!pte_same(*vmf->pte, entry))) + if (unlikely(!pte_same(*vmf->pte, entry))) { + update_mmu_tlb(vmf->vma, vmf->address, vmf->pte); goto unlock; + } if (vmf->flags & FAULT_FLAG_WRITE) { if (!pte_write(entry)) return do_wp_page(vmf); -- 1.8.3.1 ^ permalink raw reply
related [flat|nested] 13+ messages in thread
* Re: [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists 2020-05-25 2:52 ` [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists Bibo Mao @ 2020-05-25 21:43 ` Andrew Morton 0 siblings, 0 replies; 13+ messages in thread From: Andrew Morton @ 2020-05-25 21:43 UTC (permalink / raw) To: Bibo Mao Cc: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On Mon, 25 May 2020 10:52:38 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > If two threads concurrently fault at the same page, the thread that > won the race updates the PTE and its local TLB. For now, the other > thread gives up, simply does nothing, and continues. > > It could happen that this second thread triggers another fault, whereby > it only updates its local TLB while handling the fault. Instead of > triggering another fault, let's directly update the local TLB of the > second thread. Function update_mmu_tlb is used here to update local > TLB on the second thread, and it is defined as empty on other arches. Acked-by: Andrew Morton <akpm@linux-foundation.org> Thanks for persisting with these. ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling 2020-05-25 2:52 [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Bibo Mao 2020-05-25 2:52 ` [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists Bibo Mao @ 2020-05-25 2:52 ` Bibo Mao 2020-05-25 21:44 ` Andrew Morton 2020-05-25 2:52 ` [PATCH v6 4/4] MIPS: mm: add page valid judgement in function pte_modify Bibo Mao ` (2 subsequent siblings) 4 siblings, 1 reply; 13+ messages in thread From: Bibo Mao @ 2020-05-25 2:52 UTC (permalink / raw) To: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand Here add pte_sw_mkyoung function to make page readable on MIPS platform during page fault handling. This patch improves page fault latency about 10% on my MIPS machine with lmbench lat_pagefault case. It is noop function on other arches, there is no negative influence on those architectures. 
Signed-off-by: Bibo Mao <maobibo@loongson.cn> --- arch/mips/include/asm/pgtable.h | 2 ++ include/asm-generic/pgtable.h | 16 ++++++++++++++++ mm/memory.c | 3 +++ 3 files changed, 21 insertions(+) diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index d2004b5..0743087 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -414,6 +414,8 @@ static inline pte_t pte_mkyoung(pte_t pte) return pte; } +#define pte_sw_mkyoung pte_mkyoung + #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT static inline int pte_huge(pte_t pte) { return pte_val(pte) & _PAGE_HUGE; } diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index fa5c73f..b5278ec 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -244,6 +244,22 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres } #endif +/* + * On some architectures the hardware does not set the page access bit when + * a page is accessed; it is the responsibility of software to set it, which + * costs an extra page fault to track. As an optimization, the access bit + * can be set during the whole page fault flow on these arches. + * To differentiate it from the macro pte_mkyoung, this macro is used on + * platforms where software maintains the page access bit.
+ */ +#ifndef pte_sw_mkyoung +static inline pte_t pte_sw_mkyoung(pte_t pte) +{ + return pte; +} +#define pte_sw_mkyoung pte_sw_mkyoung +#endif + #ifndef pte_savedwrite #define pte_savedwrite pte_write #endif diff --git a/mm/memory.c b/mm/memory.c index 8bb31c4..c7c8960 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2704,6 +2704,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) } flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte)); entry = mk_pte(new_page, vma->vm_page_prot); + entry = pte_sw_mkyoung(entry); entry = maybe_mkwrite(pte_mkdirty(entry), vma); /* * Clear the pte entry and flush it first, before updating the @@ -3378,6 +3379,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) __SetPageUptodate(page); entry = mk_pte(page, vma->vm_page_prot); + entry = pte_sw_mkyoung(entry); if (vma->vm_flags & VM_WRITE) entry = pte_mkwrite(pte_mkdirty(entry)); @@ -3660,6 +3662,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg, flush_icache_page(vma, page); entry = mk_pte(page, vma->vm_page_prot); + entry = pte_sw_mkyoung(entry); if (write) entry = maybe_mkwrite(pte_mkdirty(entry), vma); /* copy-on-write page */ -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling 2020-05-25 2:52 ` [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling Bibo Mao @ 2020-05-25 21:44 ` Andrew Morton 2020-05-27 1:05 ` maobibo 0 siblings, 1 reply; 13+ messages in thread From: Andrew Morton @ 2020-05-25 21:44 UTC (permalink / raw) To: Bibo Mao Cc: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On Mon, 25 May 2020 10:52:39 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > Here add pte_sw_mkyoung function to make page readable on MIPS > platform during page fault handling. This patch improves page > fault latency about 10% on my MIPS machine with lmbench > lat_pagefault case. > > It is noop function on other arches, there is no negative > influence on those architectures. Acked-by: Andrew Morton <akpm@linux-foundation.org> Should I take these, or would the mips tree be preferred? I'm OK either way, but probably the MIPS tree would be better? ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling 2020-05-25 21:44 ` Andrew Morton @ 2020-05-27 1:05 ` maobibo 2020-05-27 7:10 ` Thomas Bogendoerfer 0 siblings, 1 reply; 13+ messages in thread From: maobibo @ 2020-05-27 1:05 UTC (permalink / raw) To: Andrew Morton Cc: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On 05/26/2020 05:44 AM, Andrew Morton wrote: > On Mon, 25 May 2020 10:52:39 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > >> Here add pte_sw_mkyoung function to make page readable on MIPS >> platform during page fault handling. This patch improves page >> fault latency about 10% on my MIPS machine with lmbench >> lat_pagefault case. >> >> It is noop function on other arches, there is no negative >> influence on those architectures. > > Acked-by: Andrew Morton <akpm@linux-foundation.org> > > Should I take these, or would the mips tree be preferred? I'm OK > either way, but probably the MIPS tree would be better? Thanks for reviewing again and again. This patch is based on mips-next, maybe MIPS tree will be better. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling 2020-05-27 1:05 ` maobibo @ 2020-05-27 7:10 ` Thomas Bogendoerfer 0 siblings, 0 replies; 13+ messages in thread From: Thomas Bogendoerfer @ 2020-05-27 7:10 UTC (permalink / raw) To: maobibo Cc: Andrew Morton, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On Wed, May 27, 2020 at 09:05:41AM +0800, maobibo wrote: > > > On 05/26/2020 05:44 AM, Andrew Morton wrote: > > On Mon, 25 May 2020 10:52:39 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > > > >> Here add pte_sw_mkyoung function to make page readable on MIPS > >> platform during page fault handling. This patch improves page > >> fault latency about 10% on my MIPS machine with lmbench > >> lat_pagefault case. > >> > >> It is noop function on other arches, there is no negative > >> influence on those architectures. > > > > Acked-by: Andrew Morton <akpm@linux-foundation.org> > > > > Should I take these, or would the mips tree be preferred? I'm OK > > either way, but probably the MIPS tree would be better? > Thanks for reviewing again and again. > This patch is based on mips-next, maybe MIPS tree will be better. I'll take your next version then. Thomas. -- Crap can work. Given enough thrust pigs will fly, but it's not necessarily a good idea. [ RFC1925, 2.3 ] ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v6 4/4] MIPS: mm: add page valid judgement in function pte_modify 2020-05-25 2:52 [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Bibo Mao 2020-05-25 2:52 ` [PATCH v6 2/4] mm/memory.c: Update local TLB if PTE entry exists Bibo Mao 2020-05-25 2:52 ` [PATCH v6 3/4] mm/memory.c: Add memory read privilege on page fault handling Bibo Mao @ 2020-05-25 2:52 ` Bibo Mao 2020-05-25 8:12 ` [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Sergei Shtylyov 2020-05-25 21:42 ` Andrew Morton 4 siblings, 0 replies; 13+ messages in thread From: Bibo Mao @ 2020-05-25 2:52 UTC (permalink / raw) To: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand If original PTE has _PAGE_ACCESSED bit set, and new pte has no _PAGE_NO_READ bit set, we can add _PAGE_SILENT_READ bit to enable page valid bit. Signed-off-by: Bibo Mao <maobibo@loongson.cn> --- arch/mips/include/asm/pgtable.h | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h index 0743087..dfe79f4 100644 --- a/arch/mips/include/asm/pgtable.h +++ b/arch/mips/include/asm/pgtable.h @@ -529,8 +529,11 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) #else static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) { - return __pte((pte_val(pte) & _PAGE_CHG_MASK) | - (pgprot_val(newprot) & ~_PAGE_CHG_MASK)); + pte_val(pte) &= _PAGE_CHG_MASK; + pte_val(pte) |= pgprot_val(newprot) & ~_PAGE_CHG_MASK; + if ((pte_val(pte) & _PAGE_ACCESSED) && !(pte_val(pte) & _PAGE_NO_READ)) + pte_val(pte) |= _PAGE_SILENT_READ; + return pte; } #endif -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry 2020-05-25 2:52 [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Bibo Mao ` (2 preceding siblings ...) 2020-05-25 2:52 ` [PATCH v6 4/4] MIPS: mm: add page valid judgement in function pte_modify Bibo Mao @ 2020-05-25 8:12 ` Sergei Shtylyov 2020-05-25 8:31 ` Sergei Shtylyov 2020-05-25 21:42 ` Andrew Morton 4 siblings, 1 reply; 13+ messages in thread From: Sergei Shtylyov @ 2020-05-25 8:12 UTC (permalink / raw) To: Bibo Mao, Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Maciej W. Rozycki, linux-mm, David Hildenbrand Hello! On 25.05.2020 5:52, Bibo Mao wrote: > It is not necessary to flush tlb page on all CPUs if suitable PTE > entry exists already during page fault handling, just updating > TLB is fine. > > Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. Need empty line here. > V6: > - Add update_mmu_tlb function as empty on all platform except mips > system, we use this function to update local tlb for page fault > smp-race handling > V5: > - define update_mmu_cache function specified on MIPS platform, and > add page fault smp-race stats info > V4: > - add pte_sw_mkyoung function to implement readable privilege, and > this function is only in effect on MIPS system. > - add page valid bit judgement in function pte_modify > V3: > - add detailed changelog, modify typo issue in patch V2 > v2: > - split flush_tlb_fix_spurious_fault and tlb update into two patches > - comments typo modification > - separate tlb update and add pte readable privilege into two patches It was a bad idea to keep the version change log in the 1st patch only, we have either cover letter for that, or all the individual patches... > Signed-off-by: Bibo Mao <maobibo@loongson.cn> [...] 
MBR, Sergei ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry 2020-05-25 8:12 ` [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Sergei Shtylyov @ 2020-05-25 8:31 ` Sergei Shtylyov 2020-05-27 1:07 ` maobibo 0 siblings, 1 reply; 13+ messages in thread From: Sergei Shtylyov @ 2020-05-25 8:31 UTC (permalink / raw) To: Bibo Mao, Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Maciej W. Rozycki, linux-mm, David Hildenbrand On 25.05.2020 11:12, Sergei Shtylyov wrote: >> It is not necessary to flush tlb page on all CPUs if suitable PTE >> entry exists already during page fault handling, just updating >> TLB is fine. >> >> Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. > > Need empty line here. > >> V6: >> - Add update_mmu_tlb function as empty on all platform except mips >> system, we use this function to update local tlb for page fault >> smp-race handling >> V5: >> - define update_mmu_cache function specified on MIPS platform, and >> add page fault smp-race stats info >> V4: >> - add pte_sw_mkyoung function to implement readable privilege, and >> this function is only in effect on MIPS system. >> - add page valid bit judgement in function pte_modify >> V3: >> - add detailed changelog, modify typo issue in patch V2 >> v2: >> - split flush_tlb_fix_spurious_fault and tlb update into two patches >> - comments typo modification >> - separate tlb update and add pte readable privilege into two patches > > It was a bad idea to keep the version change log in the 1st patch only, > we have either cover letter for that, or all the individual patches... Sorry for noticing this only now. With 4 patches, you should have a cover letter anyway... >> Signed-off-by: Bibo Mao <maobibo@loongson.cn> > [...] MBR, Sergei ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry 2020-05-25 8:31 ` Sergei Shtylyov @ 2020-05-27 1:07 ` maobibo 0 siblings, 0 replies; 13+ messages in thread From: maobibo @ 2020-05-27 1:07 UTC (permalink / raw) To: Sergei Shtylyov, Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Andrew Morton, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual Cc: linux-mips, linux-kernel, Mike Rapoport, Maciej W. Rozycki, linux-mm, David Hildenbrand On 05/25/2020 04:31 PM, Sergei Shtylyov wrote: > On 25.05.2020 11:12, Sergei Shtylyov wrote: > >>> It is not necessary to flush tlb page on all CPUs if suitable PTE >>> entry exists already during page fault handling, just updating >>> TLB is fine. >>> >>> Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. >> >> Need empty line here. >> >>> V6: >>> - Add update_mmu_tlb function as empty on all platform except mips >>> system, we use this function to update local tlb for page fault >>> smp-race handling >>> V5: >>> - define update_mmu_cache function specified on MIPS platform, and >>> add page fault smp-race stats info >>> V4: >>> - add pte_sw_mkyoung function to implement readable privilege, and >>> this function is only in effect on MIPS system. >>> - add page valid bit judgement in function pte_modify >>> V3: >>> - add detailed changelog, modify typo issue in patch V2 >>> v2: >>> - split flush_tlb_fix_spurious_fault and tlb update into two patches >>> - comments typo modification >>> - separate tlb update and add pte readable privilege into two patches >> >> It was a bad idea to keep the version change log in the 1st patch only, >> we have either cover letter for that, or all the individual patches... > > Sorry for noticing this only now. With 4 patches, you should have a cover letter anyway... Thanks for reviewing my patch, a cover letter will be added. > >>> Signed-off-by: Bibo Mao <maobibo@loongson.cn> >> [...] 
> > MBR, Sergei ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry 2020-05-25 2:52 [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Bibo Mao ` (3 preceding siblings ...) 2020-05-25 8:12 ` [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry Sergei Shtylyov @ 2020-05-25 21:42 ` Andrew Morton 2020-05-27 1:06 ` maobibo 4 siblings, 1 reply; 13+ messages in thread From: Andrew Morton @ 2020-05-25 21:42 UTC (permalink / raw) To: Bibo Mao Cc: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On Mon, 25 May 2020 10:52:37 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > It is not necessary to flush tlb page on all CPUs if suitable PTE > entry exists already during page fault handling, just updating > TLB is fine. > > Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. > > ... > > --- a/arch/mips/include/asm/pgtable.h > +++ b/arch/mips/include/asm/pgtable.h > @@ -478,6 +478,8 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot) > return __pgprot(prot); > } > > +#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0) > + static inline C would be preferred, if that works. For a number of reasons: - looks nicer - more likely to get a code comment (for some reason) - adds typechecking. So right now a MIPS developer could do struct wibble a; struct wobble b; flush_tlb_fix_spurious_fault(&a, &b); and there would be no compiler warning. Then the code gets merged upstream and in come the embarrassing emails! - avoids unused-var warnings foo() { struct address_space a; struct vm_area_struct v; flush_tlb_fix_spurious_fault(&v, &a); } will generate unused-variable warnings if flush_tlb_fix_spurious_fault() is a macro. Making flush_tlb_fix_spurious_fault() inlined C prevents this. 
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH v6 1/4] MIPS: Do not flush tlb page when updating PTE entry 2020-05-25 21:42 ` Andrew Morton @ 2020-05-27 1:06 ` maobibo 0 siblings, 0 replies; 13+ messages in thread From: maobibo @ 2020-05-27 1:06 UTC (permalink / raw) To: Andrew Morton Cc: Thomas Bogendoerfer, Jiaxun Yang, Huacai Chen, Paul Burton, Dmitry Korotin, Philippe Mathieu-Daudé, Stafford Horne, Steven Price, Anshuman Khandual, linux-mips, linux-kernel, Mike Rapoport, Sergei Shtylyov, Maciej W. Rozycki, linux-mm, David Hildenbrand On 05/26/2020 05:42 AM, Andrew Morton wrote: > On Mon, 25 May 2020 10:52:37 +0800 Bibo Mao <maobibo@loongson.cn> wrote: > >> It is not necessary to flush tlb page on all CPUs if suitable PTE >> entry exists already during page fault handling, just updating >> TLB is fine. >> >> Here redefine flush_tlb_fix_spurious_fault as empty on MIPS system. >> >> ... >> >> --- a/arch/mips/include/asm/pgtable.h >> +++ b/arch/mips/include/asm/pgtable.h >> @@ -478,6 +478,8 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot) >> return __pgprot(prot); >> } >> >> +#define flush_tlb_fix_spurious_fault(vma, address) do { } while (0) >> + > > static inline C would be preferred, if that works. For a number of reasons: > > - looks nicer > > - more likely to get a code comment (for some reason) > > - adds typechecking. So right now a MIPS developer could do > > struct wibble a; > struct wobble b; > > flush_tlb_fix_spurious_fault(&a, &b); > > and there would be no compiler warning. Then the code gets merged > upstream and in come the embarrassing emails! > > - avoids unused-var warnings > > foo() > { > struct address_space a; > struct vm_area_struct v; > > flush_tlb_fix_spurious_fault(&v, &a); > } > > will generate unused-variable warnings if > flush_tlb_fix_spurious_fault() is a macro. Making > flush_tlb_fix_spurious_fault() inlined C prevents this. > Sure, I will modify this and send another version. ^ permalink raw reply [flat|nested] 13+ messages in thread