From: Aaron Lu <aaron.lu@intel.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Dave Hansen <dave.hansen@intel.com>, Tim Chen <tim.c.chen@intel.com>, Andrew Morton <akpm@linux-foundation.org>, Ying Huang <ying.huang@intel.com>, Aaron Lu <aaron.lu@intel.com>
Subject: [PATCH v2 4/5] mm: add force_free_pages in zap_pte_range
Date: Wed, 15 Mar 2017 17:00:03 +0800
Message-ID: <1489568404-7817-5-git-send-email-aaron.lu@intel.com>
In-Reply-To: <1489568404-7817-1-git-send-email-aaron.lu@intel.com>

force_flush in zap_pte_range() is set under the following two conditions:

1. When no more batches can be allocated to store the to-be-freed page
   pointers (either because memory allocation failed or because
   MAX_GATHER_BATCH_COUNT has been reached);

2. When a TLB-only flush is needed before dropping the PTE lock to avoid
   a race condition, as explained in commit 1cf35d47712d ("mm: split
   'tlb_flush_mmu()' into tlb flushing and memory freeing parts").

Once force_flush is set, the pages accumulated so far are all freed.
Since there is no need to free pages for condition 2, add a new variable
named force_free_pages to decide whether pages should be freed; it is
set only for condition 1. With this change, page accumulation is no
longer interrupted by condition 2. At the same time, rename force_flush
to force_flush_tlb, since it now covers condition 2 only.
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
---
 mm/memory.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 19b25bb5f45b..83b38823aaba 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1199,7 +1199,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 				struct zap_details *details)
 {
 	struct mm_struct *mm = tlb->mm;
-	int force_flush = 0;
+	int force_flush_tlb = 0, force_free_pages = 0;
 	int rss[NR_MM_COUNTERS];
 	spinlock_t *ptl;
 	pte_t *start_pte;
@@ -1239,7 +1239,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (!PageAnon(page)) {
 				if (pte_dirty(ptent)) {
-					force_flush = 1;
+					force_flush_tlb = 1;
 					set_page_dirty(page);
 				}
 				if (pte_young(ptent) &&
@@ -1251,7 +1251,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			if (unlikely(page_mapcount(page) < 0))
 				print_bad_pte(vma, addr, ptent, page);
 			if (unlikely(__tlb_remove_page(tlb, page))) {
-				force_flush = 1;
+				force_free_pages = 1;
 				addr += PAGE_SIZE;
 				break;
 			}
@@ -1279,18 +1279,14 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	arch_leave_lazy_mmu_mode();

 	/* Do the actual TLB flush before dropping ptl */
-	if (force_flush)
+	if (force_flush_tlb) {
+		force_flush_tlb = 0;
 		tlb_flush_mmu_tlbonly(tlb);
+	}
 	pte_unmap_unlock(start_pte, ptl);

-	/*
-	 * If we forced a TLB flush (either due to running out of
-	 * batch buffers or because we needed to flush dirty TLB
-	 * entries before releasing the ptl), free the batched
-	 * memory too. Restart if we didn't do everything.
-	 */
-	if (force_flush) {
-		force_flush = 0;
+	if (force_free_pages) {
+		force_free_pages = 0;
 		tlb_flush_mmu_free(tlb);
 		if (addr != end)
 			goto again;
--
2.7.4