* [merged] mm-thp-use-more-portable-pmd-clearing-sequenece-in-zap_huge_pmd.patch removed from -mm tree
@ 2012-10-09 18:15 akpm
From: akpm @ 2012-10-09 18:15 UTC (permalink / raw)
To: davem, aarcange, gerald.schaefer, hannes, mm-commits
The patch titled
Subject: mm: thp: Use more portable PMD clearing sequenece in zap_huge_pmd().
has been removed from the -mm tree. Its filename was
mm-thp-use-more-portable-pmd-clearing-sequenece-in-zap_huge_pmd.patch
This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: David Miller <davem@davemloft.net>
Subject: mm: thp: Use more portable PMD clearing sequenece in zap_huge_pmd().
Invalidation sequences are handled in various ways on various
architectures.
One way, which sparc64 uses, is to let the set_*_at() functions accumulate
pending flushes into a per-cpu array. Then the flush_tlb_range() et al.
calls process the pending TLB flushes.
In this regime, the __tlb_remove_*tlb_entry() implementations are
essentially NOPs.
The canonical PTE zap in mm/memory.c is:
	ptent = ptep_get_and_clear_full(mm, addr, pte,
					tlb->fullmm);
	tlb_remove_tlb_entry(tlb, pte, addr);
With a subsequent tlb_flush_mmu() if needed.
Mirror this in the THP PMD zapping using:
	orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
	page = pmd_page(orig_pmd);
	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
And we properly accommodate TLB flush mechanisms like the one described
above.
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/huge_memory.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff -puN mm/huge_memory.c~mm-thp-use-more-portable-pmd-clearing-sequenece-in-zap_huge_pmd mm/huge_memory.c
--- a/mm/huge_memory.c~mm-thp-use-more-portable-pmd-clearing-sequenece-in-zap_huge_pmd
+++ a/mm/huge_memory.c
@@ -1024,9 +1024,10 @@ int zap_huge_pmd(struct mmu_gather *tlb,
 	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
 		struct page *page;
 		pgtable_t pgtable;
+		pmd_t orig_pmd;
 		pgtable = pgtable_trans_huge_withdraw(tlb->mm);
-		page = pmd_page(*pmd);
-		pmd_clear(pmd);
+		orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
+		page = pmd_page(orig_pmd);
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 		page_remove_rmap(page);
 		VM_BUG_ON(page_mapcount(page) < 0);
_
Patches currently in -mm which might be from davem@davemloft.net are
origin.patch
linux-next.patch
compat-generic-compat_sys_sched_rr_get_interval-implementation.patch