From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	stable@vger.kernel.org
Subject: [PATCH 2/6] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v3
Date: Fri, 19 Oct 2018 12:04:38 -0400
Message-ID: <20181019160442.18723-3-jglisse@redhat.com>
In-Reply-To: <20181019160442.18723-1-jglisse@redhat.com>

From: Ralph Campbell <rcampbell@nvidia.com>

Private ZONE_DEVICE pages use a special pte entry and thus are not
present. Properly handle this case in map_pte(); it is already handled
in check_pte(), and the map_pte() part was most probably lost in a
rebase.

Without this patch the slow migration path cannot migrate private
ZONE_DEVICE memory back to regular memory. This was found after stress
testing migration back to system memory. The bug can ultimately lead to
the CPU constantly page-faulting in a loop on the special swap entry.

Changes since v2:
    - add comments explaining what is going on

Changes since v1:
    - properly lock pte directory in map_pte()

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: stable@vger.kernel.org
---
 mm/page_vma_mapped.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae3c2a35d61b..11df03e71288 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 			if (!is_swap_pte(*pvmw->pte))
 				return false;
 		} else {
-			if (!pte_present(*pvmw->pte))
+			/*
+			 * We get here when we are trying to unmap a private
+			 * device page from the process address space. Such
+			 * a page is not CPU accessible and thus is mapped as
+			 * a special swap entry; nonetheless it still counts
+			 * as a valid regular mapping for the page (and is
+			 * accounted as such in the page's map count).
+			 *
+			 * So handle this special case as if it were a normal
+			 * page mapping, i.e. lock the CPU page table and
+			 * return true.
+			 *
+			 * For more details on device private memory see HMM
+			 * (include/linux/hmm.h or mm/hmm.c).
+			 */
+			if (is_swap_pte(*pvmw->pte)) {
+				swp_entry_t entry;
+
+				/* Handle un-addressable ZONE_DEVICE memory */
+				entry = pte_to_swp_entry(*pvmw->pte);
+				if (!is_device_private_entry(entry))
+					return false;
+			} else if (!pte_present(*pvmw->pte))
 				return false;
 		}
 	}
-- 
2.17.2
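Why the extra branch is needed: a private ZONE_DEVICE page is not CPU
addressable, so its pte is written in swap-entry format and
pte_present() returns false for it even though the page is genuinely
mapped. The sketch below condenses the classification the patch adds to
map_pte(). It is a hypothetical standalone helper written for
illustration only — the function name is made up and it is not part of
the patch — but the swapops.h helpers it calls are the real kernel APIs
used in the diff above.

	#include <linux/swapops.h>	/* is_swap_pte(), pte_to_swp_entry(), ... */

	/*
	 * Illustrative sketch, not from the patch: classify a pte the
	 * way the fixed map_pte() does.  Device-private pages hide
	 * behind swap-format ptes, so pte_present() alone would
	 * wrongly reject them.
	 */
	static bool pte_counts_as_mapping(pte_t pte)
	{
		swp_entry_t entry;

		if (pte_present(pte))
			return true;	/* ordinary CPU-accessible mapping */
		if (!is_swap_pte(pte))
			return false;	/* pte_none(), not a mapping */

		entry = pte_to_swp_entry(pte);
		/* un-addressable ZONE_DEVICE memory still counts as mapped */
		return is_device_private_entry(entry);
	}

Before the fix, the else branch rejected every non-present pte, so the
reverse-map walk never matched the device-private mapping, the unmap
step of the slow migration path could not clear it, and the CPU kept
faulting on the special entry — the loop described in the commit
message.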
Thread overview (15+ messages):

2018-10-19 16:04 [PATCH 0/6] HMM updates, improvements and fixes v2 jglisse
2018-10-19 16:04 ` [PATCH 1/6] mm/hmm: fix utf8 jglisse
2018-10-19 16:04 ` [PATCH 2/6] mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly v3 jglisse [this message]
2018-10-19 16:04 ` [PATCH 3/6] mm/hmm: fix race between hmm_mirror_unregister() and mmu_notifier callback jglisse
2018-10-24 23:10   ` Andrew Morton
2018-10-19 16:04 ` [PATCH 4/6] mm/hmm: properly handle migration pmd v3 jglisse
2018-10-19 16:04 ` [PATCH 5/6] mm/hmm: use a structure for update callback parameters v2 jglisse
2018-10-19 16:04 ` [PATCH 6/6] mm/hmm: invalidate device page table at start of invalidation jglisse