From: "Jérôme Glisse" <jglisse@redhat.com>
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: "Linus Torvalds" <torvalds@linux-foundation.org>, joro@8bytes.org,
	"Mel Gorman" <mgorman@suse.de>, "H. Peter Anvin" <hpa@zytor.com>,
	"Peter Zijlstra" <peterz@infradead.org>,
	"Andrea Arcangeli" <aarcange@redhat.com>,
	"Johannes Weiner" <jweiner@redhat.com>,
	"Larry Woodman" <lwoodman@redhat.com>, "Rik van Riel" <riel@redhat.com>,
	"Dave Airlie" <airlied@redhat.com>, "Brendan Conoboy" <blc@redhat.com>,
	"Joe Donohue" <jdonohue@redhat.com>, "Christophe Harle" <charle@nvidia.com>,
	"Duncan Poole" <dpoole@nvidia.com>, "Sherry Cheung" <SCheung@nvidia.com>,
	"Subhash Gutti" <sgutti@nvidia.com>, "John Hubbard" <jhubbard@nvidia.com>,
	"Mark Hairgrove" <mhairgrove@nvidia.com>,
	"Lucien Dunning" <ldunning@nvidia.com>,
	"Cameron Buschardt" <cabuschardt@nvidia.com>,
	"Arvind Gopalakrishnan" <arvindg@nvidia.com>,
	"Haggai Eran" <haggaie@mellanox.com>,
	"Shachar Raindel" <raindel@mellanox.com>, "Liran Liss" <liranl@mellanox.com>,
	"Roland Dreier" <roland@purestorage.com>, "Ben Sander" <ben.sander@amd.com>,
	"Greg Stoner" <Greg.Stoner@amd.com>, "John Bridgman" <John.Bridgman@amd.com>,
	"Michael Mantor" <Michael.Mantor@amd.com>, "Paul Blinzer" <Paul.Blinzer@amd.com>,
	"Leonid Shamis" <Leonid.Shamis@amd.com>,
	"Laurent Morichetti" <Laurent.Morichetti@amd.com>,
	"Alexander Deucher" <Alexander.Deucher@amd.com>,
	"Jérôme Glisse" <jglisse@redhat.com>
Subject: [PATCH v12 10/29] HMM: use CPU page table during invalidation.
Date: Tue, 8 Mar 2016 15:43:03 -0500
Message-ID: <1457469802-11850-11-git-send-email-jglisse@redhat.com>
In-Reply-To: <1457469802-11850-1-git-send-email-jglisse@redhat.com>

Once we store the DMA mapping inside the secondary page table, we can no
longer easily find the page backing an address.
Instead, use the CPU page table, which still has the proper information. The
one exception is the invalidate_page() case, which is handled by using the
page passed in by the mmu_notifier layer.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
---
 mm/hmm.c | 53 +++++++++++++++++++++++++++++++++++------------------
 1 file changed, 35 insertions(+), 18 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 74e429a..7b6ba6a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -47,9 +47,11 @@ static struct mmu_notifier_ops hmm_notifier_ops;
 static void hmm_mirror_kill(struct hmm_mirror *mirror);
 static inline int hmm_mirror_update(struct hmm_mirror *mirror,
-				    struct hmm_event *event);
+				    struct hmm_event *event,
+				    struct page *page);
 static void hmm_mirror_update_pt(struct hmm_mirror *mirror,
-				 struct hmm_event *event);
+				 struct hmm_event *event,
+				 struct page *page);
 
 /* hmm_event - use to track information relating to an event.
@@ -223,7 +225,9 @@ again:
 	}
 }
 
-static void hmm_update(struct hmm *hmm, struct hmm_event *event)
+static void hmm_update(struct hmm *hmm,
+		       struct hmm_event *event,
+		       struct page *page)
 {
 	struct hmm_mirror *mirror;
@@ -236,7 +240,7 @@ static void hmm_update(struct hmm *hmm, struct hmm_event *event)
 again:
 	down_read(&hmm->rwsem);
 	hlist_for_each_entry(mirror, &hmm->mirrors, mlist)
-		if (hmm_mirror_update(mirror, event)) {
+		if (hmm_mirror_update(mirror, event, page)) {
 			mirror = hmm_mirror_ref(mirror);
 			up_read(&hmm->rwsem);
 			hmm_mirror_kill(mirror);
@@ -304,7 +308,7 @@ static void hmm_notifier_release(struct mmu_notifier *mn, struct mm_struct *mm)
 		/* Make sure everything is unmapped. */
 		hmm_event_init(&event, mirror->hmm, 0, -1UL, HMM_MUNMAP);
-		hmm_mirror_update(mirror, &event);
+		hmm_mirror_update(mirror, &event, NULL);
 		mirror->device->ops->release(mirror);
 		hmm_mirror_unref(&mirror);
@@ -338,9 +342,10 @@ static void hmm_mmu_mprot_to_etype(struct mm_struct *mm,
 	*etype = HMM_NONE;
 }
 
-static void hmm_notifier_invalidate_range_start(struct mmu_notifier *mn,
-						struct mm_struct *mm,
-						const struct mmu_notifier_range *range)
+static void hmm_notifier_invalidate(struct mmu_notifier *mn,
+				    struct mm_struct *mm,
+				    struct page *page,
+				    const struct mmu_notifier_range *range)
 {
 	struct hmm_event event;
 	unsigned long start = range->start, end = range->end;
@@ -382,7 +387,14 @@ static void hmm_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	hmm_event_init(&event, hmm, start, end, event.etype);
 
-	hmm_update(hmm, &event);
+	hmm_update(hmm, &event, page);
+}
+
+static void hmm_notifier_invalidate_range_start(struct mmu_notifier *mn,
+						struct mm_struct *mm,
+						const struct mmu_notifier_range *range)
+{
+	hmm_notifier_invalidate(mn, mm, NULL, range);
 }
 
 static void hmm_notifier_invalidate_page(struct mmu_notifier *mn,
@@ -396,7 +408,7 @@ static void hmm_notifier_invalidate_page(struct mmu_notifier *mn,
 	range.start = addr & PAGE_MASK;
 	range.end = range.start + PAGE_SIZE;
 	range.event = mmu_event;
-	hmm_notifier_invalidate_range_start(mn, mm, &range);
+	hmm_notifier_invalidate(mn, mm, page, &range);
 }
 
 static struct mmu_notifier_ops hmm_notifier_ops = {
@@ -554,23 +566,27 @@ void hmm_mirror_unref(struct hmm_mirror **mirror)
 EXPORT_SYMBOL(hmm_mirror_unref);
 
 static inline int hmm_mirror_update(struct hmm_mirror *mirror,
-				    struct hmm_event *event)
+				    struct hmm_event *event,
+				    struct page *page)
 {
 	struct hmm_device *device = mirror->device;
 	int ret = 0;
 
 	ret = device->ops->update(mirror, event);
-	hmm_mirror_update_pt(mirror, event);
+	hmm_mirror_update_pt(mirror, event, page);
 	return ret;
 }
 
 static void hmm_mirror_update_pt(struct hmm_mirror *mirror,
-				 struct hmm_event *event)
+				 struct hmm_event *event,
+				 struct page *page)
 {
 	unsigned long addr;
 	struct hmm_pt_iter iter;
+	struct mm_pt_iter mm_iter;
 
 	hmm_pt_iter_init(&iter, &mirror->pt);
+	mm_pt_iter_init(&mm_iter, mirror->hmm->mm);
 	for (addr = event->start; addr != event->end;) {
 		unsigned long next = event->end;
 		dma_addr_t *hmm_pte;
@@ -591,10 +607,10 @@ static void hmm_mirror_update_pt(struct hmm_mirror *mirror,
 				continue;
 			if (hmm_pte_test_and_clear_dirty(hmm_pte) &&
 			    hmm_pte_test_write(hmm_pte)) {
-				struct page *page;
-
-				page = pfn_to_page(hmm_pte_pfn(*hmm_pte));
-				set_page_dirty(page);
+				page = page ? : mm_pt_iter_page(&mm_iter, addr);
+				if (page)
+					set_page_dirty(page);
+				page = NULL;
 			}
 			*hmm_pte &= event->pte_mask;
 			if (hmm_pte_test_valid_pfn(hmm_pte))
@@ -604,6 +620,7 @@ static void hmm_mirror_update_pt(struct hmm_mirror *mirror,
 		hmm_pt_iter_directory_unlock(&iter);
 	}
 	hmm_pt_iter_fini(&iter);
+	mm_pt_iter_fini(&mm_iter);
 }
 
 static inline bool hmm_mirror_is_dead(struct hmm_mirror *mirror)
@@ -1004,7 +1021,7 @@ static void hmm_mirror_kill(struct hmm_mirror *mirror)
 	/* Make sure everything is unmapped. */
 	hmm_event_init(&event, mirror->hmm, 0, -1UL, HMM_MUNMAP);
-	hmm_mirror_update(mirror, &event);
+	hmm_mirror_update(mirror, &event, NULL);
 	device->ops->release(mirror);
 	hmm_mirror_unref(&mirror);
-- 
2.4.3