From: Ralph Campbell <rcampbell@nvidia.com>
To: <linux-rdma@vger.kernel.org>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>, <nouveau@lists.freedesktop.org>,
	<linux-kselftest@vger.kernel.org>
Cc: Jerome Glisse <jglisse@redhat.com>, John Hubbard <jhubbard@nvidia.com>,
	Christoph Hellwig <hch@lst.de>, Jason Gunthorpe <jgg@mellanox.com>,
	Andrew Morton <akpm@linux-foundation.org>, Ben Skeggs <bskeggs@redhat.com>,
	Shuah Khan <shuah@kernel.org>, Ralph Campbell <rcampbell@nvidia.com>
Subject: [PATCH v6 3/6] mm/notifier: add mmu_interval_notifier_update()
Date: Mon, 13 Jan 2020 14:47:00 -0800
Message-ID: <20200113224703.5917-4-rcampbell@nvidia.com> (raw)
In-Reply-To: <20200113224703.5917-1-rcampbell@nvidia.com>

When using mmu interval notifiers, the underlying VMA range that the
interval is tracking might be expanded or shrunk. Add a helper function
to allow the interval to be updated safely from the ops->invalidate()
callback in response to VMA range changes. This is effectively the same
as removing and re-inserting the mmu interval notifier, but is more
efficient.
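To illustrate the intended call site, a driver's ops->invalidate() callback could narrow its tracked range roughly as below. This is a hypothetical sketch, not part of the patch: `struct mirror` and `mirror_invalidate()` are invented names, and the shrink condition is simplified for illustration.

```c
/* Hypothetical driver callback; only the mmu_interval_notifier_update()
 * call reflects the API added by this patch. */
static bool mirror_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	mmu_interval_set_seq(mni, cur_seq);

	/* If the tail of the tracked VMA is being unmapped, shrink the
	 * monitored interval in place instead of remove + re-insert. */
	if (range->event == MMU_NOTIFY_UNMAP &&
	    range->start > mni->interval_tree.start &&
	    range->start <= mni->interval_tree.last)
		mmu_interval_notifier_update(mni, mni->interval_tree.start,
					     range->start - 1);
	return true;
}
```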
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/mmu_notifier.h |  4 +++
 mm/mmu_notifier.c            | 69 ++++++++++++++++++++++++++++++++++--
 2 files changed, 70 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 6dcaa632eef7..0ce59b4f22c2 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -251,6 +251,8 @@ struct mmu_interval_notifier {
 	struct mm_struct *mm;
 	struct hlist_node deferred_item;
 	unsigned long invalidate_seq;
+	unsigned long updated_start;
+	unsigned long updated_last;
 };
 
 #ifdef CONFIG_MMU_NOTIFIER
@@ -310,6 +312,8 @@ int mmu_interval_notifier_insert_safe(
 		const struct mmu_interval_notifier_ops *ops);
 void mmu_interval_notifier_remove(struct mmu_interval_notifier *mni);
 void mmu_interval_notifier_put(struct mmu_interval_notifier *mni);
+void mmu_interval_notifier_update(struct mmu_interval_notifier *mni,
+		unsigned long start, unsigned long last);
 
 /**
  * mmu_interval_set_seq - Save the invalidation sequence
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 40c837ae8d90..47ad9cc89aab 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -157,7 +157,14 @@ static void mn_itree_inv_end(struct mmu_notifier_mm *mmn_mm)
 		else {
 			interval_tree_remove(&mni->interval_tree,
 					     &mmn_mm->itree);
-			if (mni->ops->release)
+			if (mni->updated_last) {
+				mni->interval_tree.start = mni->updated_start;
+				mni->interval_tree.last = mni->updated_last;
+				mni->updated_start = 0;
+				mni->updated_last = 0;
+				interval_tree_insert(&mni->interval_tree,
+						     &mmn_mm->itree);
+			} else if (mni->ops->release)
 				hlist_add_head(&mni->deferred_item,
 					       &removed_list);
 		}
@@ -872,6 +879,8 @@ static int __mmu_interval_notifier_insert(
 	mni->ops = ops;
 	RB_CLEAR_NODE(&mni->interval_tree.rb);
 	mni->interval_tree.start = start;
+	mni->updated_start = 0;
+	mni->updated_last = 0;
 	/*
 	 * Note that the representation of the intervals in the interval tree
 	 * considers the ending point as contained in the interval.
@@ -1038,8 +1047,12 @@ static unsigned long __mmu_interval_notifier_put(
 		if (RB_EMPTY_NODE(&mni->interval_tree.rb)) {
 			hlist_del(&mni->deferred_item);
 		} else {
-			hlist_add_head(&mni->deferred_item,
-				       &mmn_mm->deferred_list);
+			if (mni->updated_last) {
+				mni->updated_start = 0;
+				mni->updated_last = 0;
+			} else
+				hlist_add_head(&mni->deferred_item,
+					       &mmn_mm->deferred_list);
 			seq = mmn_mm->invalidate_seq;
 		}
 	} else {
@@ -1108,6 +1121,56 @@ void mmu_interval_notifier_put(struct mmu_interval_notifier *mni)
 }
 EXPORT_SYMBOL_GPL(mmu_interval_notifier_put);
 
+/**
+ * mmu_interval_notifier_update - Update interval notifier range
+ * @mni: Interval notifier to update
+ * @start: New starting virtual address to monitor
+ * @last: New last virtual address (inclusive) to monitor
+ *
+ * This function updates the range being monitored and is safe to call from
+ * the invalidate() callback function.
+ * Upon return, the mmu_interval_notifier range may not be updated in the
+ * interval tree yet. The caller must use the normal interval notifier read
+ * flow via mmu_interval_read_begin() to establish SPTEs for this range.
+ */
+void mmu_interval_notifier_update(struct mmu_interval_notifier *mni,
+		unsigned long start, unsigned long last)
+{
+	struct mm_struct *mm = mni->mm;
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+	unsigned long seq = 0;
+
+	if (WARN_ON(start >= last))
+		return;
+
+	spin_lock(&mmn_mm->lock);
+	if (mn_itree_is_invalidating(mmn_mm)) {
+		/*
+		 * Update is being called after insert put this on the
+		 * deferred list, but before the deferred list was processed.
+		 */
+		if (RB_EMPTY_NODE(&mni->interval_tree.rb)) {
+			mni->interval_tree.start = start;
+			mni->interval_tree.last = last;
+		} else {
+			if (!mni->updated_last)
+				hlist_add_head(&mni->deferred_item,
+					       &mmn_mm->deferred_list);
+			mni->updated_start = start;
+			mni->updated_last = last;
+		}
+		seq = mmn_mm->invalidate_seq;
+	} else {
+		WARN_ON(RB_EMPTY_NODE(&mni->interval_tree.rb));
+		interval_tree_remove(&mni->interval_tree, &mmn_mm->itree);
+		mni->interval_tree.start = start;
+		mni->interval_tree.last = last;
+		interval_tree_insert(&mni->interval_tree, &mmn_mm->itree);
+	}
+	spin_unlock(&mmn_mm->lock);
+}
+EXPORT_SYMBOL_GPL(mmu_interval_notifier_update);
+
 /**
  * mmu_notifier_synchronize - Ensure all mmu_notifiers are freed
  *
-- 
2.20.1
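The deferred-update bookkeeping in the patch can be modeled in ordinary user-space C. The sketch below uses invented names (`itn`, `itn_update`, `itn_inv_end`) and omits the interval tree and locking; as in the patch, a nonzero `updated_last` doubles as the "update pending" flag.

```c
#include <assert.h>

/* Minimal stand-in for the notifier's range bookkeeping. */
struct itn {
	unsigned long start, last;                 /* range in the "tree" */
	unsigned long updated_start, updated_last; /* deferred update     */
};

/* Model of mmu_interval_notifier_update(): apply the new range
 * immediately unless an invalidation is in progress, in which case
 * stash it to be applied when the invalidation ends. */
static void itn_update(struct itn *n, int invalidating,
		       unsigned long start, unsigned long last)
{
	if (start >= last)
		return;                  /* mirrors the WARN_ON check */
	if (invalidating) {
		n->updated_start = start; /* deferred until inv_end */
		n->updated_last = last;
	} else {
		n->start = start;         /* remove + re-insert path */
		n->last = last;
	}
}

/* Model of the mn_itree_inv_end() change: apply any deferred range. */
static void itn_inv_end(struct itn *n)
{
	if (n->updated_last) {
		n->start = n->updated_start;
		n->last = n->updated_last;
		n->updated_start = 0;
		n->updated_last = 0;
	}
}
```

A caller that grows the range while an invalidation is running sees the old range until `itn_inv_end()` applies the deferred values, matching the "may not be updated in the interval tree yet" note in the kernel-doc above.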