From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Souptick Joarder <jrdr.linux@gmail.com>, Matthew Wilcox <willy@infradead.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>, David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>, Chris Wilson <chris@chris-wilson.co.uk>,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>, Matthew Auld <matthew.auld@intel.com>,
	<intel-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>,
	LKML <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH v2 1/4] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
Date: Thu, 21 May 2020 22:19:28 -0700
Message-ID: <20200522051931.54191-2-jhubbard@nvidia.com>
In-Reply-To: <20200522051931.54191-1-jhubbard@nvidia.com>

This is in order to avoid a forward declaration of
internal_get_user_pages_fast(), in the next patch.

This is code movement only--all generated code should be identical.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/gup.c | 112 +++++++++++++++++++++++++++----------------------------
 1 file changed, 56 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 50cd9323efff..4502846d57f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2666,62 +2666,6 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
-			  struct page **pages)
-{
-	unsigned long len, end;
-	unsigned long flags;
-	int nr_pinned = 0;
-	/*
-	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-	 * because gup fast is always a "pin with a +1 page refcount" request.
-	 */
-	unsigned int gup_flags = FOLL_GET;
-
-	if (write)
-		gup_flags |= FOLL_WRITE;
-
-	start = untagged_addr(start) & PAGE_MASK;
-	len = (unsigned long) nr_pages << PAGE_SHIFT;
-	end = start + len;
-
-	if (end <= start)
-		return 0;
-	if (unlikely(!access_ok((void __user *)start, len)))
-		return 0;
-
-	/*
-	 * Disable interrupts. We use the nested form as we can already have
-	 * interrupts disabled by get_futex_key.
-	 *
-	 * With interrupts disabled, we block page table pages from being
-	 * freed from under us. See struct mmu_table_batch comments in
-	 * include/asm-generic/tlb.h for more details.
-	 *
-	 * We do not adopt an rcu_read_lock(.) here as we also want to
-	 * block IPIs that come from THPs splitting.
-	 */
-
-	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-	    gup_fast_permitted(start, end)) {
-		local_irq_save(flags);
-		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-		local_irq_restore(flags);
-	}
-
-	return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
 				   unsigned int gup_flags, struct page **pages)
 {
@@ -2794,6 +2738,62 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 	return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			  struct page **pages)
+{
+	unsigned long len, end;
+	unsigned long flags;
+	int nr_pinned = 0;
+	/*
+	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+	 * because gup fast is always a "pin with a +1 page refcount" request.
+	 */
+	unsigned int gup_flags = FOLL_GET;
+
+	if (write)
+		gup_flags |= FOLL_WRITE;
+
+	start = untagged_addr(start) & PAGE_MASK;
+	len = (unsigned long) nr_pages << PAGE_SHIFT;
+	end = start + len;
+
+	if (end <= start)
+		return 0;
+	if (unlikely(!access_ok((void __user *)start, len)))
+		return 0;
+
+	/*
+	 * Disable interrupts. We use the nested form as we can already have
+	 * interrupts disabled by get_futex_key.
+	 *
+	 * With interrupts disabled, we block page table pages from being
+	 * freed from under us. See struct mmu_table_batch comments in
+	 * include/asm-generic/tlb.h for more details.
+	 *
+	 * We do not adopt an rcu_read_lock(.) here as we also want to
+	 * block IPIs that come from THPs splitting.
+	 */
+
+	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+	    gup_fast_permitted(start, end)) {
+		local_irq_save(flags);
+		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+		local_irq_restore(flags);
+	}
+
+	return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:	starting user address
-- 
2.26.2
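For illustration only, and not part of the patch itself: the forward declaration
that this code movement makes unnecessary would have looked roughly like the
sketch below. The full parameter list of internal_get_user_pages_fast() is an
assumption here, since only its opening line is visible in the hunk header above.

	/*
	 * Hypothetical sketch: the declaration the next patch would otherwise
	 * have needed near the top of mm/gup.c (parameter list assumed).
	 */
	static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
						unsigned int gup_flags,
						struct page **pages);

With __get_user_pages_fast() now placed after internal_get_user_pages_fast() in
gup.c, the next patch can have it call the internal helper directly, with no
separate declaration needed.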