* [PATCH v3 mm 1/2] mm: Define pte_index as macro for x86
From: Arjun Roy @ 2020-02-28 5:47 UTC
To: davem, akpm, linux-mm; +Cc: arjunroy, soheil, edumazet, sfr, geert
From: Arjun Roy <arjunroy@google.com>
pte_index() is either defined as a macro (e.g. sparc64) or as an inline
function (e.g. x86). vm_insert_pages() depends on pte_index(), but it is
not defined on all platforms (e.g. m68k).
To fix compilation of vm_insert_pages() on architectures not providing
pte_index(), we take the following approach:
0. For platforms where it is meaningful, and defined as a macro, no
change is needed.
1. For platforms where it is meaningful and defined as an inline
function, and we want to use it with vm_insert_pages(), we define
a degenerate macro of the form: #define pte_index pte_index
2. vm_insert_pages() checks for the existence of a pte_index macro
definition. If found, it implements a batched insert. If not found,
it falls back to calling vm_insert_page() in a loop.
This patch implements step 1 for x86.
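The macro in step 1 is the standard self-referential-macro idiom: the
macro expands to the function's own name, so call sites are unchanged,
but generic code can now test for its presence with #ifdef. A minimal
standalone sketch of the idiom (hypothetical names, not kernel code):

#include <stdio.h>

/*
 * fast_index() is a hypothetical stand-in for an arch-provided helper
 * like pte_index(). The degenerate macro expands to the function's own
 * name, leaving callers untouched while making it testable via #ifdef.
 */
#define fast_index fast_index
static inline unsigned long fast_index(unsigned long address)
{
	return address >> 12;
}

int main(void)
{
#ifdef fast_index
	/* The arch provides the helper: take the batched/fast path. */
	printf("fast path: index = %lu\n", fast_index(0x5000UL));
#else
	/* Helper missing: take the generic fallback path. */
	printf("fallback path\n");
#endif
	return 0;
}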
v3 of this patch fixes a compilation warning for an unused function.
v2 of this patch moved a macro definition to a more readable location.
Signed-off-by: Arjun Roy <arjunroy@google.com>
---
arch/x86/include/asm/pgtable.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7e118660bbd9..d9925b10e326 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -828,7 +828,10 @@ static inline unsigned long pmd_index(unsigned long address)
*
* this function returns the index of the entry in the pte page which would
* control the given virtual address
+ *
+ * Also define macro so we can test if pte_index is defined for arch.
*/
+#define pte_index pte_index
static inline unsigned long pte_index(unsigned long address)
{
return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
--
2.25.1.481.gfbce0eb801-goog
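As a quick check of the arithmetic in the hunk above, assuming 4 KiB
pages (PAGE_SHIFT == 12) and 512 PTEs per page table
(PTRS_PER_PTE == 512), as on x86-64:

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed: 4 KiB pages */
#define PTRS_PER_PTE	512	/* assumed: x86-64 value */

static unsigned long pte_index(unsigned long address)
{
	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}

int main(void)
{
	/* 0x201000 >> 12 = 0x201; 0x201 & 0x1ff = 1 */
	printf("%lu\n", pte_index(0x201000UL));	/* prints 1 */
	/* Consecutive pages fill consecutive PTE slots until the
	 * 512-entry table wraps; this is what lets insert_pages()
	 * batch PTE writes under a single lock acquisition. */
	printf("%lu\n", pte_index(0x202000UL));	/* prints 2 */
	return 0;
}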
* [PATCH v3 mm 2/2] mm: vm_insert_pages() checks if pte_index defined.
From: Arjun Roy @ 2020-02-28 5:47 UTC
To: davem, akpm, linux-mm; +Cc: arjunroy, soheil, edumazet, sfr, geert
From: Arjun Roy <arjunroy@google.com>
pte_index() is either defined as a macro (e.g. sparc64) or as an inline
function (e.g. x86). vm_insert_pages() depends on pte_index(), but it is
not defined on all platforms (e.g. m68k).
To fix compilation of vm_insert_pages() on architectures not providing
pte_index(), we take the following approach:
0. For platforms where it is meaningful, and defined as a macro, no
change is needed.
1. For platforms where it is meaningful and defined as an inline
function, and we want to use it with vm_insert_pages(), we define
a degenerate macro of the form: #define pte_index pte_index
2. vm_insert_pages() checks for the existence of a pte_index macro
definition. If found, it implements a batched insert. If not found,
it falls back to calling vm_insert_page() in a loop.
This patch implements step 2.
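The batching in step 2 exists to amortize locking: take the page-table
lock once per run of pages rather than once per page. A schematic
illustration in plain C, with a pthread mutex standing in for the PTE
spinlock (nothing below is a kernel API):

#include <pthread.h>
#include <stdio.h>

struct fake_page { unsigned long pfn; };

static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for writing a single PTE while the lock is held. */
static void map_one(const struct fake_page *p)
{
	printf("mapped pfn %lu\n", p->pfn);
}

/* Fallback shape: one lock/unlock round trip per page, as in a
 * vm_insert_page() loop. */
static void insert_unbatched(struct fake_page *pages, unsigned long n)
{
	for (unsigned long i = 0; i < n; i++) {
		pthread_mutex_lock(&ptl);
		map_one(&pages[i]);
		pthread_mutex_unlock(&ptl);
	}
}

/* Batched shape: one lock round trip amortized over the whole run,
 * as in insert_pages(). */
static void insert_batched(struct fake_page *pages, unsigned long n)
{
	pthread_mutex_lock(&ptl);
	for (unsigned long i = 0; i < n; i++)
		map_one(&pages[i]);
	pthread_mutex_unlock(&ptl);
}

int main(void)
{
	struct fake_page pages[4] = { {1}, {2}, {3}, {4} };

	insert_unbatched(pages, 4);
	insert_batched(pages, 4);
	return 0;
}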
v3 of this patch fixes a compilation warning for an unused function.
v2 of this patch moved a macro definition to a more readable location.
Signed-off-by: Arjun Roy <arjunroy@google.com>
---
mm/memory.c | 41 ++++++++++++++++++++++++++++-------------
1 file changed, 28 insertions(+), 13 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index d6f834f7d145..47b28fcc73c2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1460,18 +1460,6 @@ static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
return 0;
}
-static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
- unsigned long addr, struct page *page, pgprot_t prot)
-{
- int err;
-
- if (!page_count(page))
- return -EINVAL;
- err = validate_page_before_insert(page);
- return err ? err : insert_page_into_pte_locked(
- mm, pte_offset_map(pmd, addr), addr, page, prot);
-}
-
/*
* This is the old fallback for page remapping.
*
@@ -1500,8 +1488,21 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
return retval;
}
+#ifdef pte_index
+static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
+ unsigned long addr, struct page *page, pgprot_t prot)
+{
+ int err;
+
+ if (!page_count(page))
+ return -EINVAL;
+ err = validate_page_before_insert(page);
+ return err ? err : insert_page_into_pte_locked(
+ mm, pte_offset_map(pmd, addr), addr, page, prot);
+}
+
/* insert_pages() amortizes the cost of spinlock operations
- * when inserting pages in a loop.
+ * when inserting pages in a loop. Arch *must* define pte_index.
*/
static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
struct page **pages, unsigned long *num, pgprot_t prot)
@@ -1556,6 +1557,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
*num = remaining_pages_total;
return ret;
}
+#endif /* ifdef pte_index */
/**
* vm_insert_pages - insert multiple pages into user vma, batching the pmd lock.
@@ -1575,6 +1577,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
struct page **pages, unsigned long *num)
{
+#ifdef pte_index
const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
if (addr < vma->vm_start || end_addr >= vma->vm_end)
@@ -1586,6 +1589,18 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
}
/* Defer page refcount checking till we're about to map that page. */
return insert_pages(vma, addr, pages, num, vma->vm_page_prot);
+#else
+ unsigned long idx = 0, pgcount = *num;
+ int err = -EINVAL;
+
+ for (; idx < pgcount; ++idx) {
+ err = vm_insert_page(vma, addr + (PAGE_SIZE * idx), pages[idx]);
+ if (err)
+ break;
+ }
+ *num = pgcount - idx;
+ return err;
+#endif /* ifdef pte_index */
}
EXPORT_SYMBOL(vm_insert_pages);
--
2.25.1.481.gfbce0eb801-goog
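For context, a sketch of what a caller of the new entry point could
look like, e.g. from a driver's ->mmap() handler. demo_mmap, demo_buf
and its fields are hypothetical; only vm_insert_pages() and its
contract come from this patch:

/* Hypothetical caller: buf->pages is a pre-allocated struct page
 * array of buf->nr_pages entries. */
static int demo_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct demo_buf *buf = file->private_data;
	unsigned long num = buf->nr_pages;
	int err;

	err = vm_insert_pages(vma, vma->vm_start, buf->pages, &num);
	/* On failure, num holds how many pages were left unmapped. */
	return err;
}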
* Re: [PATCH v3 mm 2/2] mm: vm_insert_pages() checks if pte_index defined.
From: Jason Gunthorpe @ 2020-02-28 13:38 UTC
To: Arjun Roy; +Cc: davem, akpm, linux-mm, arjunroy, soheil, edumazet, sfr, geert
On Thu, Feb 27, 2020 at 09:47:14PM -0800, Arjun Roy wrote:
> diff --git a/mm/memory.c b/mm/memory.c
> index d6f834f7d145..47b28fcc73c2 100644
> +++ b/mm/memory.c
> @@ -1460,18 +1460,6 @@ static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
> return 0;
> }
>
> -static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
> - unsigned long addr, struct page *page, pgprot_t prot)
> -{
> - int err;
> -
> - if (!page_count(page))
> - return -EINVAL;
> - err = validate_page_before_insert(page);
> - return err ? err : insert_page_into_pte_locked(
> - mm, pte_offset_map(pmd, addr), addr, page, prot);
> -}
> -
> /*
> * This is the old fallback for page remapping.
> *
> @@ -1500,8 +1488,21 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> return retval;
> }
>
> +#ifdef pte_index
It seems a bit weird like this; don't we usually do this kind of stuff
with some CONFIG_ARCH_HAS_XX thing?
IMHO all arches should implement pte_index as a static inline; that
has been the general direction lately.
Jason
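The convention referenced above looks roughly like the sketch below;
ARCH_HAS_PTE_INDEX is a hypothetical symbol, not one in the tree:

# mm/Kconfig (sketch): an opt-in symbol that arches select.
config ARCH_HAS_PTE_INDEX
	bool

# arch/x86/Kconfig (sketch): the arch advertises the capability.
config X86
	select ARCH_HAS_PTE_INDEX

The guard in mm/memory.c would then read
"#ifdef CONFIG_ARCH_HAS_PTE_INDEX" instead of "#ifdef pte_index".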
* Re: [PATCH v3 mm 2/2] mm: vm_insert_pages() checks if pte_index defined.
From: Soheil Hassas Yeganeh @ 2020-02-28 17:22 UTC
To: Jason Gunthorpe
Cc: Arjun Roy, David Miller, akpm, linux-mm, Arjun Roy, Eric Dumazet,
Stephen Rothwell, geert
On Fri, Feb 28, 2020 at 8:38 AM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Thu, Feb 27, 2020 at 09:47:14PM -0800, Arjun Roy wrote:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d6f834f7d145..47b28fcc73c2 100644
> > +++ b/mm/memory.c
> > @@ -1460,18 +1460,6 @@ static int insert_page_into_pte_locked(struct mm_struct *mm, pte_t *pte,
> > return 0;
> > }
> >
> > -static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
> > - unsigned long addr, struct page *page, pgprot_t prot)
> > -{
> > - int err;
> > -
> > - if (!page_count(page))
> > - return -EINVAL;
> > - err = validate_page_before_insert(page);
> > - return err ? err : insert_page_into_pte_locked(
> > - mm, pte_offset_map(pmd, addr), addr, page, prot);
> > -}
> > -
> > /*
> > * This is the old fallback for page remapping.
> > *
> > @@ -1500,8 +1488,21 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
> > return retval;
> > }
> >
> > +#ifdef pte_index
>
> It seems a bit weird like this, don't we usually do this kind of stuff
> with some CONFIG_ARCH_HAS_XX thing?
>
> IMHO all arches should implement pte_index as the static inline, that
> has been the general direction lately.
Based on a comment from Stephen Rothwell, we found out that "static
inline" definitions of pte_index() are only used in tile and x86.
That's why Arjun opted for this method: it makes for a smaller patch
series to fix the build breakage.
Thanks,
Soheil
> Jason