* [PATCH v2 mm 1/2] mm: Define pte_index as macro for x86
From: Arjun Roy @ 2020-02-28  3:03 UTC
  To: davem, akpm, linux-mm; +Cc: arjunroy, soheil, edumazet, sfr, geert

From: Arjun Roy <arjunroy@google.com>

pte_index() is either defined as a macro (e.g. sparc64) or as an
inline function (e.g. x86), and some platforms (e.g. m68k) do not
define it at all. vm_insert_pages() depends on pte_index().

To fix compilation of vm_insert_pages() on architectures that do not
provide pte_index(), we take the following approach:

0. For platforms where it is meaningful and defined as a macro, no
   change is needed.
1. For platforms where it is meaningful and defined as an inline
   function, and we want to use it with vm_insert_pages(), we define
   a degenerate macro of the form:  #define pte_index pte_index
2. vm_insert_pages() checks for the existence of a pte_index macro
   definition. If found, it implements a batched insert; if not, it
   falls back to calling vm_insert_page() in a loop (see the sketch
   after this list).
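
For illustration, the compile-time detection pattern looks like this
(a minimal sketch of the pattern, not code from either patch):

   /* arch header (e.g. arch/x86/include/asm/pgtable.h): provide the
    * inline function plus a same-named macro, so generic code can
    * detect the function with the preprocessor.
    */
   #define pte_index pte_index
   static inline unsigned long pte_index(unsigned long address)
   {
           return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
   }

   /* generic code (e.g. mm/memory.c): choose at compile time. */
   #ifdef pte_index
   /* pte_index() is available: use the batched implementation. */
   #else
   /* no pte_index(): fall back to vm_insert_page() in a loop. */
   #endif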

This patch implements step 1 for x86.

Signed-off-by: Arjun Roy <arjunroy@google.com>
---
 arch/x86/include/asm/pgtable.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 7e118660bbd9..d9925b10e326 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -828,7 +828,10 @@ static inline unsigned long pmd_index(unsigned long address)
  *
  * this function returns the index of the entry in the pte page which would
  * control the given virtual address
+ *
 + * Also define a macro so callers can test whether pte_index is defined.
  */
+#define pte_index pte_index
 static inline unsigned long pte_index(unsigned long address)
 {
 	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
-- 
2.25.1.481.gfbce0eb801-goog




* [PATCH v2 mm 2/2] mm: vm_insert_pages() checks if pte_index defined.
From: Arjun Roy @ 2020-02-28  3:03 UTC
  To: davem, akpm, linux-mm; +Cc: arjunroy, soheil, edumazet, sfr, geert

From: Arjun Roy <arjunroy@google.com>

pte_index() is either defined as a macro (e.g. sparc64) or as an
inline function (e.g. x86), and some platforms (e.g. m68k) do not
define it at all. vm_insert_pages() depends on pte_index().

To fix compilation of vm_insert_pages() on architectures that do not
provide pte_index(), we take the following approach:

0. For platforms where it is meaningful and defined as a macro, no
   change is needed.
1. For platforms where it is meaningful and defined as an inline
   function, and we want to use it with vm_insert_pages(), we define
   a degenerate macro of the form:  #define pte_index pte_index
2. vm_insert_pages() checks for the existence of a pte_index macro
   definition. If found, it implements a batched insert; if not, it
   falls back to calling vm_insert_page() in a loop.

This patch implements step 2.
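
For illustration, a caller would use the batched interface as follows
(hypothetical usage sketch; vma, uaddr, pages and npages stand for
caller-supplied state):

   /* Try to map npages pages starting at uaddr. On return, num
    * holds the number of pages that were *not* inserted, whether
    * the batched path or the fallback loop was taken.
    */
   unsigned long num = npages;
   int err = vm_insert_pages(vma, uaddr, pages, &num);

   if (err)
           pr_warn("inserted %lu of %lu pages, err=%d\n",
                   npages - num, npages, err);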

Signed-off-by: Arjun Roy <arjunroy@google.com>
---
 mm/memory.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index d6f834f7d145..33631a5b5c3f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1500,8 +1500,9 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr,
 	return retval;
 }
 
+#ifdef pte_index
 /* insert_pages() amortizes the cost of spinlock operations
- * when inserting pages in a loop.
+ * when inserting pages in a loop. Arch *must* define pte_index.
  */
 static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
 			struct page **pages, unsigned long *num, pgprot_t prot)
@@ -1556,6 +1557,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
 	*num = remaining_pages_total;
 	return ret;
 }
+#endif  /* ifdef pte_index */
 
 /**
  * vm_insert_pages - insert multiple pages into user vma, batching the pmd lock.
@@ -1575,6 +1577,7 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
 int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 			struct page **pages, unsigned long *num)
 {
+#ifdef pte_index
 	const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
 
 	if (addr < vma->vm_start || end_addr >= vma->vm_end)
@@ -1586,6 +1589,18 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
 	}
 	/* Defer page refcount checking till we're about to map that page. */
 	return insert_pages(vma, addr, pages, num, vma->vm_page_prot);
+#else
+	unsigned long idx = 0, pgcount = *num;
 +	int err = -EINVAL;
+
+	for (; idx < pgcount; ++idx) {
+		err = vm_insert_page(vma, addr + (PAGE_SIZE * idx), pages[idx]);
+		if (err)
+			break;
+	}
+	*num = pgcount - idx;
+	return err;
+#endif  /* ifdef pte_index */
 }
 EXPORT_SYMBOL(vm_insert_pages);
 
-- 
2.25.1.481.gfbce0eb801-goog


