linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] fix free pmd/pte page handlings on x86
@ 2018-05-15 21:39 Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE Toshi Kani
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Toshi Kani @ 2018-05-15 21:39 UTC (permalink / raw)
  To: mhocko, akpm, tglx, mingo, hpa
  Cc: cpandya, linux-mm, x86, linux-arm-kernel, linux-kernel

This series fixes two issues in the x86 ioremap free page handling for
pud/pmd mappings.

Patch 01 fixes the BUG_ON on x86-PAE reported by Joerg by disabling the
free page handling on x86-PAE.

Patches 02-03 fix a possible issue with speculation, which can leave stale
entries in the page-directory (paging-structure) caches.
 - Patch 02 is Chintan's v9 01/04 patch [1], which adds a new 'addr'
   argument.  Taking it here avoids merge conflicts with his series.
 - Patch 03 adds a TLB purge (INVLPG) to flush paging-structure caches
   that may have been populated by speculation.  See the patch descriptions
   for more detail, and the call-path sketch below for where these
   interfaces are invoked.

[1] https://patchwork.kernel.org/patch/10371015/
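
For orientation, the pmd-level branch in lib/ioremap.c that ends up calling
these interfaces looks roughly as follows (condensed from the hunks quoted
in patch 2, comments added; the surrounding loop and the analogous pud-level
branch are omitted):

    /* inside ioremap_pmd_range(), for each pmd covering [addr, next) */
    if (ioremap_pmd_enabled() &&                    /* huge-vmap support on     */
        ((next - addr) == PMD_SIZE) &&              /* range spans a whole pmd  */
        IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&   /* target is pmd-aligned    */
        pmd_free_pte_page(pmd, addr)) {             /* drop a leftover pte page */
            if (pmd_set_huge(pmd, phys_addr + addr, prot))
                    continue;                       /* mapped with a large page */
    }
    /* otherwise fall through and map the range with 4KB ptes */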

v2:
 - Reordered patch-set, so that patch 01 can be applied independently.
 - Added a NULL pointer check for the page alloc in patch 03. 

---
Toshi Kani (2):
  1/3 x86/mm: disable ioremap free page handling on x86-PAE
  3/3 x86/mm: add TLB purge to free pmd/pte page interfaces

Chintan Pandya (1):
  2/3 ioremap: Update pgtable free interfaces with addr

---
 arch/arm64/mm/mmu.c           |  4 +--
 arch/x86/mm/pgtable.c         | 59 +++++++++++++++++++++++++++++++++++++------
 include/asm-generic/pgtable.h |  8 +++---
 lib/ioremap.c                 |  4 +--
 4 files changed, 59 insertions(+), 16 deletions(-)


* [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE
  2018-05-15 21:39 [PATCH v2 0/3] fix free pmd/pte page handlings on x86 Toshi Kani
@ 2018-05-15 21:39 ` Toshi Kani
  2018-05-16 11:00   ` kbuild test robot
  2018-05-15 21:39 ` [PATCH v2 2/3] ioremap: Update pgtable free interfaces with addr Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 3/3] x86/mm: add TLB purge to free pmd/pte page interfaces Toshi Kani
  2 siblings, 1 reply; 6+ messages in thread
From: Toshi Kani @ 2018-05-15 21:39 UTC (permalink / raw)
  To: mhocko, akpm, tglx, mingo, hpa
  Cc: cpandya, linux-mm, x86, linux-arm-kernel, linux-kernel,
	Toshi Kani, Joerg Roedel, stable

ioremap() supports pmd mappings on x86-PAE.  However, the kernel's pmd
tables are not shared among processes on x86-PAE, so any update to sync'd
pmd entries needs re-syncing.  Freeing a pte page, in particular, leaves
already-synced pmd entries stale, which leads to a vmalloc fault and hits
the BUG_ON in vmalloc_sync_one().

Disable free page handling on x86-PAE.  pud_free_pmd_page() and
pmd_free_pte_page() simply return 0 if a given pud/pmd entry is present.
This assures that ioremap() does not update sync'd pmd entries at the
cost of falling back to pte mappings.
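
For reference, the BUG_ON in question compares the pte page referenced by a
process's already-synced pmd entry against the one in init_mm.  A simplified
sketch of the relevant part of vmalloc_sync_one() (paraphrased from the
x86-32 fault path of this era, not a verbatim copy):

    pmd = pmd_offset(pud, address);      /* entry in the faulting process     */
    pmd_k = pmd_offset(pud_k, address);  /* corresponding entry in init_mm    */
    if (!pmd_present(*pmd_k))
            return NULL;

    if (!pmd_present(*pmd))
            set_pmd(pmd, *pmd_k);        /* first sync: copy the kernel entry */
    else
            BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));

    /*
     * If ioremap() frees a pte page and installs a new entry in init_mm,
     * previously-synced process pmds still reference the old page and the
     * BUG_ON above fires on the next vmalloc fault.
     */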

Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Reported-by: Joerg Roedel <joro@8bytes.org>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: <stable@vger.kernel.org>
---
 arch/x86/mm/pgtable.c |   19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index ffc8c13c50e4..08cdd7c13619 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -715,6 +715,7 @@ int pmd_clear_huge(pmd_t *pmd)
 	return 0;
 }
 
+#ifdef CONFIG_X86_64
 /**
  * pud_free_pmd_page - Clear pud entry and free pmd page.
  * @pud: Pointer to a PUD.
@@ -762,4 +763,22 @@ int pmd_free_pte_page(pmd_t *pmd)
 
 	return 1;
 }
+
+#else /* !CONFIG_X86_64 */
+
+int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+{
+	return pud_none(*pud);
+}
+
+/*
+ * Disable free page handling on x86-PAE. This assures that ioremap()
+ * does not update sync'd pmd entries. See vmalloc_sync_one().
+ */
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+{
+	return pmd_none(*pmd);
+}
+
+#endif /* CONFIG_X86_64 */
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */


* [PATCH v2 2/3] ioremap: Update pgtable free interfaces with addr
  2018-05-15 21:39 [PATCH v2 0/3] fix free pmd/pte page handlings on x86 Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE Toshi Kani
@ 2018-05-15 21:39 ` Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 3/3] x86/mm: add TLB purge to free pmd/pte page interfaces Toshi Kani
  2 siblings, 0 replies; 6+ messages in thread
From: Toshi Kani @ 2018-05-15 21:39 UTC (permalink / raw)
  To: mhocko, akpm, tglx, mingo, hpa
  Cc: cpandya, linux-mm, x86, linux-arm-kernel, linux-kernel,
	Toshi Kani, stable

From: Chintan Pandya <cpandya@codeaurora.org>

The earlier patch ("mm/vmalloc: Add interfaces to free unmapped page
table") added the following two interfaces for freeing page-table pages
when implementing huge mappings:

  pud_free_pmd_page() and pmd_free_pte_page()

Some architectures (like arm64) need to do proper TLB maintenance after
updating a pagetable entry, even on the map path; see
https://patchwork.kernel.org/patch/10134581/ for the background.

Pass 'addr' to these interfaces so that the proper TLB ops can be
performed.
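
As an illustration only (this is not the arm64 implementation; that lands
in the series referenced above), an architecture that needs such
maintenance could use 'addr' along the same lines as the x86 version in
patch 3 of this series:

    /* hypothetical sketch -- shows how 'addr' scopes the TLB maintenance */
    int pmd_free_pte_page(pmd_t *pmdp, unsigned long addr)
    {
            pte_t *ptep;

            if (pmd_none(*pmdp))
                    return 1;

            ptep = (pte_t *)pmd_page_vaddr(*pmdp);  /* pte page behind this pmd  */
            pmd_clear(pmdp);
            flush_tlb_kernel_range(addr, addr + PMD_SIZE);  /* scoped by 'addr'  */
            free_page((unsigned long)ptep);         /* safe only after the flush */

            return 1;
    }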

Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Chintan Pandya <cpandya@codeaurora.org>
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: <stable@vger.kernel.org>
---
 arch/arm64/mm/mmu.c           |    4 ++--
 arch/x86/mm/pgtable.c         |    8 +++++---
 include/asm-generic/pgtable.h |    8 ++++----
 lib/ioremap.c                 |    4 ++--
 4 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2dbb2c9f1ec1..da98828609a1 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -973,12 +973,12 @@ int pmd_clear_huge(pmd_t *pmdp)
 	return 1;
 }
 
-int pud_free_pmd_page(pud_t *pud)
+int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
 	return pud_none(*pud);
 }
 
-int pmd_free_pte_page(pmd_t *pmd)
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 {
 	return pmd_none(*pmd);
 }
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 08cdd7c13619..f60fdf411103 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -719,11 +719,12 @@ int pmd_clear_huge(pmd_t *pmd)
 /**
  * pud_free_pmd_page - Clear pud entry and free pmd page.
  * @pud: Pointer to a PUD.
+ * @addr: Virtual address associated with pud.
  *
  * Context: The pud range has been unmaped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
-int pud_free_pmd_page(pud_t *pud)
+int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
 	pmd_t *pmd;
 	int i;
@@ -734,7 +735,7 @@ int pud_free_pmd_page(pud_t *pud)
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
 
 	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i]))
+		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
 			return 0;
 
 	pud_clear(pud);
@@ -746,11 +747,12 @@ int pud_free_pmd_page(pud_t *pud)
 /**
  * pmd_free_pte_page - Clear pmd entry and free pte page.
  * @pmd: Pointer to a PMD.
+ * @addr: Virtual address associated with pmd.
  *
  * Context: The pmd range has been unmaped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
-int pmd_free_pte_page(pmd_t *pmd)
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 {
 	pte_t *pte;
 
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f59639afaa39..b081794ba135 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1019,8 +1019,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
 int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
 int pud_clear_huge(pud_t *pud);
 int pmd_clear_huge(pmd_t *pmd);
-int pud_free_pmd_page(pud_t *pud);
-int pmd_free_pte_page(pmd_t *pmd);
+int pud_free_pmd_page(pud_t *pud, unsigned long addr);
+int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);
 #else	/* !CONFIG_HAVE_ARCH_HUGE_VMAP */
 static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
 {
@@ -1046,11 +1046,11 @@ static inline int pmd_clear_huge(pmd_t *pmd)
 {
 	return 0;
 }
-static inline int pud_free_pmd_page(pud_t *pud)
+static inline int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
 	return 0;
 }
-static inline int pmd_free_pte_page(pmd_t *pmd)
+static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 {
 	return 0;
 }
diff --git a/lib/ioremap.c b/lib/ioremap.c
index 54e5bbaa3200..517f5853ffed 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -92,7 +92,7 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
 		if (ioremap_pmd_enabled() &&
 		    ((next - addr) == PMD_SIZE) &&
 		    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
-		    pmd_free_pte_page(pmd)) {
+		    pmd_free_pte_page(pmd, addr)) {
 			if (pmd_set_huge(pmd, phys_addr + addr, prot))
 				continue;
 		}
@@ -119,7 +119,7 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
 		if (ioremap_pud_enabled() &&
 		    ((next - addr) == PUD_SIZE) &&
 		    IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
-		    pud_free_pmd_page(pud)) {
+		    pud_free_pmd_page(pud, addr)) {
 			if (pud_set_huge(pud, phys_addr + addr, prot))
 				continue;
 		}


* [PATCH v2 3/3] x86/mm: add TLB purge to free pmd/pte page interfaces
  2018-05-15 21:39 [PATCH v2 0/3] fix free pmd/pte page handlings on x86 Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE Toshi Kani
  2018-05-15 21:39 ` [PATCH v2 2/3] ioremap: Update pgtable free interfaces with addr Toshi Kani
@ 2018-05-15 21:39 ` Toshi Kani
  2 siblings, 0 replies; 6+ messages in thread
From: Toshi Kani @ 2018-05-15 21:39 UTC (permalink / raw)
  To: mhocko, akpm, tglx, mingo, hpa
  Cc: cpandya, linux-mm, x86, linux-arm-kernel, linux-kernel,
	Toshi Kani, Joerg Roedel, stable

ioremap() calls pud_free_pmd_page() / pmd_free_pte_page() when it creates
a pud / pmd map.  The following preconditions are met at their entry.
 - All pte entries for the target pud/pmd address range have been cleared.
 - System-wide TLB purges have been performed for the target pud/pmd
   address range.

These preconditions ensure that there is no stale TLB entry for the range:
speculation cannot create a TLB entry unless all levels of the page tables,
including the ptes, have their P and A bits set for the address in question.
However, speculation may still cache pud/pmd entries in the paging-structure
caches whenever their P-bit is set.

Add a system-wide TLB purge (INVLPG) of a single page after clearing the
pud/pmd entry's P-bit.

SDM 4.10.4.1, Operations that Invalidate TLBs and Paging-Structure Caches,
states that:
  INVLPG invalidates all paging-structure caches associated with the
  current PCID regardless of the linear addresses to which they correspond.
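
The ordering this results in (see the pmd_free_pte_page() hunk below) is
what keeps the window closed: clear the entry first, purge, and free the
page-table page only after the purge, so no CPU can still walk or cache it.
Condensed from the hunk, with comments added:

    pte = (pte_t *)pmd_page_vaddr(*pmd);  /* remember the pte page            */
    pmd_clear(pmd);                       /* 1. clear the entry (P-bit off)   */
    flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
                                          /* 2. INVLPG-based purge drops TLB
                                           *    and paging-structure caches   */
    free_page((unsigned long)pte);        /* 3. free the page only afterwards */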

Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: <stable@vger.kernel.org>
---
 arch/x86/mm/pgtable.c |   34 ++++++++++++++++++++++++++++------
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index f60fdf411103..7e96594c7e97 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -721,24 +721,42 @@ int pmd_clear_huge(pmd_t *pmd)
  * @pud: Pointer to a PUD.
  * @addr: Virtual address associated with pud.
  *
- * Context: The pud range has been unmaped and TLB purged.
+ * Context: The pud range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pud_free_pmd_page(pud_t *pud, unsigned long addr)
 {
-	pmd_t *pmd;
+	pmd_t *pmd, *pmd_sv;
+	pte_t *pte;
 	int i;
 
 	if (pud_none(*pud))
 		return 1;
 
 	pmd = (pmd_t *)pud_page_vaddr(*pud);
+	pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL);
+	if (!pmd_sv)
+		return 0;
 
-	for (i = 0; i < PTRS_PER_PMD; i++)
-		if (!pmd_free_pte_page(&pmd[i], addr + (i * PMD_SIZE)))
-			return 0;
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		pmd_sv[i] = pmd[i];
+		if (!pmd_none(pmd[i]))
+			pmd_clear(&pmd[i]);
+	}
 
 	pud_clear(pud);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (!pmd_none(pmd_sv[i])) {
+			pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
+			free_page((unsigned long)pte);
+		}
+	}
+
+	free_page((unsigned long)pmd_sv);
 	free_page((unsigned long)pmd);
 
 	return 1;
@@ -749,7 +767,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
  * @pmd: Pointer to a PMD.
  * @addr: Virtual address associated with pmd.
  *
- * Context: The pmd range has been unmaped and TLB purged.
+ * Context: The pmd range has been unmapped and TLB purged.
  * Return: 1 if clearing the entry succeeded. 0 otherwise.
  */
 int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
@@ -761,6 +779,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 
 	pte = (pte_t *)pmd_page_vaddr(*pmd);
 	pmd_clear(pmd);
+
+	/* INVLPG to clear all paging-structure caches */
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1);
+
 	free_page((unsigned long)pte);
 
 	return 1;


* Re: [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE
  2018-05-15 21:39 ` [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE Toshi Kani
@ 2018-05-16 11:00   ` kbuild test robot
  2018-05-16 14:05     ` Kani, Toshi
  0 siblings, 1 reply; 6+ messages in thread
From: kbuild test robot @ 2018-05-16 11:00 UTC (permalink / raw)
  To: Toshi Kani
  Cc: kbuild-all, mhocko, akpm, tglx, mingo, hpa, cpandya, linux-mm,
	x86, linux-arm-kernel, linux-kernel, Joerg Roedel, stable


Hi Toshi,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on v4.17-rc5 next-20180515]
[cannot apply to tip/x86/core]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Toshi-Kani/fix-free-pmd-pte-page-handlings-on-x86/20180516-183317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
config: i386-randconfig-x013-201819 (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

Note: the linux-review/Toshi-Kani/fix-free-pmd-pte-page-handlings-on-x86/20180516-183317 HEAD 93944422fcef9bfadf22e345c1d7a34723cc3203 builds fine.
      It only hurts bisectibility.

All errors (new ones prefixed by >>):

>> arch/x86/mm/pgtable.c:757:5: error: conflicting types for 'pud_free_pmd_page'
    int pud_free_pmd_page(pud_t *pud, unsigned long addr)
        ^~~~~~~~~~~~~~~~~
   In file included from arch/x86/include/asm/pgtable.h:1301:0,
                    from include/linux/memremap.h:8,
                    from include/linux/mm.h:27,
                    from arch/x86/mm/pgtable.c:2:
   include/asm-generic/pgtable.h:1022:5: note: previous declaration of 'pud_free_pmd_page' was here
    int pud_free_pmd_page(pud_t *pud);
        ^~~~~~~~~~~~~~~~~
>> arch/x86/mm/pgtable.c:766:5: error: conflicting types for 'pmd_free_pte_page'
    int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
        ^~~~~~~~~~~~~~~~~
   In file included from arch/x86/include/asm/pgtable.h:1301:0,
                    from include/linux/memremap.h:8,
                    from include/linux/mm.h:27,
                    from arch/x86/mm/pgtable.c:2:
   include/asm-generic/pgtable.h:1023:5: note: previous declaration of 'pmd_free_pte_page' was here
    int pmd_free_pte_page(pmd_t *pmd);
        ^~~~~~~~~~~~~~~~~

vim +/pud_free_pmd_page +757 arch/x86/mm/pgtable.c

   756	
 > 757	int pud_free_pmd_page(pud_t *pud, unsigned long addr)
   758	{
   759		return pud_none(*pud);
   760	}
   761	
   762	/*
   763	 * Disable free page handling on x86-PAE. This assures that ioremap()
   764	 * does not update sync'd pmd entries. See vmalloc_sync_one().
   765	 */
 > 766	int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
   767	{
   768		return pmd_none(*pmd);
   769	}
   770	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 34248 bytes --]


* Re: [PATCH v2 1/3] x86/mm: disable ioremap free page handling on x86-PAE
  2018-05-16 11:00   ` kbuild test robot
@ 2018-05-16 14:05     ` Kani, Toshi
  0 siblings, 0 replies; 6+ messages in thread
From: Kani, Toshi @ 2018-05-16 14:05 UTC (permalink / raw)
  To: lkp
  Cc: linux-kernel, tglx, linux-mm, stable, joro, x86, akpm, hpa,
	mingo, kbuild-all, Hocko, Michal, cpandya, linux-arm-kernel

On Wed, 2018-05-16 at 19:00 +0800, kbuild test robot wrote:
> Hi Toshi,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on arm64/for-next/core]
> [also build test ERROR on v4.17-rc5 next-20180515]
> [cannot apply to tip/x86/core]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Toshi-Kani/fix-free-pmd-pte-page-handlings-on-x86/20180516-183317
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> config: i386-randconfig-x013-201819 (attached as .config)
> compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386 
> 
> Note: the linux-review/Toshi-Kani/fix-free-pmd-pte-page-handlings-on-x86/20180516-183317 HEAD 93944422fcef9bfadf22e345c1d7a34723cc3203 builds fine.
>       It only hurts bisectibility.
> 
> All errors (new ones prefixed by >>):
> 
> > > arch/x86/mm/pgtable.c:757:5: error: conflicting types for 'pud_free_pmd_page'
> 
>     int pud_free_pmd_page(pud_t *pud, unsigned long addr)
>         ^~~~~~~~~~~~~~~~~

Thanks for catching this!  Patch reordering caused this.  Will fix.
-Toshi
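
(The likely resolution, inferred from the error above rather than stated in
the thread: at patch 1 the generic prototypes do not take 'addr' yet, so the
x86-PAE stubs would use the pre-series signatures here and gain the extra
argument only in patch 2, e.g.:)

    /* assumed interim form for patch 1 only -- matches the prototypes in
     * include/asm-generic/pgtable.h before patch 2 adds 'addr' */
    int pud_free_pmd_page(pud_t *pud)
    {
            return pud_none(*pud);
    }

    int pmd_free_pte_page(pmd_t *pmd)
    {
            return pmd_none(*pmd);
    }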

