linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/4] basic KASAN support for Xen PV domains
@ 2020-02-07 14:26 Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 1/4] kasan: introduce set_pmd_early_shadow() Sergey Dyasli
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-07 14:26 UTC (permalink / raw)
  To: xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	George Dunlap, Ross Lagerwall, Andrew Morton, Sergey Dyasli

This series makes it possible to boot and run Xen PV kernels (Dom0 and
DomU) with CONFIG_KASAN=y. It has been used internally for some time
now, with good results in finding memory corruption issues in the Dom0
kernel.

Only Outline instrumentation is supported at the moment.

Sergey Dyasli (2):
  kasan: introduce set_pmd_early_shadow()
  x86/xen: add basic KASAN support for PV kernel

Ross Lagerwall (2):
  xen: teach KASAN about grant tables
  xen/netback: fix grant copy across page boundary

 arch/x86/mm/kasan_init_64.c       | 10 +++++-
 arch/x86/xen/Makefile             |  7 ++++
 arch/x86/xen/enlighten_pv.c       |  3 ++
 arch/x86/xen/mmu_pv.c             | 43 ++++++++++++++++++++++
 drivers/net/xen-netback/common.h  |  2 +-
 drivers/net/xen-netback/netback.c | 60 +++++++++++++++++++++++++------
 drivers/xen/Makefile              |  2 ++
 drivers/xen/grant-table.c         |  5 ++-
 include/linux/kasan.h             |  2 ++
 include/xen/xen-ops.h             | 10 ++++++
 lib/Kconfig.kasan                 |  3 +-
 mm/kasan/init.c                   | 32 ++++++++++++-----
 12 files changed, 156 insertions(+), 23 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v3 1/4] kasan: introduce set_pmd_early_shadow()
  2020-02-07 14:26 [PATCH v3 0/4] basic KASAN support for Xen PV domains Sergey Dyasli
@ 2020-02-07 14:26 ` Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel Sergey Dyasli
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-07 14:26 UTC (permalink / raw)
  To: xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	George Dunlap, Ross Lagerwall, Andrew Morton, Sergey Dyasli

It is incorrect to call pmd_populate_kernel() multiple times for the
same page table from inside Xen PV domains. Xen notices it during
kasan_populate_early_shadow():

    (XEN) mm.c:3222:d155v0 mfn 3704b already pinned

This happens for kasan_early_shadow_pte when USE_SPLIT_PTE_PTLOCKS is
enabled. Fix this by introducing set_pmd_early_shadow() which calls
pmd_populate_kernel() only once and uses set_pmd() afterwards.
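The populate-once idea behind set_pmd_early_shadow() can be sketched in
isolation (a hypothetical stand-alone model, not the kernel code;
`populate_calls` and the function name are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

static int populate_calls; /* stands in for the hypercall path that pins the PTE page */

/* Model of the fix: only the first call performs the full
 * pmd_populate_kernel() (which causes Xen to pin the shared
 * kasan_early_shadow_pte page); every later call is a plain
 * entry write, so the same page is never pinned twice. */
static void set_pmd_early_shadow_model(void)
{
	static bool pmd_populated;

	if (pmd_populated) {
		/* set_pmd(pmd, ...): rewrite the entry, no re-pinning */
	} else {
		populate_calls++; /* pmd_populate_kernel(): pins once */
		pmd_populated = true;
	}
}
```

Calling the model once per PMD that should point at the early shadow
PTE table leaves `populate_calls` at 1 no matter how many PMDs are
covered, which is exactly why Xen no longer complains about the mfn
being "already pinned".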

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v2 --> v3: no changes

v1 --> v2:
- Fix compilation without CONFIG_XEN_PV
- Slightly updated description

RFC --> v1:
- New patch
---
 mm/kasan/init.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ce45c491ebcd..7791fe0a7704 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -81,6 +81,26 @@ static inline bool kasan_early_shadow_page_entry(pte_t pte)
 	return pte_page(pte) == virt_to_page(lm_alias(kasan_early_shadow_page));
 }
 
+#ifdef CONFIG_XEN_PV
+static inline void set_pmd_early_shadow(pmd_t *pmd)
+{
+	static bool pmd_populated = false;
+	pte_t *early_shadow = lm_alias(kasan_early_shadow_pte);
+
+	if (likely(pmd_populated)) {
+		set_pmd(pmd, __pmd(__pa(early_shadow) | _PAGE_TABLE));
+	} else {
+		pmd_populate_kernel(&init_mm, pmd, early_shadow);
+		pmd_populated = true;
+	}
+}
+#else
+static inline void set_pmd_early_shadow(pmd_t *pmd)
+{
+	pmd_populate_kernel(&init_mm, pmd, lm_alias(kasan_early_shadow_pte));
+}
+#endif /* ifdef CONFIG_XEN_PV */
+
 static __init void *early_alloc(size_t size, int node)
 {
 	void *ptr = memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
@@ -120,8 +140,7 @@ static int __ref zero_pmd_populate(pud_t *pud, unsigned long addr,
 		next = pmd_addr_end(addr, end);
 
 		if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE) {
-			pmd_populate_kernel(&init_mm, pmd,
-					lm_alias(kasan_early_shadow_pte));
+			set_pmd_early_shadow(pmd);
 			continue;
 		}
 
@@ -157,8 +176,7 @@ static int __ref zero_pud_populate(p4d_t *p4d, unsigned long addr,
 			pud_populate(&init_mm, pud,
 					lm_alias(kasan_early_shadow_pmd));
 			pmd = pmd_offset(pud, addr);
-			pmd_populate_kernel(&init_mm, pmd,
-					lm_alias(kasan_early_shadow_pte));
+			set_pmd_early_shadow(pmd);
 			continue;
 		}
 
@@ -198,8 +216,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			pud_populate(&init_mm, pud,
 					lm_alias(kasan_early_shadow_pmd));
 			pmd = pmd_offset(pud, addr);
-			pmd_populate_kernel(&init_mm, pmd,
-					lm_alias(kasan_early_shadow_pte));
+			set_pmd_early_shadow(pmd);
 			continue;
 		}
 
@@ -271,8 +288,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			pud_populate(&init_mm, pud,
 					lm_alias(kasan_early_shadow_pmd));
 			pmd = pmd_offset(pud, addr);
-			pmd_populate_kernel(&init_mm, pmd,
-					lm_alias(kasan_early_shadow_pte));
+			set_pmd_early_shadow(pmd);
 			continue;
 		}
 
-- 
2.17.1



* [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel
  2020-02-07 14:26 [PATCH v3 0/4] basic KASAN support for Xen PV domains Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 1/4] kasan: introduce set_pmd_early_shadow() Sergey Dyasli
@ 2020-02-07 14:26 ` Sergey Dyasli
  2020-02-10 20:29   ` Boris Ostrovsky
  2020-02-07 14:26 ` [PATCH v3 3/4] xen: teach KASAN about grant tables Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 4/4] xen/netback: fix grant copy across page boundary Sergey Dyasli
  3 siblings, 1 reply; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-07 14:26 UTC (permalink / raw)
  To: xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	George Dunlap, Ross Lagerwall, Andrew Morton, Sergey Dyasli

Introduce and use xen_kasan_* functions that are needed to properly
initialise KASAN for Xen PV domains. Disable instrumentation for files
that are used by xen_start_kernel() before kasan_early_init() can be
called.

This makes it possible to use outline instrumentation for Xen PV
kernels. The KASAN_INLINE and KASAN_VMALLOC options currently lead to
boot crashes and are hence disabled.
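The pinning discipline that xen_kasan_pin_pgd() / xen_kasan_unpin_pgd()
follow around the CR3 switches in kasan_init() can be captured in a
small state model (hypothetical sketch, not a Xen API; all names below
are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the PV rule the new helpers respect: a top-level page
 * table must be read-only and pinned before it can be loaded into
 * CR3, and must not be unpinned while it is still active. */
struct pv_pgd { bool readonly, pinned, active; };

static void pin_pgd(struct pv_pgd *p)      /* ~ xen_kasan_pin_pgd() */
{
	p->readonly = true;
	p->pinned = true;
}

static void load_cr3_model(struct pv_pgd *p)
{
	assert(p->readonly && p->pinned);  /* Xen rejects anything else */
	p->active = true;
}

static void unpin_pgd(struct pv_pgd *p)    /* ~ xen_kasan_unpin_pgd() */
{
	assert(!p->active);                /* cannot unpin the live pgd */
	p->pinned = false;
	p->readonly = false;
}
```

In this model, pinning early_top_pgt before loading it and unpinning it
only after the final switch to init_top_pgt passes all assertions;
reordering any step trips one, mirroring the boot crash the patch
avoids.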

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v2 --> v3:
- Fix compilation without CONFIG_KASAN
- Dropped _pv prefixes from new functions
- Made xen_kasan_early_init() call kasan_map_early_shadow() directly
- Updated description

v1 --> v2:
- Fix compilation without CONFIG_XEN_PV
- Use macros for KASAN_SHADOW_START

RFC --> v1:
- New functions with declarations in xen/xen-ops.h
- Fixed the issue with free_kernel_image_pages() with the help of
  xen_pv_kasan_unpin_pgd()
---
 arch/x86/mm/kasan_init_64.c | 10 ++++++++-
 arch/x86/xen/Makefile       |  7 ++++++
 arch/x86/xen/enlighten_pv.c |  3 +++
 arch/x86/xen/mmu_pv.c       | 43 +++++++++++++++++++++++++++++++++++++
 drivers/xen/Makefile        |  2 ++
 include/linux/kasan.h       |  2 ++
 include/xen/xen-ops.h       | 10 +++++++++
 lib/Kconfig.kasan           |  3 ++-
 8 files changed, 78 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 763e71abc0fe..b862c03a2019 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -13,6 +13,8 @@
 #include <linux/sched/task.h>
 #include <linux/vmalloc.h>
 
+#include <xen/xen-ops.h>
+
 #include <asm/e820/types.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
@@ -231,7 +233,7 @@ static void __init kasan_early_p4d_populate(pgd_t *pgd,
 	} while (p4d++, addr = next, addr != end && p4d_none(*p4d));
 }
 
-static void __init kasan_map_early_shadow(pgd_t *pgd)
+void __init kasan_map_early_shadow(pgd_t *pgd)
 {
 	/* See comment in kasan_init() */
 	unsigned long addr = KASAN_SHADOW_START & PGDIR_MASK;
@@ -317,6 +319,8 @@ void __init kasan_early_init(void)
 
 	kasan_map_early_shadow(early_top_pgt);
 	kasan_map_early_shadow(init_top_pgt);
+
+	xen_kasan_early_init();
 }
 
 void __init kasan_init(void)
@@ -348,6 +352,8 @@ void __init kasan_init(void)
 				__pgd(__pa(tmp_p4d_table) | _KERNPG_TABLE));
 	}
 
+	xen_kasan_pin_pgd(early_top_pgt);
+
 	load_cr3(early_top_pgt);
 	__flush_tlb_all();
 
@@ -412,6 +418,8 @@ void __init kasan_init(void)
 	load_cr3(init_top_pgt);
 	__flush_tlb_all();
 
+	xen_kasan_unpin_pgd(early_top_pgt);
+
 	/*
 	 * kasan_early_shadow_page has been used as early shadow memory, thus
 	 * it may contain some garbage. Now we can clear and write protect it,
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 084de77a109e..102fad0b0bca 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -1,3 +1,10 @@
+KASAN_SANITIZE_enlighten_pv.o := n
+KASAN_SANITIZE_enlighten.o := n
+KASAN_SANITIZE_irq.o := n
+KASAN_SANITIZE_mmu_pv.o := n
+KASAN_SANITIZE_p2m.o := n
+KASAN_SANITIZE_multicalls.o := n
+
 # SPDX-License-Identifier: GPL-2.0
 OBJECT_FILES_NON_STANDARD_xen-asm_$(BITS).o := y
 
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index ae4a41ca19f6..27de55699f24 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -72,6 +72,7 @@
 #include <asm/mwait.h>
 #include <asm/pci_x86.h>
 #include <asm/cpu.h>
+#include <asm/kasan.h>
 
 #ifdef CONFIG_ACPI
 #include <linux/acpi.h>
@@ -1231,6 +1232,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	/* Get mfn list */
 	xen_build_dynamic_phys_to_machine();
 
+	kasan_early_init();
+
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index bbba8b17829a..a9a47f0bf22e 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1771,6 +1771,41 @@ static void __init set_page_prot(void *addr, pgprot_t prot)
 {
 	return set_page_prot_flags(addr, prot, UVMF_NONE);
 }
+
+#ifdef CONFIG_KASAN
+void __init xen_kasan_early_init(void)
+{
+	if (!xen_pv_domain())
+		return;
+
+	/* PV page tables must be read-only */
+	set_page_prot(kasan_early_shadow_pud, PAGE_KERNEL_RO);
+	set_page_prot(kasan_early_shadow_pmd, PAGE_KERNEL_RO);
+	set_page_prot(kasan_early_shadow_pte, PAGE_KERNEL_RO);
+
+	/* Add KASAN mappings into initial PV page tables */
+	kasan_map_early_shadow((pgd_t *)xen_start_info->pt_base);
+}
+
+void __init xen_kasan_pin_pgd(pgd_t *pgd)
+{
+	if (!xen_pv_domain())
+		return;
+
+	set_page_prot(pgd, PAGE_KERNEL_RO);
+	pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE, PFN_DOWN(__pa_symbol(pgd)));
+}
+
+void __init xen_kasan_unpin_pgd(pgd_t *pgd)
+{
+	if (!xen_pv_domain())
+		return;
+
+	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa_symbol(pgd)));
+	set_page_prot(pgd, PAGE_KERNEL);
+}
+#endif /* ifdef CONFIG_KASAN */
+
 #ifdef CONFIG_X86_32
 static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
 {
@@ -1943,6 +1978,14 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	if (i && i < pgd_index(__START_KERNEL_map))
 		init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
 
+#ifdef CONFIG_KASAN
+	/* Copy KASAN mappings */
+	for (i = pgd_index(KASAN_SHADOW_START);
+	     i < pgd_index(KASAN_SHADOW_END);
+	     i++)
+		init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
+#endif /* ifdef CONFIG_KASAN */
+
 	/* Make pagetable pieces RO */
 	set_page_prot(init_top_pgt, PAGE_KERNEL_RO);
 	set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 0c4efa6fe450..1e9e1e41c0a8 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -1,4 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
+KASAN_SANITIZE_features.o := n
+
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o time.o
 obj-y	+= mem-reservation.o
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5cde9e7c2664..2ab644229217 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -20,6 +20,8 @@ extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
 
+void kasan_map_early_shadow(pgd_t *pgd);
+
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
 
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 095be1d66f31..f67f1f2d73c6 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -241,4 +241,14 @@ static inline void xen_preemptible_hcall_end(void)
 
 #endif /* CONFIG_PREEMPTION */
 
+#if defined(CONFIG_XEN_PV) && defined(CONFIG_KASAN)
+void xen_kasan_early_init(void);
+void xen_kasan_pin_pgd(pgd_t *pgd);
+void xen_kasan_unpin_pgd(pgd_t *pgd);
+#else
+static inline void xen_kasan_early_init(void) { }
+static inline void xen_kasan_pin_pgd(pgd_t *pgd) { }
+static inline void xen_kasan_unpin_pgd(pgd_t *pgd) { }
+#endif /* if defined(CONFIG_XEN_PV) && defined(CONFIG_KASAN) */
+
 #endif /* INCLUDE_XEN_OPS_H */
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 81f5464ea9e1..429a638625ea 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -98,6 +98,7 @@ config KASAN_OUTLINE
 
 config KASAN_INLINE
 	bool "Inline instrumentation"
+	depends on !XEN_PV
 	help
 	  Compiler directly inserts code checking shadow memory before
 	  memory accesses. This is faster than outline (in some workloads
@@ -147,7 +148,7 @@ config KASAN_SW_TAGS_IDENTIFY
 
 config KASAN_VMALLOC
 	bool "Back mappings in vmalloc space with real shadow memory"
-	depends on KASAN && HAVE_ARCH_KASAN_VMALLOC
+	depends on KASAN && HAVE_ARCH_KASAN_VMALLOC && !XEN_PV
 	help
 	  By default, the shadow region for vmalloc space is the read-only
 	  zero page. This means that KASAN cannot detect errors involving
-- 
2.17.1



* [PATCH v3 3/4] xen: teach KASAN about grant tables
  2020-02-07 14:26 [PATCH v3 0/4] basic KASAN support for Xen PV domains Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 1/4] kasan: introduce set_pmd_early_shadow() Sergey Dyasli
  2020-02-07 14:26 ` [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel Sergey Dyasli
@ 2020-02-07 14:26 ` Sergey Dyasli
  2020-02-10 20:34   ` Boris Ostrovsky
  2020-02-07 14:26 ` [PATCH v3 4/4] xen/netback: fix grant copy across page boundary Sergey Dyasli
  3 siblings, 1 reply; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-07 14:26 UTC (permalink / raw)
  To: xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	George Dunlap, Ross Lagerwall, Andrew Morton, Sergey Dyasli

From: Ross Lagerwall <ross.lagerwall@citrix.com>

Otherwise it produces lots of false positives when a guest starts using
PV I/O devices.

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v2 --> v3: no changes

v1 --> v2: no changes

RFC --> v1:
- Slightly clarified the commit message
---
 drivers/xen/grant-table.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 7b36b51cdb9f..ce95f7232de6 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1048,6 +1048,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			foreign = xen_page_foreign(pages[i]);
 			foreign->domid = map_ops[i].dom;
 			foreign->gref = map_ops[i].ref;
+			kasan_alloc_pages(pages[i], 0);
 			break;
 		}
 
@@ -1084,8 +1085,10 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 	if (ret)
 		return ret;
 
-	for (i = 0; i < count; i++)
+	for (i = 0; i < count; i++) {
 		ClearPageForeign(pages[i]);
+		kasan_free_pages(pages[i], 0);
+	}
 
 	return clear_foreign_p2m_mapping(unmap_ops, kunmap_ops, pages, count);
 }
-- 
2.17.1



* [PATCH v3 4/4] xen/netback: fix grant copy across page boundary
  2020-02-07 14:26 [PATCH v3 0/4] basic KASAN support for Xen PV domains Sergey Dyasli
                   ` (2 preceding siblings ...)
  2020-02-07 14:26 ` [PATCH v3 3/4] xen: teach KASAN about grant tables Sergey Dyasli
@ 2020-02-07 14:26 ` Sergey Dyasli
  2020-02-07 14:36   ` David Miller
  3 siblings, 1 reply; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-07 14:26 UTC (permalink / raw)
  To: xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
	George Dunlap, Ross Lagerwall, Andrew Morton, Sergey Dyasli,
	David S. Miller, netdev, Wei Liu, Paul Durrant

From: Ross Lagerwall <ross.lagerwall@citrix.com>

When KASAN (or SLUB_DEBUG) is turned on, there is a higher chance that
non-power-of-two allocations are not aligned to the next power of 2 of
the size. Therefore, handle grant copies that cross page boundaries.
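The condition being handled reduces to simple page arithmetic; the
sketch below is a stand-alone illustration (the 4 KiB XEN_PAGE_SIZE
value and the helper name are assumptions here, not netback code):

```c
#include <assert.h>
#include <stdint.h>

#define XEN_PAGE_SIZE 4096u

/* If the linear buffer [data, data + len) crosses a Xen page
 * boundary, the grant copy must be split into two operations.
 * Returns the length of the second (overflow) part, or 0 when a
 * single copy suffices -- the same arithmetic the patch applies
 * with offset_in_page(skb->data). */
static unsigned int grant_copy_extra_len(uintptr_t data, unsigned int len)
{
	unsigned int off = data & (XEN_PAGE_SIZE - 1);

	if (off + len > XEN_PAGE_SIZE)
		return off + len - XEN_PAGE_SIZE;
	return 0;
}
```

For example, a 200-byte header starting at page offset 4000 yields a
104-byte second copy targeting offset 0 of the following page, which is
why tx_copy_ops must hold up to two entries per pending request.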

Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Acked-by: Paul Durrant <paul@xen.org>
---
v2 --> v3:
- Added Acked-by: Paul Durrant <paul@xen.org>
CC: "David S. Miller" <davem@davemloft.net>
CC: netdev@vger.kernel.org

v1 --> v2:
- Use sizeof_field(struct sk_buff, cb)) instead of magic number 48
- Slightly update commit message

RFC --> v1:
- Added BUILD_BUG_ON to the netback patch
- xenvif_idx_release() now located outside the loop

CC: Wei Liu <wei.liu@kernel.org>
CC: Paul Durrant <paul@xen.org>
---
 drivers/net/xen-netback/common.h  |  2 +-
 drivers/net/xen-netback/netback.c | 60 +++++++++++++++++++++++++------
 2 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 05847eb91a1b..e57684415edd 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -155,7 +155,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */
 	struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
 	grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
 
-	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS];
+	struct gnttab_copy tx_copy_ops[MAX_PENDING_REQS * 2];
 	struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
 	struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
 	/* passed to gnttab_[un]map_refs with pages under (un)mapping */
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 315dfc6ea297..41054de38a62 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -320,6 +320,7 @@ static int xenvif_count_requests(struct xenvif_queue *queue,
 
 struct xenvif_tx_cb {
 	u16 pending_idx;
+	u8 copies;
 };
 
 #define XENVIF_TX_CB(skb) ((struct xenvif_tx_cb *)(skb)->cb)
@@ -439,6 +440,7 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 {
 	struct gnttab_map_grant_ref *gop_map = *gopp_map;
 	u16 pending_idx = XENVIF_TX_CB(skb)->pending_idx;
+	u8 copies = XENVIF_TX_CB(skb)->copies;
 	/* This always points to the shinfo of the skb being checked, which
 	 * could be either the first or the one on the frag_list
 	 */
@@ -450,23 +452,26 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 	int nr_frags = shinfo->nr_frags;
 	const bool sharedslot = nr_frags &&
 				frag_get_pending_idx(&shinfo->frags[0]) == pending_idx;
-	int i, err;
+	int i, err = 0;
 
-	/* Check status of header. */
-	err = (*gopp_copy)->status;
-	if (unlikely(err)) {
-		if (net_ratelimit())
-			netdev_dbg(queue->vif->dev,
+	while (copies) {
+		/* Check status of header. */
+		int newerr = (*gopp_copy)->status;
+		if (unlikely(newerr)) {
+			if (net_ratelimit())
+				netdev_dbg(queue->vif->dev,
 				   "Grant copy of header failed! status: %d pending_idx: %u ref: %u\n",
 				   (*gopp_copy)->status,
 				   pending_idx,
 				   (*gopp_copy)->source.u.ref);
-		/* The first frag might still have this slot mapped */
-		if (!sharedslot)
-			xenvif_idx_release(queue, pending_idx,
-					   XEN_NETIF_RSP_ERROR);
+			err = newerr;
+		}
+		(*gopp_copy)++;
+		copies--;
 	}
-	(*gopp_copy)++;
+	/* The first frag might still have this slot mapped */
+	if (unlikely(err) && !sharedslot)
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 check_frags:
 	for (i = 0; i < nr_frags; i++, gop_map++) {
@@ -910,6 +915,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 			xenvif_tx_err(queue, &txreq, extra_count, idx);
 			break;
 		}
+		XENVIF_TX_CB(skb)->copies = 0;
 
 		skb_shinfo(skb)->nr_frags = ret;
 		if (data_len < txreq.size)
@@ -933,6 +939,7 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 						   "Can't allocate the frag_list skb.\n");
 				break;
 			}
+			XENVIF_TX_CB(nskb)->copies = 0;
 		}
 
 		if (extras[XEN_NETIF_EXTRA_TYPE_GSO - 1].type) {
@@ -990,6 +997,31 @@ static void xenvif_tx_build_gops(struct xenvif_queue *queue,
 
 		queue->tx_copy_ops[*copy_ops].len = data_len;
 		queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
+		XENVIF_TX_CB(skb)->copies++;
+
+		if (offset_in_page(skb->data) + data_len > XEN_PAGE_SIZE) {
+			unsigned int extra_len = offset_in_page(skb->data) +
+					     data_len - XEN_PAGE_SIZE;
+
+			queue->tx_copy_ops[*copy_ops].len -= extra_len;
+			(*copy_ops)++;
+
+			queue->tx_copy_ops[*copy_ops].source.u.ref = txreq.gref;
+			queue->tx_copy_ops[*copy_ops].source.domid =
+				queue->vif->domid;
+			queue->tx_copy_ops[*copy_ops].source.offset =
+				txreq.offset + data_len - extra_len;
+
+			queue->tx_copy_ops[*copy_ops].dest.u.gmfn =
+				virt_to_gfn(skb->data + data_len - extra_len);
+			queue->tx_copy_ops[*copy_ops].dest.domid = DOMID_SELF;
+			queue->tx_copy_ops[*copy_ops].dest.offset = 0;
+
+			queue->tx_copy_ops[*copy_ops].len = extra_len;
+			queue->tx_copy_ops[*copy_ops].flags = GNTCOPY_source_gref;
+
+			XENVIF_TX_CB(skb)->copies++;
+		}
 
 		(*copy_ops)++;
 
@@ -1688,5 +1720,11 @@ static void __exit netback_fini(void)
 }
 module_exit(netback_fini);
 
+static void __init __maybe_unused build_assertions(void)
+{
+	BUILD_BUG_ON(sizeof(struct xenvif_tx_cb) >
+		     sizeof_field(struct sk_buff, cb));
+}
+
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("xen-backend:vif");
-- 
2.17.1



* Re: [PATCH v3 4/4] xen/netback: fix grant copy across page boundary
  2020-02-07 14:26 ` [PATCH v3 4/4] xen/netback: fix grant copy across page boundary Sergey Dyasli
@ 2020-02-07 14:36   ` David Miller
  2020-02-10 13:27     ` Sergey Dyasli
  0 siblings, 1 reply; 9+ messages in thread
From: David Miller @ 2020-02-07 14:36 UTC (permalink / raw)
  To: sergey.dyasli
  Cc: xen-devel, kasan-dev, linux-mm, linux-kernel, aryabinin, glider,
	dvyukov, boris.ostrovsky, jgross, sstabellini, george.dunlap,
	ross.lagerwall, akpm, netdev, wei.liu, paul

From: Sergey Dyasli <sergey.dyasli@citrix.com>
Date: Fri, 7 Feb 2020 14:26:52 +0000

> From: Ross Lagerwall <ross.lagerwall@citrix.com>
> 
> When KASAN (or SLUB_DEBUG) is turned on, there is a higher chance that
> non-power-of-two allocations are not aligned to the next power of 2 of
> the size. Therefore, handle grant copies that cross page boundaries.
> 
> Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> Acked-by: Paul Durrant <paul@xen.org>

This is part of a larger patch series to which netdev was not CC:'d

Where is this patch targetted to be applied?

Do you expect a networking ACK on this?

Please do not submit patches in such an ambiguous manner like this
in the future, thank you.


* Re: [PATCH v3 4/4] xen/netback: fix grant copy across page boundary
  2020-02-07 14:36   ` David Miller
@ 2020-02-10 13:27     ` Sergey Dyasli
  0 siblings, 0 replies; 9+ messages in thread
From: Sergey Dyasli @ 2020-02-10 13:27 UTC (permalink / raw)
  To: David Miller
  Cc: xen-devel, kasan-dev, linux-mm, linux-kernel, aryabinin, glider,
	dvyukov, boris.ostrovsky, jgross, sstabellini, george.dunlap,
	ross.lagerwall, akpm, netdev, wei.liu, paul,
	Sergey Dyasli

On 07/02/2020 14:36, David Miller wrote:
> From: Sergey Dyasli <sergey.dyasli@citrix.com>
> Date: Fri, 7 Feb 2020 14:26:52 +0000
>
>> From: Ross Lagerwall <ross.lagerwall@citrix.com>
>>
>> When KASAN (or SLUB_DEBUG) is turned on, there is a higher chance that
>> non-power-of-two allocations are not aligned to the next power of 2 of
>> the size. Therefore, handle grant copies that cross page boundaries.
>>
>> Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Acked-by: Paul Durrant <paul@xen.org>
>
> This is part of a larger patch series to which netdev was not CC:'d
>
> Where is this patch targetted to be applied?
>
> Do you expect a networking ACK on this?
>
> Please do not submit patches in such an ambiguous manner like this
> in the future, thank you.

Please see the following for more context:

    https://lore.kernel.org/linux-mm/20200122140512.zxtld5sanohpmgt2@debian/

Sorry for not providing enough context with this submission.

--
Thanks,
Sergey


* Re: [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel
  2020-02-07 14:26 ` [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel Sergey Dyasli
@ 2020-02-10 20:29   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2020-02-10 20:29 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Juergen Gross, Stefano Stabellini, George Dunlap, Ross Lagerwall,
	Andrew Morton



On 2/7/20 9:26 AM, Sergey Dyasli wrote:
> Introduce and use xen_kasan_* functions that are needed to properly
> initialise KASAN for Xen PV domains. Disable instrumentation for files
> that are used by xen_start_kernel() before kasan_early_init() could
> be called.
>
> This enables to use Outline instrumentation for Xen PV kernels.
> KASAN_INLINE and KASAN_VMALLOC options currently lead to boot crashes
> and hence disabled.
>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Xen bits:

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




* Re: [PATCH v3 3/4] xen: teach KASAN about grant tables
  2020-02-07 14:26 ` [PATCH v3 3/4] xen: teach KASAN about grant tables Sergey Dyasli
@ 2020-02-10 20:34   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2020-02-10 20:34 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel, kasan-dev, linux-mm, linux-kernel
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Juergen Gross, Stefano Stabellini, George Dunlap, Ross Lagerwall,
	Andrew Morton



On 2/7/20 9:26 AM, Sergey Dyasli wrote:
> From: Ross Lagerwall <ross.lagerwall@citrix.com>
>
> Otherwise it produces lots of false positives when a guest starts using
> PV I/O devices.
>
> Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


end of thread, other threads:[~2020-02-10 20:33 UTC | newest]

Thread overview: 9+ messages
2020-02-07 14:26 [PATCH v3 0/4] basic KASAN support for Xen PV domains Sergey Dyasli
2020-02-07 14:26 ` [PATCH v3 1/4] kasan: introduce set_pmd_early_shadow() Sergey Dyasli
2020-02-07 14:26 ` [PATCH v3 2/4] x86/xen: add basic KASAN support for PV kernel Sergey Dyasli
2020-02-10 20:29   ` Boris Ostrovsky
2020-02-07 14:26 ` [PATCH v3 3/4] xen: teach KASAN about grant tables Sergey Dyasli
2020-02-10 20:34   ` Boris Ostrovsky
2020-02-07 14:26 ` [PATCH v3 4/4] xen/netback: fix grant copy across page boundary Sergey Dyasli
2020-02-07 14:36   ` David Miller
2020-02-10 13:27     ` Sergey Dyasli
