* [PATCH] iommu/iova: Remove stale cached32_node
From: Chris Wilson @ 2019-07-20 18:08 UTC
  To: iommu
  Cc: intel-gfx, Chris Wilson, Robin Murphy, Joerg Roedel,
	Joerg Roedel, stable

Since the cached32_node is allowed to be advanced above dma_32bit_pfn
(to provide a shortcut into the limited range), we need to be careful to
advance the cache past the node being freed whenever that node is the
cached32_node itself.

[   48.477773] BUG: KASAN: use-after-free in __cached_rbnode_delete_update+0x68/0x110
[   48.477812] Read of size 8 at addr ffff88870fc19020 by task kworker/u8:1/37
[   48.477843]
[   48.477879] CPU: 1 PID: 37 Comm: kworker/u8:1 Tainted: G     U            5.2.0+ #735
[   48.477915] Hardware name: Intel Corporation NUC7i5BNK/NUC7i5BNB, BIOS BNKBL357.86A.0052.2017.0918.1346 09/18/2017
[   48.478047] Workqueue: i915 __i915_gem_free_work [i915]
[   48.478075] Call Trace:
[   48.478111]  dump_stack+0x5b/0x90
[   48.478137]  print_address_description+0x67/0x237
[   48.478178]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478212]  __kasan_report.cold.3+0x1c/0x38
[   48.478240]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478280]  ? __cached_rbnode_delete_update+0x68/0x110
[   48.478308]  __cached_rbnode_delete_update+0x68/0x110
[   48.478344]  private_free_iova+0x2b/0x60
[   48.478378]  iova_magazine_free_pfns+0x46/0xa0
[   48.478403]  free_iova_fast+0x277/0x340
[   48.478443]  fq_ring_free+0x15a/0x1a0
[   48.478473]  queue_iova+0x19c/0x1f0
[   48.478597]  cleanup_page_dma.isra.64+0x62/0xb0 [i915]
[   48.478712]  __gen8_ppgtt_cleanup+0x63/0x80 [i915]
[   48.478826]  __gen8_ppgtt_cleanup+0x42/0x80 [i915]
[   48.478940]  __gen8_ppgtt_clear+0x433/0x4b0 [i915]
[   48.479053]  __gen8_ppgtt_clear+0x462/0x4b0 [i915]
[   48.479081]  ? __sg_free_table+0x9e/0xf0
[   48.479116]  ? kfree+0x7f/0x150
[   48.479234]  i915_vma_unbind+0x1e2/0x240 [i915]
[   48.479352]  i915_vma_destroy+0x3a/0x280 [i915]
[   48.479465]  __i915_gem_free_objects+0xf0/0x2d0 [i915]
[   48.479579]  __i915_gem_free_work+0x41/0xa0 [i915]
[   48.479607]  process_one_work+0x495/0x710
[   48.479642]  worker_thread+0x4c7/0x6f0
[   48.479687]  ? process_one_work+0x710/0x710
[   48.479724]  kthread+0x1b2/0x1d0
[   48.479774]  ? kthread_create_worker_on_cpu+0xa0/0xa0
[   48.479820]  ret_from_fork+0x1f/0x30
[   48.479864]
[   48.479907] Allocated by task 631:
[   48.479944]  save_stack+0x19/0x80
[   48.479994]  __kasan_kmalloc.constprop.6+0xc1/0xd0
[   48.480038]  kmem_cache_alloc+0x91/0xf0
[   48.480082]  alloc_iova+0x2b/0x1e0
[   48.480125]  alloc_iova_fast+0x58/0x376
[   48.480166]  intel_alloc_iova+0x90/0xc0
[   48.480214]  intel_map_sg+0xde/0x1f0
[   48.480343]  i915_gem_gtt_prepare_pages+0xb8/0x170 [i915]
[   48.480465]  huge_get_pages+0x232/0x2b0 [i915]
[   48.480590]  ____i915_gem_object_get_pages+0x40/0xb0 [i915]
[   48.480712]  __i915_gem_object_get_pages+0x90/0xa0 [i915]
[   48.480834]  i915_gem_object_prepare_write+0x2d6/0x330 [i915]
[   48.480955]  create_test_object.isra.54+0x1a9/0x3e0 [i915]
[   48.481075]  igt_shared_ctx_exec+0x365/0x3c0 [i915]
[   48.481210]  __i915_subtests.cold.4+0x30/0x92 [i915]
[   48.481341]  __run_selftests.cold.3+0xa9/0x119 [i915]
[   48.481466]  i915_live_selftests+0x3c/0x70 [i915]
[   48.481583]  i915_pci_probe+0xe7/0x220 [i915]
[   48.481620]  pci_device_probe+0xe0/0x180
[   48.481665]  really_probe+0x163/0x4e0
[   48.481710]  device_driver_attach+0x85/0x90
[   48.481750]  __driver_attach+0xa5/0x180
[   48.481796]  bus_for_each_dev+0xda/0x130
[   48.481831]  bus_add_driver+0x205/0x2e0
[   48.481882]  driver_register+0xca/0x140
[   48.481927]  do_one_initcall+0x6c/0x1af
[   48.481970]  do_init_module+0x106/0x350
[   48.482010]  load_module+0x3d2c/0x3ea0
[   48.482058]  __do_sys_finit_module+0x110/0x180
[   48.482102]  do_syscall_64+0x62/0x1f0
[   48.482147]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[   48.482190]
[   48.482224] Freed by task 37:
[   48.482273]  save_stack+0x19/0x80
[   48.482318]  __kasan_slab_free+0x12e/0x180
[   48.482363]  kmem_cache_free+0x70/0x140
[   48.482406]  __free_iova+0x1d/0x30
[   48.482445]  fq_ring_free+0x15a/0x1a0
[   48.482490]  queue_iova+0x19c/0x1f0
[   48.482624]  cleanup_page_dma.isra.64+0x62/0xb0 [i915]
[   48.482749]  __gen8_ppgtt_cleanup+0x63/0x80 [i915]
[   48.482873]  __gen8_ppgtt_cleanup+0x42/0x80 [i915]
[   48.482999]  __gen8_ppgtt_clear+0x433/0x4b0 [i915]
[   48.483123]  __gen8_ppgtt_clear+0x462/0x4b0 [i915]
[   48.483250]  i915_vma_unbind+0x1e2/0x240 [i915]
[   48.483378]  i915_vma_destroy+0x3a/0x280 [i915]
[   48.483500]  __i915_gem_free_objects+0xf0/0x2d0 [i915]
[   48.483622]  __i915_gem_free_work+0x41/0xa0 [i915]
[   48.483659]  process_one_work+0x495/0x710
[   48.483704]  worker_thread+0x4c7/0x6f0
[   48.483748]  kthread+0x1b2/0x1d0
[   48.483787]  ret_from_fork+0x1f/0x30
[   48.483831]
[   48.483868] The buggy address belongs to the object at ffff88870fc19000
[   48.483868]  which belongs to the cache iommu_iova of size 40
[   48.483920] The buggy address is located 32 bytes inside of
[   48.483920]  40-byte region [ffff88870fc19000, ffff88870fc19028)
[   48.483964] The buggy address belongs to the page:
[   48.484006] page:ffffea001c3f0600 refcount:1 mapcount:0 mapping:ffff8888181a91c0 index:0x0 compound_mapcount: 0
[   48.484045] flags: 0x8000000000010200(slab|head)
[   48.484096] raw: 8000000000010200 ffffea001c421a08 ffffea001c447e88 ffff8888181a91c0
[   48.484141] raw: 0000000000000000 0000000000120012 00000001ffffffff 0000000000000000
[   48.484188] page dumped because: kasan: bad access detected
[   48.484230]
[   48.484265] Memory state around the buggy address:
[   48.484314]  ffff88870fc18f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484361]  ffff88870fc18f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484406] >ffff88870fc19000: fb fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc
[   48.484451]                                ^
[   48.484494]  ffff88870fc19080: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[   48.484530]  ffff88870fc19100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108602
Fixes: e60aa7b53845 ("iommu/iova: Extend rbtree node caching")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: <stable@vger.kernel.org> # v4.15+
---
 drivers/iommu/iova.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index d499b2621239..24b9f9b19086 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -127,8 +127,9 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
 	struct iova *cached_iova;
 
 	cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
-	if (free->pfn_hi < iovad->dma_32bit_pfn &&
-	    free->pfn_lo >= cached_iova->pfn_lo) {
+	if (free == cached_iova ||
+	    (free->pfn_hi < iovad->dma_32bit_pfn &&
+	     free->pfn_lo >= cached_iova->pfn_lo)) {
 		iovad->cached32_node = rb_next(&free->node);
 		iovad->max32_alloc_size = iovad->dma_32bit_pfn;
 	}
-- 
2.22.0
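
To make the failure mode concrete, the following is a minimal, hypothetical
sketch of the trigger sequence. It is not part of the patch or the thread:
the function name is invented, and it assumes an already-initialised
iova_domain whose usable space spans the 32-bit boundary, using only
alloc_iova()/__free_iova() from include/linux/iova.h (plus DMA_BIT_MASK()
from linux/dma-mapping.h) as of v5.2.

	/* Hypothetical sketch of the use-after-free trigger. */
	static void stale_cached32_node_demo(struct iova_domain *iovad)
	{
		struct iova *lo, *hi;

		/* A sub-4G allocation; on success, cached32_node points at it. */
		lo = alloc_iova(iovad, 1, DMA_BIT_MASK(32) >> PAGE_SHIFT, true);
		/* An above-4G allocation, tracked by cached64_node instead. */
		hi = alloc_iova(iovad, 1, DMA_BIT_MASK(64) >> PAGE_SHIFT, true);
		if (!lo || !hi)
			return;

		/*
		 * Freeing the topmost 32-bit node advances cached32_node via
		 * rb_next(), which here lands on 'hi' -- a node above
		 * dma_32bit_pfn. This is the permitted shortcut the commit
		 * message describes.
		 */
		__free_iova(iovad, lo);

		/*
		 * Without the fix, this free fails the
		 * "free->pfn_hi < iovad->dma_32bit_pfn" test, so cached32_node
		 * is left pointing at the node being freed; the next operation
		 * that consults cached32_node then reads freed memory -- the
		 * use-after-free KASAN reports above.
		 */
		__free_iova(iovad, hi);
	}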


* ✗ Fi.CI.BAT: failure for iommu/iova: Remove stale cached32_node
From: Patchwork @ 2019-07-20 18:58 UTC
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: iommu/iova: Remove stale cached32_node
URL   : https://patchwork.freedesktop.org/series/63970/
State : failure

== Summary ==

Applying: iommu/iova: Remove stale cached32_node
Using index info to reconstruct a base tree...
M	drivers/iommu/iova.c
Falling back to patching base and 3-way merge...
No changes -- Patch already applied.


* Re: [PATCH] iommu/iova: Remove stale cached32_node
From: Sasha Levin @ 2019-07-20 21:47 UTC
  To: Sasha Levin, Chris Wilson, iommu
  Cc: Joerg Roedel, intel-gfx, stable, Robin Murphy

Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: e60aa7b53845 iommu/iova: Extend rbtree node caching.

The bot has tested the following trees: v5.2.1, v5.1.18, v4.19.59.

v5.2.1: Build OK!
v5.1.18: Build OK!
v4.19.59: Failed to apply! Possible dependencies:
    bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")


NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

--
Thanks,
Sasha

* Re: [PATCH] iommu/iova: Remove stale cached32_node
From: Robin Murphy @ 2019-07-22 10:13 UTC
  To: Chris Wilson, iommu; +Cc: intel-gfx, Joerg Roedel, Joerg Roedel, stable

Hi Chris,

On 20/07/2019 19:08, Chris Wilson wrote:
> Since the cached32_node is allowed to be advanced above dma_32bit_pfn
> (to provide a shortcut into the limited range), we need to be careful to
> advance the cache past the node being freed whenever that node is the
> cached32_node itself.

Thanks for digging in - just to confirm my understanding, the 
problematic situation is where both 32-bit and 64-bit nodes have been 
allocated, the topmost 32-bit node is freed, then the lowest 64-bit node 
is freed, which leaves the pointer dangling because the second free 
fails the "free->pfn_hi < iovad->dma_32bit_pfn" check. Does that match 
your reasoning?

Assuming I haven't completely misread the problem, I can't think of a 
more appropriate way to fix it, so;

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

I could swear I did consider that corner case at some point, but since 
Leizhen and I rewrote this stuff about 5 times between us I'm not 
entirely surprised such a subtlety could have got lost again along the way.

Thanks,
Robin.

> [snip: quoted KASAN report and patch, identical to the original message above]
> 
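
For reference, here is the function as it reads with the fix applied -- a
reconstruction from the diff plus its unchanged v5.2 surroundings (the
cached64_node half is assumed context from the upstream source, not shown
in the patch). It also illustrates why the corner case Robin describes is
unique to the 32-bit cache: when free == cached_iova, the 64-bit test
"free->pfn_lo >= cached_iova->pfn_lo" is trivially true, so cached64_node
always advances past the dying node.

	static void
	__cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
	{
		struct iova *cached_iova;

		cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
		if (free == cached_iova ||	/* the cached node itself is dying */
		    (free->pfn_hi < iovad->dma_32bit_pfn &&
		     free->pfn_lo >= cached_iova->pfn_lo)) {
			iovad->cached32_node = rb_next(&free->node);
			iovad->max32_alloc_size = iovad->dma_32bit_pfn;
		}

		/* Trivially true when free == cached_iova, so no stale pointer here. */
		cached_iova = rb_entry(iovad->cached64_node, struct iova, node);
		if (free->pfn_lo >= cached_iova->pfn_lo)
			iovad->cached64_node = rb_next(&free->node);
	}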

* Re: [PATCH] iommu/iova: Remove stale cached32_node
From: Chris Wilson @ 2019-07-22 10:46 UTC
  To: Robin Murphy, iommu; +Cc: intel-gfx, Joerg Roedel, Joerg Roedel, stable

Quoting Robin Murphy (2019-07-22 11:13:49)
> Hi Chris,
> 
> On 20/07/2019 19:08, Chris Wilson wrote:
> > Since the cached32_node is allowed to be advanced above dma_32bit_pfn
> > (to provide a shortcut into the limited range), we need to be careful to
> > advance the cache past the node being freed whenever that node is the
> > cached32_node itself.
> 
> Thanks for digging in - just to confirm my understanding, the 
> problematic situation is where both 32-bit and 64-bit nodes have been 
> allocated, the topmost 32-bit node is freed, then the lowest 64-bit node 
> is freed, which leaves the pointer dangling because the second free 
> fails the "free->pfn_hi < iovad->dma_32bit_pfn" check. Does that match 
> your reasoning?

Yes. Once cached32_node has been advanced to the right of dma_32bit_pfn,
it fails to be picked up on the next pass through
__cached_rbnode_delete_update() should cached32_node itself be the next
victim.
 
> Assuming I haven't completely misread the problem, I can't think of a 
> more appropriate way to fix it, so;
> 
> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
> 
> I could swear I did consider that corner case at some point, but since 
> Leizhen and I rewrote this stuff about 5 times between us I'm not 
> entirely surprised such a subtlety could have got lost again along the way.

I admit to being impressed that rb_prev() does not appear high in the
profiles -- though I guess that is more an artifact of the scary layers
of caching above it.
-Chris
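
A closing detail that ties the splat to the code: KASAN reports an 8-byte
read 32 bytes into a 40-byte iommu_iova object. Assuming the v5.2 layout
of struct iova on x86-64 (a three-pointer rb_node), offset 32 is pfn_lo --
exactly the "cached_iova->pfn_lo" comparison in
__cached_rbnode_delete_update() reading through the stale cached32_node.

	struct iova {
		struct rb_node	node;	/* bytes  0..23: parent/colour + children */
		unsigned long	pfn_hi;	/* bytes 24..31 */
		unsigned long	pfn_lo;	/* bytes 32..39: the flagged 8-byte read */
	};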


* Re: [PATCH] iommu/iova: Remove stale cached32_node
From: Joerg Roedel @ 2019-07-22 15:51 UTC
  To: Chris Wilson; +Cc: iommu, intel-gfx, Robin Murphy, Joerg Roedel, stable

On Sat, Jul 20, 2019 at 07:08:48PM +0100, Chris Wilson wrote:
> ---
>  drivers/iommu/iova.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Applied to iommu/fixes.

