* [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
@ 2020-07-28 13:58 ` Daniel Vetter
0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2020-07-28 13:58 UTC (permalink / raw)
To: DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Daniel Vetter,
Sumit Semwal, Christian König, linux-media, linaro-mm-sig,
Dave Chinner, Qian Cai, linux-xfs, linux-fsdevel,
Thomas Hellström, Andrew Morton, Jason Gunthorpe, linux-mm,
linux-rdma, Maarten Lankhorst
GPU drivers need this in their shrinkers, to be able to throw out
mmapped buffers. Note that we also need dma_resv_lock in shrinkers,
but that loop is resolved by trylocking in shrinkers.

So the full hierarchy is now (ignoring some of the other branches we
already have primed):

mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write

I hope that's not inconsistent with anything mm or fs does; adding
relevant people.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: Dave Chinner <david@fromorbit.com>
Cc: Qian Cai <cai@lca.pw>
Cc: linux-xfs@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: linux-mm@kvack.org
Cc: linux-rdma@vger.kernel.org
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
---
drivers/dma-buf/dma-resv.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 0e6675ec1d11..9678162a4ac5 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -104,12 +104,14 @@ static int __init dma_resv_lockdep(void)
 	struct mm_struct *mm = mm_alloc();
 	struct ww_acquire_ctx ctx;
 	struct dma_resv obj;
+	struct address_space mapping;
 	int ret;
 
 	if (!mm)
 		return -ENOMEM;
 
 	dma_resv_init(&obj);
+	address_space_init_once(&mapping);
 
 	mmap_read_lock(mm);
 	ww_acquire_init(&ctx, &reservation_ww_class);
@@ -117,6 +119,9 @@ static int __init dma_resv_lockdep(void)
 	if (ret == -EDEADLK)
 		dma_resv_lock_slow(&obj, &ctx);
 	fs_reclaim_acquire(GFP_KERNEL);
+	/* for unmap_mapping_range on trylocked buffer objects in shrinkers */
+	i_mmap_lock_write(&mapping);
+	i_mmap_unlock_write(&mapping);
 #ifdef CONFIG_MMU_NOTIFIER
 	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	__dma_fence_might_wait();
--
2.27.0
* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-28 13:58 ` Daniel Vetter
@ 2020-07-28 14:09 ` Patchwork
-1 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2020-07-28 14:09 UTC (permalink / raw)
To: Daniel Vetter; +Cc: intel-gfx
== Series Details ==
Series: dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
URL : https://patchwork.freedesktop.org/series/79980/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
dca1dc9ea07c dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
-:66: WARNING:NO_AUTHOR_SIGN_OFF: Missing Signed-off-by: line by nominal patch author 'Daniel Vetter <daniel.vetter@ffwll.ch>'
total: 0 errors, 1 warnings, 0 checks, 23 lines checked
* [Intel-gfx] ✓ Fi.CI.BAT: success for dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-28 13:58 ` Daniel Vetter
@ 2020-07-28 14:30 ` Patchwork
-1 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2020-07-28 14:30 UTC (permalink / raw)
To: Daniel Vetter; +Cc: intel-gfx
== Series Details ==
Series: dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
URL : https://patchwork.freedesktop.org/series/79980/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_8804 -> Patchwork_18248
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/index.html
Known issues
------------
Here are the changes found in Patchwork_18248 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_exec_suspend@basic-s3:
- fi-tgl-u2: [PASS][1] -> [FAIL][2] ([i915#1888])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-tgl-u2/igt@gem_exec_suspend@basic-s3.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-tgl-u2/igt@gem_exec_suspend@basic-s3.html
* igt@kms_flip@basic-flip-vs-wf_vblank@c-edp1:
- fi-icl-u2: [PASS][3] -> [DMESG-WARN][4] ([i915#1982])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-icl-u2/igt@kms_flip@basic-flip-vs-wf_vblank@c-edp1.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-icl-u2/igt@kms_flip@basic-flip-vs-wf_vblank@c-edp1.html
#### Possible fixes ####
* igt@gem_exec_suspend@basic-s0:
- fi-tgl-u2: [FAIL][5] ([i915#1888]) -> [PASS][6]
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-tgl-u2/igt@gem_exec_suspend@basic-s0.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-tgl-u2/igt@gem_exec_suspend@basic-s0.html
* igt@i915_module_load@reload:
- fi-apl-guc: [DMESG-WARN][7] ([i915#1635] / [i915#1982]) -> [PASS][8]
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-apl-guc/igt@i915_module_load@reload.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-apl-guc/igt@i915_module_load@reload.html
- fi-bsw-kefka: [DMESG-WARN][9] ([i915#1982]) -> [PASS][10] +1 similar issue
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-bsw-kefka/igt@i915_module_load@reload.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-bsw-kefka/igt@i915_module_load@reload.html
* igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic:
- {fi-kbl-7560u}: [DMESG-WARN][11] ([i915#1982]) -> [PASS][12]
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-kbl-7560u/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-kbl-7560u/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-atomic.html
#### Warnings ####
* igt@kms_cursor_legacy@basic-flip-before-cursor-legacy:
- fi-kbl-x1275: [DMESG-WARN][13] ([i915#62] / [i915#92]) -> [DMESG-WARN][14] ([i915#62] / [i915#92] / [i915#95])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-kbl-x1275/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-kbl-x1275/igt@kms_cursor_legacy@basic-flip-before-cursor-legacy.html
* igt@kms_force_connector_basic@force-edid:
- fi-kbl-x1275: [DMESG-WARN][15] ([i915#62] / [i915#92] / [i915#95]) -> [DMESG-WARN][16] ([i915#62] / [i915#92]) +4 similar issues
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/fi-kbl-x1275/igt@kms_force_connector_basic@force-edid.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/fi-kbl-x1275/igt@kms_force_connector_basic@force-edid.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[i915#1635]: https://gitlab.freedesktop.org/drm/intel/issues/1635
[i915#1888]: https://gitlab.freedesktop.org/drm/intel/issues/1888
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62
[i915#92]: https://gitlab.freedesktop.org/drm/intel/issues/92
[i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95
Participating hosts (41 -> 37)
------------------------------
Additional (1): fi-tgl-y
Missing (5): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-byt-clapper
Build changes
-------------
* Linux: CI_DRM_8804 -> Patchwork_18248
CI-20190529: 20190529
CI_DRM_8804: 943d034c433e5be93076cf51fd8ea5b4d7644e8b @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5749: 2fef871e791ceab7841b899691c443167550173d @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_18248: dca1dc9ea07c1e389ab5377902f01adcd6d4d6ec @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
dca1dc9ea07c dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/index.html
* [Intel-gfx] ✗ Fi.CI.IGT: failure for dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-28 13:58 ` Daniel Vetter
@ 2020-07-28 20:58 ` Patchwork
-1 siblings, 0 replies; 21+ messages in thread
From: Patchwork @ 2020-07-28 20:58 UTC (permalink / raw)
To: Daniel Vetter; +Cc: intel-gfx
== Series Details ==
Series: dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
URL : https://patchwork.freedesktop.org/series/79980/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_8804_full -> Patchwork_18248_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with Patchwork_18248_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_18248_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_18248_full:
### IGT changes ###
#### Possible regressions ####
* igt@i915_suspend@fence-restore-untiled:
- shard-skl: NOTRUN -> [INCOMPLETE][1]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl5/igt@i915_suspend@fence-restore-untiled.html
* igt@kms_flip@nonexisting-fb@c-dp1:
- shard-kbl: NOTRUN -> [INCOMPLETE][2]
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl1/igt@kms_flip@nonexisting-fb@c-dp1.html
Known issues
------------
Here are the changes found in Patchwork_18248_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_eio@in-flight-suspend:
- shard-skl: [PASS][3] -> [DMESG-WARN][4] ([i915#1982]) +9 similar issues
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl1/igt@gem_eio@in-flight-suspend.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl7/igt@gem_eio@in-flight-suspend.html
* igt@kms_atomic_transition@plane-all-transition-fencing@edp-1-pipe-c:
- shard-tglb: [PASS][5] -> [INCOMPLETE][6] ([i915#2242])
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-tglb5/igt@kms_atomic_transition@plane-all-transition-fencing@edp-1-pipe-c.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-tglb5/igt@kms_atomic_transition@plane-all-transition-fencing@edp-1-pipe-c.html
* igt@kms_big_fb@y-tiled-64bpp-rotate-180:
- shard-glk: [PASS][7] -> [DMESG-FAIL][8] ([i915#118] / [i915#95])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk1/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk8/igt@kms_big_fb@y-tiled-64bpp-rotate-180.html
* igt@kms_cursor_edge_walk@pipe-c-64x64-top-edge:
- shard-glk: [PASS][9] -> [DMESG-WARN][10] ([i915#1982])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk2/igt@kms_cursor_edge_walk@pipe-c-64x64-top-edge.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk6/igt@kms_cursor_edge_walk@pipe-c-64x64-top-edge.html
* igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic:
- shard-glk: [PASS][11] -> [FAIL][12] ([i915#72])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk5/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk7/igt@kms_cursor_legacy@2x-flip-vs-cursor-atomic.html
* igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
- shard-glk: [PASS][13] -> [INCOMPLETE][14] ([i915#2241])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk7/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk4/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html
* igt@kms_cursor_legacy@flip-vs-cursor-legacy:
- shard-skl: [PASS][15] -> [FAIL][16] ([IGT#5])
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl8/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl5/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
* igt@kms_flip@2x-wf_vblank-ts-check-interruptible@ac-hdmi-a1-hdmi-a2:
- shard-glk: [PASS][17] -> [FAIL][18] ([i915#2122])
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk1/igt@kms_flip@2x-wf_vblank-ts-check-interruptible@ac-hdmi-a1-hdmi-a2.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk3/igt@kms_flip@2x-wf_vblank-ts-check-interruptible@ac-hdmi-a1-hdmi-a2.html
* igt@kms_flip@flip-vs-expired-vblank@a-edp1:
- shard-skl: [PASS][19] -> [FAIL][20] ([i915#2122])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl1/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl3/igt@kms_flip@flip-vs-expired-vblank@a-edp1.html
* igt@kms_flip@flip-vs-suspend@c-dp1:
- shard-kbl: [PASS][21] -> [DMESG-WARN][22] ([i915#180]) +5 similar issues
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl6/igt@kms_flip@flip-vs-suspend@c-dp1.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl1/igt@kms_flip@flip-vs-suspend@c-dp1.html
* igt@kms_flip@nonexisting-fb-interruptible@a-edp1:
- shard-iclb: [PASS][23] -> [INCOMPLETE][24] ([i915#2240]) +1 similar issue
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-iclb4/igt@kms_flip@nonexisting-fb-interruptible@a-edp1.html
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-iclb1/igt@kms_flip@nonexisting-fb-interruptible@a-edp1.html
* igt@kms_flip_tiling@flip-changes-tiling-y:
- shard-skl: [PASS][25] -> [FAIL][26] ([i915#699])
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl7/igt@kms_flip_tiling@flip-changes-tiling-y.html
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl8/igt@kms_flip_tiling@flip-changes-tiling-y.html
* igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite:
- shard-tglb: [PASS][27] -> [DMESG-WARN][28] ([i915#1982]) +3 similar issues
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-tglb3/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-tglb8/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-pwrite.html
* igt@kms_frontbuffer_tracking@fbc-badstride:
- shard-kbl: [PASS][29] -> [DMESG-WARN][30] ([i915#1982])
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl2/igt@kms_frontbuffer_tracking@fbc-badstride.html
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl4/igt@kms_frontbuffer_tracking@fbc-badstride.html
* igt@kms_psr@psr2_suspend:
- shard-iclb: [PASS][31] -> [SKIP][32] ([fdo#109441]) +1 similar issue
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-iclb2/igt@kms_psr@psr2_suspend.html
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-iclb7/igt@kms_psr@psr2_suspend.html
* igt@perf_pmu@module-unload:
- shard-apl: [PASS][33] -> [DMESG-WARN][34] ([i915#1635] / [i915#1982])
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl4/igt@perf_pmu@module-unload.html
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-apl4/igt@perf_pmu@module-unload.html
#### Possible fixes ####
* igt@gem_exec_schedule@smoketest-all:
- shard-tglb: [INCOMPLETE][35] -> [PASS][36]
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-tglb8/igt@gem_exec_schedule@smoketest-all.html
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-tglb7/igt@gem_exec_schedule@smoketest-all.html
* igt@gem_exec_whisper@basic-forked:
- shard-glk: [DMESG-WARN][37] ([i915#118] / [i915#95]) -> [PASS][38] +2 similar issues
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk6/igt@gem_exec_whisper@basic-forked.html
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk2/igt@gem_exec_whisper@basic-forked.html
* igt@kms_atomic_transition@plane-all-transition-nonblocking@edp-1-pipe-d:
- shard-tglb: [INCOMPLETE][39] ([i915#2242]) -> [PASS][40] +1 similar issue
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-tglb3/igt@kms_atomic_transition@plane-all-transition-nonblocking@edp-1-pipe-d.html
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-tglb6/igt@kms_atomic_transition@plane-all-transition-nonblocking@edp-1-pipe-d.html
* igt@kms_color@pipe-c-ctm-0-25:
- shard-skl: [DMESG-WARN][41] ([i915#1982]) -> [PASS][42] +8 similar issues
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl9/igt@kms_color@pipe-c-ctm-0-25.html
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl2/igt@kms_color@pipe-c-ctm-0-25.html
* igt@kms_cursor_crc@pipe-b-cursor-128x42-random:
- shard-apl: [DMESG-WARN][43] ([i915#1635] / [i915#62]) -> [PASS][44] +31 similar issues
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl4/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-apl7/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
* igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2:
- shard-glk: [FAIL][45] ([i915#79]) -> [PASS][46]
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-glk4/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2.html
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-glk2/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2.html
* igt@kms_flip@flip-vs-suspend-interruptible@c-edp1:
- shard-skl: [INCOMPLETE][47] ([i915#198]) -> [PASS][48]
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl4/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl10/igt@kms_flip@flip-vs-suspend-interruptible@c-edp1.html
* igt@kms_flip@nonexisting-fb-interruptible@b-dp1:
- shard-kbl: [INCOMPLETE][49] ([i915#2240]) -> [PASS][50]
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl2/igt@kms_flip@nonexisting-fb-interruptible@b-dp1.html
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl2/igt@kms_flip@nonexisting-fb-interruptible@b-dp1.html
* igt@kms_flip@nonexisting-fb@b-dp1:
- shard-kbl: [INCOMPLETE][51] -> [PASS][52]
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl7/igt@kms_flip@nonexisting-fb@b-dp1.html
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl1/igt@kms_flip@nonexisting-fb@b-dp1.html
* igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu:
- shard-tglb: [DMESG-WARN][53] ([i915#1982]) -> [PASS][54]
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-tglb2/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-tglb7/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-pri-shrfb-draw-mmap-cpu.html
* igt@kms_hdr@bpc-switch-suspend:
- shard-kbl: [DMESG-WARN][55] ([i915#180]) -> [PASS][56] +3 similar issues
[55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl1/igt@kms_hdr@bpc-switch-suspend.html
[56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl3/igt@kms_hdr@bpc-switch-suspend.html
- shard-skl: [FAIL][57] ([i915#1188]) -> [PASS][58]
[57]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl7/igt@kms_hdr@bpc-switch-suspend.html
[58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl5/igt@kms_hdr@bpc-switch-suspend.html
* igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-skl: [FAIL][59] ([fdo#108145] / [i915#265]) -> [PASS][60]
[59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
[60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl1/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
* igt@kms_psr2_su@frontbuffer:
- shard-iclb: [SKIP][61] ([fdo#109642] / [fdo#111068]) -> [PASS][62]
[61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-iclb6/igt@kms_psr2_su@frontbuffer.html
[62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-iclb2/igt@kms_psr2_su@frontbuffer.html
* igt@kms_psr@psr2_cursor_mmap_cpu:
- shard-iclb: [SKIP][63] ([fdo#109441]) -> [PASS][64] +2 similar issues
[63]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-iclb4/igt@kms_psr@psr2_cursor_mmap_cpu.html
[64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html
* igt@perf_pmu@semaphore-busy@rcs0:
- shard-kbl: [FAIL][65] ([i915#1820]) -> [PASS][66]
[65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl7/igt@perf_pmu@semaphore-busy@rcs0.html
[66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl1/igt@perf_pmu@semaphore-busy@rcs0.html
#### Warnings ####
* igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size:
- shard-skl: [DMESG-FAIL][67] ([i915#1982]) -> [DMESG-WARN][68] ([i915#1982])
[67]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl8/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
[68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl6/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size.html
* igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
- shard-kbl: [DMESG-WARN][69] ([i915#180]) -> [INCOMPLETE][70] ([i915#155])
[69]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-kbl6/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
[70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-kbl6/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
* igt@kms_plane_alpha_blend@pipe-a-alpha-basic:
- shard-skl: [FAIL][71] ([fdo#108145] / [i915#265]) -> [DMESG-FAIL][72] ([fdo#108145] / [i915#1982]) +1 similar issue
[71]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl7/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html
[72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl5/igt@kms_plane_alpha_blend@pipe-a-alpha-basic.html
* igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb:
- shard-apl: [DMESG-FAIL][73] ([fdo#108145] / [i915#1635] / [i915#62]) -> [FAIL][74] ([fdo#108145] / [i915#1635] / [i915#265])
[73]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl4/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html
[74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-apl7/igt@kms_plane_alpha_blend@pipe-b-alpha-opaque-fb.html
* igt@runner@aborted:
- shard-apl: ([FAIL][75], [FAIL][76], [FAIL][77], [FAIL][78], [FAIL][79], [FAIL][80], [FAIL][81], [FAIL][82], [FAIL][83], [FAIL][84], [FAIL][85], [FAIL][86], [FAIL][87], [FAIL][88]) ([i915#1610] / [i915#1635] / [i915#2110] / [i915#637]) -> [FAIL][89] ([i915#1635] / [i915#2110])
[75]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl3/igt@runner@aborted.html
[76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl1/igt@runner@aborted.html
[77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl4/igt@runner@aborted.html
[78]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl3/igt@runner@aborted.html
[79]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl6/igt@runner@aborted.html
[80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl2/igt@runner@aborted.html
[81]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl4/igt@runner@aborted.html
[82]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl3/igt@runner@aborted.html
[83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl2/igt@runner@aborted.html
[84]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl1/igt@runner@aborted.html
[85]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl1/igt@runner@aborted.html
[86]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl8/igt@runner@aborted.html
[87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl7/igt@runner@aborted.html
[88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-apl7/igt@runner@aborted.html
[89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-apl4/igt@runner@aborted.html
- shard-skl: ([FAIL][90], [FAIL][91], [FAIL][92], [FAIL][93], [FAIL][94], [FAIL][95], [FAIL][96], [FAIL][97], [FAIL][98], [FAIL][99]) ([i915#2110]) -> ([FAIL][100], [FAIL][101]) ([i915#1436] / [i915#1611] / [i915#2029] / [i915#2110])
[90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl2/igt@runner@aborted.html
[91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl3/igt@runner@aborted.html
[92]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl4/igt@runner@aborted.html
[93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl7/igt@runner@aborted.html
[94]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl1/igt@runner@aborted.html
[95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl3/igt@runner@aborted.html
[96]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl1/igt@runner@aborted.html
[97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl10/igt@runner@aborted.html
[98]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl7/igt@runner@aborted.html
[99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_8804/shard-skl8/igt@runner@aborted.html
[100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl3/igt@runner@aborted.html
[101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/shard-skl2/igt@runner@aborted.html
[IGT#5]: https://gitlab.freedesktop.org/drm/igt-gpu-tools/issues/5
[fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
[fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
[fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
[fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
[i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
[i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
[i915#1436]: https://gitlab.freedesktop.org/drm/intel/issues/1436
[i915#155]: https://gitlab.freedesktop.org/drm/intel/issues/155
[i915#1610]: https://gitlab.freedesktop.org/drm/intel/issues/1610
[i915#1611]: https://gitlab.freedesktop.org/drm/intel/issues/1611
[i915#1635]: https://gitlab.freedesktop.org/drm/intel/issues/1635
[i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
[i915#1820]: https://gitlab.freedesktop.org/drm/intel/issues/1820
[i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
[i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
[i915#2029]: https://gitlab.freedesktop.org/drm/intel/issues/2029
[i915#2110]: https://gitlab.freedesktop.org/drm/intel/issues/2110
[i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
[i915#2240]: https://gitlab.freedesktop.org/drm/intel/issues/2240
[i915#2241]: https://gitlab.freedesktop.org/drm/intel/issues/2241
[i915#2242]: https://gitlab.freedesktop.org/drm/intel/issues/2242
[i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
[i915#62]: https://gitlab.freedesktop.org/drm/intel/issues/62
[i915#637]: https://gitlab.freedesktop.org/drm/intel/issues/637
[i915#699]: https://gitlab.freedesktop.org/drm/intel/issues/699
[i915#72]: https://gitlab.freedesktop.org/drm/intel/issues/72
[i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79
[i915#95]: https://gitlab.freedesktop.org/drm/intel/issues/95
Participating hosts (11 -> 11)
------------------------------
No changes in participating hosts
Build changes
-------------
* Linux: CI_DRM_8804 -> Patchwork_18248
CI-20190529: 20190529
CI_DRM_8804: 943d034c433e5be93076cf51fd8ea5b4d7644e8b @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5749: 2fef871e791ceab7841b899691c443167550173d @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_18248: dca1dc9ea07c1e389ab5377902f01adcd6d4d6ec @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_18248/index.html
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [Linaro-mm-sig] [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-28 13:58 ` Daniel Vetter
(?)
@ 2020-07-30 12:03 ` Christian König
-1 siblings, 0 replies; 21+ messages in thread
From: Christian König @ 2020-07-30 12:03 UTC (permalink / raw)
To: Daniel Vetter, DRI Development
Cc: linux-xfs, Maarten Lankhorst, linux-rdma,
Intel Graphics Development, Dave Chinner, Christian König,
linaro-mm-sig, linux-mm, Jason Gunthorpe, Qian Cai,
linux-fsdevel, Daniel Vetter, Andrew Morton, linux-media
Am 28.07.20 um 15:58 schrieb Daniel Vetter:
> GPU drivers need this in their shrinkers, to be able to throw out
> mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
> but that loop is resolved by trylocking in shrinkers.
>
> So full hierarchy is now (ignore some of the other branches we already
> have primed):
>
> mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
>
> I hope that's not inconsistent with anything mm or fs does, adding
> relevant people.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: "Christian König" <christian.koenig@amd.com>
> Cc: linux-media@vger.kernel.org
> Cc: linaro-mm-sig@lists.linaro.org
> Cc: Dave Chinner <david@fromorbit.com>
> Cc: Qian Cai <cai@lca.pw>
> Cc: linux-xfs@vger.kernel.org
> Cc: linux-fsdevel@vger.kernel.org
> Cc: Thomas Hellström (Intel) <thomas_os@shipmail.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Jason Gunthorpe <jgg@mellanox.com>
> Cc: linux-mm@kvack.org
> Cc: linux-rdma@vger.kernel.org
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/dma-buf/dma-resv.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
> index 0e6675ec1d11..9678162a4ac5 100644
> --- a/drivers/dma-buf/dma-resv.c
> +++ b/drivers/dma-buf/dma-resv.c
> @@ -104,12 +104,14 @@ static int __init dma_resv_lockdep(void)
> struct mm_struct *mm = mm_alloc();
> struct ww_acquire_ctx ctx;
> struct dma_resv obj;
> + struct address_space mapping;
> int ret;
>
> if (!mm)
> return -ENOMEM;
>
> dma_resv_init(&obj);
> + address_space_init_once(&mapping);
>
> mmap_read_lock(mm);
> ww_acquire_init(&ctx, &reservation_ww_class);
> @@ -117,6 +119,9 @@ static int __init dma_resv_lockdep(void)
> if (ret == -EDEADLK)
> dma_resv_lock_slow(&obj, &ctx);
> fs_reclaim_acquire(GFP_KERNEL);
> + /* for unmap_mapping_range on trylocked buffer objects in shrinkers */
> + i_mmap_lock_write(&mapping);
> + i_mmap_unlock_write(&mapping);
> #ifdef CONFIG_MMU_NOTIFIER
> lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
> __dma_fence_might_wait();
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-28 13:58 ` Daniel Vetter
(?)
@ 2020-07-30 12:17 ` Thomas Hellström (Intel)
-1 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström (Intel) @ 2020-07-30 12:17 UTC (permalink / raw)
To: Daniel Vetter, DRI Development
Cc: Intel Graphics Development, Daniel Vetter, Sumit Semwal,
Christian König, linux-media, linaro-mm-sig, Dave Chinner,
Qian Cai, linux-xfs, linux-fsdevel, Andrew Morton,
Jason Gunthorpe, linux-mm, linux-rdma, Maarten Lankhorst
On 7/28/20 3:58 PM, Daniel Vetter wrote:
> GPU drivers need this in their shrinkers, to be able to throw out
> mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
> but that loop is resolved by trylocking in shrinkers.
>
> So full hierarchy is now (ignore some of the other branches we already
> have primed):
>
> mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
>
> I hope that's not inconsistent with anything mm or fs does, adding
> relevant people.
>
Looks OK to me. The mapping_dirty_helpers run under the i_mmap_lock, but
don't allocate any memory AFAICT.
Since huge page-table-entry splitting may happen under the i_mmap_lock
from unmap_mapping_range() it might be worth figuring out how new page
directory pages are allocated, though.
/Thomas
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-30 12:17 ` Thomas Hellström (Intel)
(?)
@ 2020-07-30 13:17 ` Daniel Vetter
-1 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2020-07-30 13:17 UTC (permalink / raw)
To: Thomas Hellström (Intel)
Cc: DRI Development, Intel Graphics Development, Daniel Vetter,
Sumit Semwal, Christian König,
open list:DMA BUFFER SHARING FRAMEWORK,
moderated list:DMA BUFFER SHARING FRAMEWORK, Dave Chinner,
Qian Cai, linux-xfs, linux-fsdevel, Andrew Morton,
Jason Gunthorpe, Linux MM, linux-rdma, Maarten Lankhorst
On Thu, Jul 30, 2020 at 2:17 PM Thomas Hellström (Intel)
<thomas_os@shipmail.org> wrote:
>
>
> On 7/28/20 3:58 PM, Daniel Vetter wrote:
> > GPU drivers need this in their shrinkers, to be able to throw out
> > mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
> > but that loop is resolved by trylocking in shrinkers.
> >
> > So full hierarchy is now (ignore some of the other branches we already
> > have primed):
> >
> > mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
> >
> > I hope that's not inconsistent with anything mm or fs does, adding
> > relevant people.
> >
> Looks OK to me. The mapping_dirty_helpers run under the i_mmap_lock, but
> don't allocate any memory AFAICT.
>
> Since huge page-table-entry splitting may happen under the i_mmap_lock
> from unmap_mapping_range() it might be worth figuring out how new page
> directory pages are allocated, though.
ofc I'm not an mm expert at all, but I did try to scroll through all
i_mmap_lock_write/read callers. Found the following:
- kernel/events/uprobes.c in build_map_info:
/*
* Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion through
* reclaim. This is optimistic, no harm done if it fails.
*/
- I got lost in the hugetlb.c code and couldn't convince myself it's
not allocating page directories at various levels with something else
than GFP_KERNEL.
So looks like the recursion is clearly there and known, but the
hugepage code is too complex and flying over my head.
-Daniel
>
> /Thomas
>
>
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
2020-07-30 13:17 ` Daniel Vetter
(?)
@ 2020-07-30 16:45 ` Thomas Hellström (Intel)
-1 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström (Intel) @ 2020-07-30 16:45 UTC (permalink / raw)
To: Daniel Vetter
Cc: DRI Development, Intel Graphics Development, Daniel Vetter,
Sumit Semwal, Christian König,
open list:DMA BUFFER SHARING FRAMEWORK,
moderated list:DMA BUFFER SHARING FRAMEWORK, Dave Chinner,
Qian Cai, linux-xfs, linux-fsdevel, Andrew Morton,
Jason Gunthorpe, Linux MM, linux-rdma, Maarten Lankhorst
On 7/30/20 3:17 PM, Daniel Vetter wrote:
> On Thu, Jul 30, 2020 at 2:17 PM Thomas Hellström (Intel)
> <thomas_os@shipmail.org> wrote:
>>
>> On 7/28/20 3:58 PM, Daniel Vetter wrote:
>>> GPU drivers need this in their shrinkers, to be able to throw out
>>> mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
>>> but that loop is resolved by trylocking in shrinkers.
>>>
>>> So full hierarchy is now (ignore some of the other branches we already
>>> have primed):
>>>
>>> mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
>>>
>>> I hope that's not inconsistent with anything mm or fs does, adding
>>> relevant people.
>>>
>> Looks OK to me. The mapping_dirty_helpers run under the i_mmap_lock, but
>> don't allocate any memory AFAICT.
>>
>> Since huge page-table-entry splitting may happen under the i_mmap_lock
>> from unmap_mapping_range() it might be worth figuring out how new page
>> directory pages are allocated, though.
> ofc I'm not an mm expert at all, but I did try to scroll through all
> i_mmap_lock_write/read callers. Found the following:
>
> - kernel/events/uprobes.c in build_map_info:
>
> /*
> * Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion through
> * reclaim. This is optimistic, no harm done if it fails.
> */
>
> - I got lost in the hugetlb.c code and couldn't convince myself it's
> not allocating page directories at various levels with something else
> than GFP_KERNEL.
>
> So looks like the recursion is clearly there and known, but the
> hugepage code is too complex and flying over my head.
> -Daniel
OK, so I inverted your annotation and ran a memory hog, and got the
below splat. So clearly your proposed reclaim->i_mmap_lock locking order
is an already established one.
So
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
8<---------------------------------------------------------------------------------------------
[ 308.324654] WARNING: possible circular locking dependency detected
[ 308.324655] 5.8.0-rc2+ #16 Not tainted
[ 308.324656] ------------------------------------------------------
[ 308.324657] kswapd0/98 is trying to acquire lock:
[ 308.324658] ffff92a16f758428 (&mapping->i_mmap_rwsem){++++}-{3:3},
at: rmap_walk_file+0x1c0/0x2f0
[ 308.324663]
but task is already holding lock:
[ 308.324664] ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[ 308.324666]
which lock already depends on the new lock.
[ 308.324667]
the existing dependency chain (in reverse order) is:
[ 308.324667]
-> #1 (fs_reclaim){+.+.}-{0:0}:
[ 308.324670] fs_reclaim_acquire+0x34/0x40
[ 308.324672] dma_resv_lockdep+0x186/0x224
[ 308.324675] do_one_initcall+0x5d/0x2c0
[ 308.324676] kernel_init_freeable+0x222/0x288
[ 308.324678] kernel_init+0xa/0x107
[ 308.324679] ret_from_fork+0x1f/0x30
[ 308.324680]
-> #0 (&mapping->i_mmap_rwsem){++++}-{3:3}:
[ 308.324682] __lock_acquire+0x119f/0x1fc0
[ 308.324683] lock_acquire+0xa4/0x3b0
[ 308.324685] down_read+0x2d/0x110
[ 308.324686] rmap_walk_file+0x1c0/0x2f0
[ 308.324687] page_referenced+0x133/0x150
[ 308.324689] shrink_active_list+0x142/0x610
[ 308.324690] balance_pgdat+0x229/0x620
[ 308.324691] kswapd+0x200/0x470
[ 308.324693] kthread+0x11f/0x140
[ 308.324694] ret_from_fork+0x1f/0x30
[ 308.324694]
other info that might help us debug this:
[ 308.324695] Possible unsafe locking scenario:
[ 308.324695] CPU0 CPU1
[ 308.324696] ---- ----
[ 308.324696] lock(fs_reclaim);
[ 308.324697] lock(&mapping->i_mmap_rwsem);
[ 308.324698] lock(fs_reclaim);
[ 308.324699] lock(&mapping->i_mmap_rwsem);
[ 308.324699]
*** DEADLOCK ***
[ 308.324700] 1 lock held by kswapd0/98:
[ 308.324701] #0: ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[ 308.324702]
stack backtrace:
[ 308.324704] CPU: 1 PID: 98 Comm: kswapd0 Not tainted 5.8.0-rc2+ #16
[ 308.324705] Hardware name: VMware, Inc. VMware Virtual Platform/440BX
Desktop Reference Platform, BIOS 6.00 07/29/2019
[ 308.324706] Call Trace:
[ 308.324710] dump_stack+0x92/0xc8
[ 308.324711] check_noncircular+0x12d/0x150
[ 308.324713] __lock_acquire+0x119f/0x1fc0
[ 308.324715] lock_acquire+0xa4/0x3b0
[ 308.324716] ? rmap_walk_file+0x1c0/0x2f0
[ 308.324717] ? __lock_acquire+0x394/0x1fc0
[ 308.324719] down_read+0x2d/0x110
[ 308.324720] ? rmap_walk_file+0x1c0/0x2f0
[ 308.324721] rmap_walk_file+0x1c0/0x2f0
[ 308.324722] page_referenced+0x133/0x150
[ 308.324724] ? __page_set_anon_rmap+0x70/0x70
[ 308.324725] ? page_get_anon_vma+0x190/0x190
[ 308.324726] shrink_active_list+0x142/0x610
[ 308.324728] balance_pgdat+0x229/0x620
[ 308.324730] kswapd+0x200/0x470
[ 308.324731] ? lockdep_hardirqs_on_prepare+0xf5/0x170
[ 308.324733] ? finish_wait+0x80/0x80
[ 308.324734] ? balance_pgdat+0x620/0x620
[ 308.324736] kthread+0x11f/0x140
[ 308.324737] ? kthread_create_worker_on_cpu+0x40/0x40
[ 308.324739] ret_from_fork+0x1f/0x30
>> /Thomas
>>
>>
>>
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
@ 2020-07-30 16:45 ` Thomas Hellström (Intel)
0 siblings, 0 replies; 21+ messages in thread
From: Thomas Hellström (Intel) @ 2020-07-30 16:45 UTC (permalink / raw)
To: Daniel Vetter
Cc: linux-xfs, linux-rdma, Intel Graphics Development, Dave Chinner,
DRI Development, Christian König,
moderated list:DMA BUFFER SHARING FRAMEWORK, Linux MM,
Jason Gunthorpe, Qian Cai, linux-fsdevel, Daniel Vetter,
Andrew Morton, open list:DMA BUFFER SHARING FRAMEWORK
On 7/30/20 3:17 PM, Daniel Vetter wrote:
> On Thu, Jul 30, 2020 at 2:17 PM Thomas Hellström (Intel)
> <thomas_os@shipmail.org> wrote:
>>
>> On 7/28/20 3:58 PM, Daniel Vetter wrote:
>>> GPU drivers need this in their shrinkers, to be able to throw out
>>> mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
>>> but that loop is resolved by trylocking in shrinkers.
>>>
>>> So full hierarchy is now (ignore some of the other branches we already
>>> have primed):
>>>
>>> mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
>>>
>>> I hope that's not inconsistent with anything mm or fs does, adding
>>> relevant people.
>>>
>> Looks OK to me. The mapping_dirty_helpers run under the i_mmap_lock, but
>> don't allocate any memory AFAICT.
>>
>> Since huge page-table-entry splitting may happen under the i_mmap_lock
>> from unmap_mapping_range() it might be worth figuring out how new page
>> directory pages are allocated, though.
> ofc I'm not an mm expert at all, but I did try to scroll through all
> i_mmap_lock_write/read callers. Found the following:
>
> - kernel/events/uprobes.c in build_map_info:
>
> /*
> * Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion through
> * reclaim. This is optimistic, no harm done if it fails.
> */
>
> - I got lost in the hugetlb.c code and couldn't convince myself it's
> not allocating page directories at various levels with something else
> than GFP_KERNEL.
>
> So looks like the recursion is clearly there and known, but the
> hugepage code is too complex and flying over my head.
> -Daniel
OK, so I inverted your annotation and ran a memory hog, and got the
below splat. So clearly your proposed reclaim->i_mmap_lock locking order
is an already established one.
So
Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
8<---------------------------------------------------------------------------------------------
[ 308.324654] WARNING: possible circular locking dependency detected
[ 308.324655] 5.8.0-rc2+ #16 Not tainted
[ 308.324656] ------------------------------------------------------
[ 308.324657] kswapd0/98 is trying to acquire lock:
[ 308.324658] ffff92a16f758428 (&mapping->i_mmap_rwsem){++++}-{3:3},
at: rmap_walk_file+0x1c0/0x2f0
[ 308.324663]
but task is already holding lock:
[ 308.324664] ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[ 308.324666]
which lock already depends on the new lock.
[ 308.324667]
the existing dependency chain (in reverse order) is:
[ 308.324667]
-> #1 (fs_reclaim){+.+.}-{0:0}:
[ 308.324670] fs_reclaim_acquire+0x34/0x40
[ 308.324672] dma_resv_lockdep+0x186/0x224
[ 308.324675] do_one_initcall+0x5d/0x2c0
[ 308.324676] kernel_init_freeable+0x222/0x288
[ 308.324678] kernel_init+0xa/0x107
[ 308.324679] ret_from_fork+0x1f/0x30
[ 308.324680]
-> #0 (&mapping->i_mmap_rwsem){++++}-{3:3}:
[ 308.324682] __lock_acquire+0x119f/0x1fc0
[ 308.324683] lock_acquire+0xa4/0x3b0
[ 308.324685] down_read+0x2d/0x110
[ 308.324686] rmap_walk_file+0x1c0/0x2f0
[ 308.324687] page_referenced+0x133/0x150
[ 308.324689] shrink_active_list+0x142/0x610
[ 308.324690] balance_pgdat+0x229/0x620
[ 308.324691] kswapd+0x200/0x470
[ 308.324693] kthread+0x11f/0x140
[ 308.324694] ret_from_fork+0x1f/0x30
[ 308.324694]
other info that might help us debug this:
[ 308.324695] Possible unsafe locking scenario:
[ 308.324695] CPU0 CPU1
[ 308.324696] ---- ----
[ 308.324696] lock(fs_reclaim);
[ 308.324697] lock(&mapping->i_mmap_rwsem);
[ 308.324698] lock(fs_reclaim);
[ 308.324699] lock(&mapping->i_mmap_rwsem);
[ 308.324699]
*** DEADLOCK ***
[ 308.324700] 1 lock held by kswapd0/98:
[ 308.324701] #0: ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
__fs_reclaim_acquire+0x5/0x30
[ 308.324702]
stack backtrace:
[ 308.324704] CPU: 1 PID: 98 Comm: kswapd0 Not tainted 5.8.0-rc2+ #16
[ 308.324705] Hardware name: VMware, Inc. VMware Virtual Platform/440BX
Desktop Reference Platform, BIOS 6.00 07/29/2019
[ 308.324706] Call Trace:
[ 308.324710] dump_stack+0x92/0xc8
[ 308.324711] check_noncircular+0x12d/0x150
[ 308.324713] __lock_acquire+0x119f/0x1fc0
[ 308.324715] lock_acquire+0xa4/0x3b0
[ 308.324716] ? rmap_walk_file+0x1c0/0x2f0
[ 308.324717] ? __lock_acquire+0x394/0x1fc0
[ 308.324719] down_read+0x2d/0x110
[ 308.324720] ? rmap_walk_file+0x1c0/0x2f0
[ 308.324721] rmap_walk_file+0x1c0/0x2f0
[ 308.324722] page_referenced+0x133/0x150
[ 308.324724] ? __page_set_anon_rmap+0x70/0x70
[ 308.324725] ? page_get_anon_vma+0x190/0x190
[ 308.324726] shrink_active_list+0x142/0x610
[ 308.324728] balance_pgdat+0x229/0x620
[ 308.324730] kswapd+0x200/0x470
[ 308.324731] ? lockdep_hardirqs_on_prepare+0xf5/0x170
[ 308.324733] ? finish_wait+0x80/0x80
[ 308.324734] ? balance_pgdat+0x620/0x620
[ 308.324736] kthread+0x11f/0x140
[ 308.324737] ? kthread_create_worker_on_cpu+0x40/0x40
[ 308.324739] ret_from_fork+0x1f/0x30
>> /Thomas
>>
>>
>>
>
* Re: [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv
@ 2020-09-17 13:19 ` Daniel Vetter
0 siblings, 0 replies; 21+ messages in thread
From: Daniel Vetter @ 2020-09-17 13:19 UTC (permalink / raw)
To: Thomas Hellström (Intel)
Cc: Daniel Vetter, DRI Development, Intel Graphics Development,
Daniel Vetter, Sumit Semwal, Christian König,
open list:DMA BUFFER SHARING FRAMEWORK,
moderated list:DMA BUFFER SHARING FRAMEWORK, Dave Chinner,
Qian Cai, linux-xfs, linux-fsdevel, Andrew Morton,
Jason Gunthorpe, Linux MM, linux-rdma, Maarten Lankhorst
On Thu, Jul 30, 2020 at 06:45:14PM +0200, Thomas Hellström (Intel) wrote:
>
> On 7/30/20 3:17 PM, Daniel Vetter wrote:
> > On Thu, Jul 30, 2020 at 2:17 PM Thomas Hellström (Intel)
> > <thomas_os@shipmail.org> wrote:
> > >
> > > On 7/28/20 3:58 PM, Daniel Vetter wrote:
> > > > GPU drivers need this in their shrinkers, to be able to throw out
> > > > mmap'ed buffers. Note that we also need dma_resv_lock in shrinkers,
> > > > but that loop is resolved by trylocking in shrinkers.
> > > >
> > > > So full hierarchy is now (ignore some of the other branches we already
> > > > have primed):
> > > >
> > > > mmap_read_lock -> dma_resv -> shrinkers -> i_mmap_lock_write
> > > >
> > > > I hope that's not inconsistent with anything mm or fs does, adding
> > > > relevant people.
> > > >
> > > Looks OK to me. The mapping_dirty_helpers run under the i_mmap_lock, but
> > > don't allocate any memory AFAICT.
> > >
> > > Since huge page-table-entry splitting may happen under the i_mmap_lock
> > > from unmap_mapping_range() it might be worth figuring out how new page
> > > directory pages are allocated, though.
> > ofc I'm not an mm expert at all, but I did try to scroll through all
> > i_mmap_lock_write/read callers. Found the following:
> >
> > - kernel/events/uprobes.c in build_map_info:
> >
> > /*
> > * Needs GFP_NOWAIT to avoid i_mmap_rwsem recursion through
> > * reclaim. This is optimistic, no harm done if it fails.
> > */
> >
> > - I got lost in the hugetlb.c code and couldn't convince myself it's
> > not allocating page directories at various levels with something else
> > than GFP_KERNEL.
> >
> > So looks like the recursion is clearly there and known, but the
> > hugepage code is too complex and flying over my head.
> > -Daniel
>
> OK, so I inverted your annotation and ran a memory hog, and got the below
> splat. So clearly your proposed reclaim->i_mmap_lock locking order is an
> already established one.
>
> So
>
> Reviewed-by: Thomas Hellström <thomas.hellstrom@intel.com>
No one complaining that this is a terrible idea and two reviews from
people who know stuff, so I went ahead and pushed this to drm-misc-next.
Thanks for taking a look at this.
-Daniel
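
The experiment described above (prime the inverted order at init, then let reclaim take the locks the normal way) can be mimicked outside the kernel with a toy order tracker. This is only a sketch of the lockdep idea (real lockdep also walks dependency chains transitively, which this does not), and `OrderTracker` is a made-up illustration, not a kernel API:

```python
class OrderTracker:
    """Toy single-edge version of lockdep's dependency check."""

    def __init__(self):
        self.edges = set()  # (held, acquired) pairs recorded so far

    def acquire(self, held, new):
        """Record 'new' taken while 'held' is held; report an inversion."""
        if (new, held) in self.edges:
            return ("possible circular locking dependency: "
                    f"{held} -> {new} vs. recorded {new} -> {held}")
        self.edges.add((held, new))
        return None

tracker = OrderTracker()

# Boot-time priming with the annotation inverted, as in the experiment:
# fs_reclaim is acquired while i_mmap_rwsem is held.
assert tracker.acquire("i_mmap_rwsem", "fs_reclaim") is None

# kswapd under memory pressure: i_mmap_rwsem taken under fs_reclaim.
warning = tracker.acquire("fs_reclaim", "i_mmap_rwsem")
print(warning)  # prints the detected inversion
```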
>
> 8<---------------------------------------------------------------------------------------------
>
> [ 308.324654] WARNING: possible circular locking dependency detected
> [ 308.324655] 5.8.0-rc2+ #16 Not tainted
> [ 308.324656] ------------------------------------------------------
> [ 308.324657] kswapd0/98 is trying to acquire lock:
> [ 308.324658] ffff92a16f758428 (&mapping->i_mmap_rwsem){++++}-{3:3}, at:
> rmap_walk_file+0x1c0/0x2f0
> [ 308.324663]
> but task is already holding lock:
> [ 308.324664] ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
> __fs_reclaim_acquire+0x5/0x30
> [ 308.324666]
> which lock already depends on the new lock.
>
> [ 308.324667]
> the existing dependency chain (in reverse order) is:
> [ 308.324667]
> -> #1 (fs_reclaim){+.+.}-{0:0}:
> [ 308.324670] fs_reclaim_acquire+0x34/0x40
> [ 308.324672] dma_resv_lockdep+0x186/0x224
> [ 308.324675] do_one_initcall+0x5d/0x2c0
> [ 308.324676] kernel_init_freeable+0x222/0x288
> [ 308.324678] kernel_init+0xa/0x107
> [ 308.324679] ret_from_fork+0x1f/0x30
> [ 308.324680]
> -> #0 (&mapping->i_mmap_rwsem){++++}-{3:3}:
> [ 308.324682] __lock_acquire+0x119f/0x1fc0
> [ 308.324683] lock_acquire+0xa4/0x3b0
> [ 308.324685] down_read+0x2d/0x110
> [ 308.324686] rmap_walk_file+0x1c0/0x2f0
> [ 308.324687] page_referenced+0x133/0x150
> [ 308.324689] shrink_active_list+0x142/0x610
> [ 308.324690] balance_pgdat+0x229/0x620
> [ 308.324691] kswapd+0x200/0x470
> [ 308.324693] kthread+0x11f/0x140
> [ 308.324694] ret_from_fork+0x1f/0x30
> [ 308.324694]
> other info that might help us debug this:
>
> [ 308.324695] Possible unsafe locking scenario:
>
> [ 308.324695] CPU0 CPU1
> [ 308.324696] ---- ----
> [ 308.324696] lock(fs_reclaim);
> [ 308.324697] lock(&mapping->i_mmap_rwsem);
> [ 308.324698] lock(fs_reclaim);
> [ 308.324699] lock(&mapping->i_mmap_rwsem);
> [ 308.324699]
> *** DEADLOCK ***
>
> [ 308.324700] 1 lock held by kswapd0/98:
> [ 308.324701] #0: ffffffffb0960240 (fs_reclaim){+.+.}-{0:0}, at:
> __fs_reclaim_acquire+0x5/0x30
> [ 308.324702]
> stack backtrace:
> [ 308.324704] CPU: 1 PID: 98 Comm: kswapd0 Not tainted 5.8.0-rc2+ #16
> [ 308.324705] Hardware name: VMware, Inc. VMware Virtual Platform/440BX
> Desktop Reference Platform, BIOS 6.00 07/29/2019
> [ 308.324706] Call Trace:
> [ 308.324710] dump_stack+0x92/0xc8
> [ 308.324711] check_noncircular+0x12d/0x150
> [ 308.324713] __lock_acquire+0x119f/0x1fc0
> [ 308.324715] lock_acquire+0xa4/0x3b0
> [ 308.324716] ? rmap_walk_file+0x1c0/0x2f0
> [ 308.324717] ? __lock_acquire+0x394/0x1fc0
> [ 308.324719] down_read+0x2d/0x110
> [ 308.324720] ? rmap_walk_file+0x1c0/0x2f0
> [ 308.324721] rmap_walk_file+0x1c0/0x2f0
> [ 308.324722] page_referenced+0x133/0x150
> [ 308.324724] ? __page_set_anon_rmap+0x70/0x70
> [ 308.324725] ? page_get_anon_vma+0x190/0x190
> [ 308.324726] shrink_active_list+0x142/0x610
> [ 308.324728] balance_pgdat+0x229/0x620
> [ 308.324730] kswapd+0x200/0x470
> [ 308.324731] ? lockdep_hardirqs_on_prepare+0xf5/0x170
> [ 308.324733] ? finish_wait+0x80/0x80
> [ 308.324734] ? balance_pgdat+0x620/0x620
> [ 308.324736] kthread+0x11f/0x140
> [ 308.324737] ? kthread_create_worker_on_cpu+0x40/0x40
> [ 308.324739] ret_from_fork+0x1f/0x30
>
>
>
> > > /Thomas
> > >
> > >
> > >
> >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
Thread overview: 21+ messages
2020-07-28 13:58 [PATCH] dma-resv: lockdep-prime address_space->i_mmap_rwsem for dma-resv Daniel Vetter
2020-07-28 13:58 ` [Intel-gfx] " Daniel Vetter
2020-07-28 13:58 ` Daniel Vetter
2020-07-28 14:09 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2020-07-28 14:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-07-28 20:58 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-07-30 12:03 ` [Linaro-mm-sig] [PATCH] " Christian König
2020-07-30 12:03 ` [Intel-gfx] " Christian König
2020-07-30 12:03 ` Christian König
2020-07-30 12:17 ` Thomas Hellström (Intel)
2020-07-30 12:17 ` [Intel-gfx] " Thomas Hellström (Intel)
2020-07-30 12:17 ` Thomas Hellström (Intel)
2020-07-30 13:17 ` Daniel Vetter
2020-07-30 13:17 ` [Intel-gfx] " Daniel Vetter
2020-07-30 13:17 ` Daniel Vetter
2020-07-30 16:45 ` Thomas Hellström (Intel)
2020-07-30 16:45 ` [Intel-gfx] " Thomas Hellström (Intel)
2020-07-30 16:45 ` Thomas Hellström (Intel)
2020-09-17 13:19 ` Daniel Vetter
2020-09-17 13:19 ` [Intel-gfx] " Daniel Vetter
2020-09-17 13:19 ` Daniel Vetter