* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
@ 2022-05-27 22:08 kernel test robot
2022-05-30 3:25 ` kernel test robot
0 siblings, 1 reply; 29+ messages in thread
From: kernel test robot @ 2022-05-27 22:08 UTC (permalink / raw)
To: kbuild
CC: llvm@lists.linux.dev
CC: kbuild-all@lists.01.org
BCC: lkp@intel.com
In-Reply-To: <20220526235040.678984-15-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-15-dmitry.osipenko@collabora.com>
TO: Dmitry Osipenko <dmitry.osipenko@collabora.com>
TO: David Airlie <airlied@linux.ie>
TO: Gerd Hoffmann <kraxel@redhat.com>
TO: Gurchetan Singh <gurchetansingh@chromium.org>
TO: "Chia-I Wu" <olvaffe@gmail.com>
TO: Daniel Vetter <daniel@ffwll.ch>
TO: Daniel Almeida <daniel.almeida@collabora.com>
TO: Gert Wollny <gert.wollny@collabora.com>
TO: Gustavo Padovan <gustavo.padovan@collabora.com>
TO: Daniel Stone <daniel@fooishbar.org>
TO: Tomeu Vizoso <tomeu.vizoso@collabora.com>
TO: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
TO: Maxime Ripard <mripard@kernel.org>
TO: Thomas Zimmermann <tzimmermann@suse.de>
TO: Rob Herring <robh@kernel.org>
TO: Steven Price <steven.price@arm.com>
TO: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
TO: Rob Clark <robdclark@gmail.com>
TO: Emil Velikov <emil.l.velikov@gmail.com>
TO: Robin Murphy <robin.murphy@arm.com>
TO: Qiang Yu <yuq825@gmail.com>
TO: Sumit Semwal <sumit.semwal@linaro.org>
TO: "Christian König" <christian.koenig@amd.com>
TO: "Pan, Xinhui" <Xinhui.Pan@amd.com>
TO: Thierry Reding <thierry.reding@gmail.com>
TO: Tomasz Figa <tfiga@chromium.org>
TO: Marek Szyprowski <m.szyprowski@samsung.com>
TO: Mauro Carvalho Chehab <mchehab@kernel.org>
CC: linux-media@vger.kernel.org
TO: Alex Deucher <alexander.deucher@amd.com>
TO: Jani Nikula <jani.nikula@linux.intel.com>
Hi Dmitry,
I love your patch! Perhaps something to improve:
[auto build test WARNING on linus/master]
[also build test WARNING on next-20220527]
[cannot apply to drm/drm-next media-tree/master drm-intel/for-linux-next v5.18]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
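As a sketch of the '--base' suggestion above: recording the base commit embeds a "base-commit:" trailer in the series, which lets CI apply the patches to the right tree. The repository layout below is made up purely for illustration.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email test@example.com
git config user.name "Test User"
echo one > file.txt
git add file.txt
git commit -q -m "base commit"
echo two > file.txt
git commit -q -am "the patch"
# --base=HEAD~1 records the parent as the series base;
# --base=auto also works when an upstream tracking branch is configured
git format-patch --base=HEAD~1 -1 -o outgoing >/dev/null
grep '^base-commit:' outgoing/0001-*.patch
```

The trailer at the bottom of the generated patch is what the 0-day robot parses to pick the base tree.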
url: https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git cdeffe87f790dfd1baa193020411ce9a538446d7
:::::: branch date: 22 hours ago
:::::: commit date: 22 hours ago
config: arm-randconfig-c002-20220524 (https://download.01.org/0day-ci/archive/20220528/202205280550.MWGs9cj4-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 134d7f9a4b97e9035150d970bd9e376043c4577e)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
# https://github.com/intel-lab-lkp/linux/commit/97f090c47ec995a8cf3bced98526ee3eaa25f10f
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717
git checkout 97f090c47ec995a8cf3bced98526ee3eaa25f10f
# save the config file
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm clang-analyzer
If you fix the issue, kindly add the following tag where applicable:
Reported-by: kernel test robot <lkp@intel.com>
clang-analyzer warnings: (new ones prefixed by >>)
drivers/thermal/thermal_sysfs.c:602:9: warning: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
return sprintf(buf, "%ld\n", state);
^~~~~~~
drivers/thermal/thermal_sysfs.c:602:9: note: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11
return sprintf(buf, "%ld\n", state);
^~~~~~~
drivers/thermal/thermal_sysfs.c:613:6: warning: Call to function 'sscanf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sscanf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
if (sscanf(buf, "%ld\n", &state) != 1)
^~~~~~
drivers/thermal/thermal_sysfs.c:613:6: note: Call to function 'sscanf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sscanf_s' in case of C11
if (sscanf(buf, "%ld\n", &state) != 1)
^~~~~~
drivers/thermal/thermal_sysfs.c:702:8: warning: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
ret = sprintf(buf, "%u\n", stats->total_trans);
^~~~~~~
drivers/thermal/thermal_sysfs.c:702:8: note: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11
ret = sprintf(buf, "%u\n", stats->total_trans);
^~~~~~~
drivers/thermal/thermal_sysfs.c:721:10: warning: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += sprintf(buf + len, "state%u\t%llu\n", i,
^~~~~~~
drivers/thermal/thermal_sysfs.c:721:10: note: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11
len += sprintf(buf + len, "state%u\t%llu\n", i,
^~~~~~~
drivers/thermal/thermal_sysfs.c:741:2: warning: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memset(stats->trans_table, 0,
^~~~~~
drivers/thermal/thermal_sysfs.c:741:2: note: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11
memset(stats->trans_table, 0,
^~~~~~
drivers/thermal/thermal_sysfs.c:760:9: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:760:9: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:761:9: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, " : ");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:761:9: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, " : ");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:765:10: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, "state%2u ", i);
^~~~~~~~
drivers/thermal/thermal_sysfs.c:765:10: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, "state%2u ", i);
^~~~~~~~
drivers/thermal/thermal_sysfs.c:770:9: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:770:9: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:776:10: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, "state%2u:", i);
^~~~~~~~
drivers/thermal/thermal_sysfs.c:776:10: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, "state%2u:", i);
^~~~~~~~
drivers/thermal/thermal_sysfs.c:781:11: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, "%8u ",
^~~~~~~~
drivers/thermal/thermal_sysfs.c:781:11: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, "%8u ",
^~~~~~~~
drivers/thermal/thermal_sysfs.c:786:10: warning: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:786:10: note: Call to function 'snprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'snprintf_s' in case of C11
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
^~~~~~~~
drivers/thermal/thermal_sysfs.c:881:9: warning: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
return sprintf(buf, "%d\n", instance->trip);
^~~~~~~
drivers/thermal/thermal_sysfs.c:881:9: note: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11
return sprintf(buf, "%d\n", instance->trip);
^~~~~~~
drivers/thermal/thermal_sysfs.c:891:9: warning: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
return sprintf(buf, "%d\n", instance->weight);
^~~~~~~
drivers/thermal/thermal_sysfs.c:891:9: note: Call to function 'sprintf' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'sprintf_s' in case of C11
return sprintf(buf, "%d\n", instance->weight);
^~~~~~~
Suppressed 16 warnings (16 in non-user code).
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
7 warnings generated.
Suppressed 7 warnings (7 in non-user code).
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
8 warnings generated.
lib/xarray.c:2035:18: warning: Value stored to 'node' during its initialization is never read [clang-analyzer-deadcode.DeadStores]
struct xa_node *node = xas->xa_node;
^~~~ ~~~~~~~~~~~~
lib/xarray.c:2035:18: note: Value stored to 'node' during its initialization is never read
struct xa_node *node = xas->xa_node;
^~~~ ~~~~~~~~~~~~
Suppressed 7 warnings (7 in non-user code).
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
36 warnings generated.
>> drivers/dma-buf/dma-buf.c:1339:3: warning: Undefined or garbage value returned to caller [clang-analyzer-core.uninitialized.UndefReturn]
return ret;
^
drivers/dma-buf/dma-buf.c:1378:14: note: Assuming 'dmabuf' is non-null
if (WARN_ON(!dmabuf))
^
include/asm-generic/bug.h:122:25: note: expanded from macro 'WARN_ON'
int __ret_warn_on = !!(condition); \
^~~~~~~~~
drivers/dma-buf/dma-buf.c:1378:6: note: Taking false branch
if (WARN_ON(!dmabuf))
^
include/asm-generic/bug.h:123:2: note: expanded from macro 'WARN_ON'
if (unlikely(__ret_warn_on)) \
^
drivers/dma-buf/dma-buf.c:1378:2: note: Taking false branch
if (WARN_ON(!dmabuf))
^
drivers/dma-buf/dma-buf.c:1381:6: note: Assuming field 'vmap' is non-null
if (!dmabuf->ops->vmap)
^~~~~~~~~~~~~~~~~~
drivers/dma-buf/dma-buf.c:1381:2: note: Taking false branch
if (!dmabuf->ops->vmap)
^
drivers/dma-buf/dma-buf.c:1385:8: note: Calling 'dma_buf_vmap_locked'
ret = dma_buf_vmap_locked(dmabuf, map);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/dma-buf/dma-buf.c:1331:2: note: 'ret' declared without an initial value
int ret;
^~~~~~~
drivers/dma-buf/dma-buf.c:1333:2: note: Loop condition is false. Exiting loop
dma_resv_assert_held(dmabuf->resv);
^
include/linux/dma-resv.h:302:35: note: expanded from macro 'dma_resv_assert_held'
#define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
^
include/linux/lockdep.h:411:34: note: expanded from macro 'lockdep_assert_held'
#define lockdep_assert_held(l) do { (void)(l); } while (0)
^
drivers/dma-buf/dma-buf.c:1335:6: note: Assuming field 'vmapping_counter' is not equal to 0
if (dmabuf->vmapping_counter) {
^~~~~~~~~~~~~~~~~~~~~~~~
drivers/dma-buf/dma-buf.c:1335:2: note: Taking true branch
if (dmabuf->vmapping_counter) {
^
drivers/dma-buf/dma-buf.c:1337:3: note: Taking false branch
BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
^
include/asm-generic/bug.h:71:32: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^
drivers/dma-buf/dma-buf.c:1337:3: note: Loop condition is false. Exiting loop
BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
^
include/asm-generic/bug.h:71:27: note: expanded from macro 'BUG_ON'
#define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)
^
drivers/dma-buf/dma-buf.c:1339:3: note: Undefined or garbage value returned to caller
return ret;
^ ~~~
Suppressed 35 warnings (34 in non-user code, 1 with check filters).
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
45 warnings generated.
fs/xfs/libxfs/xfs_refcount_btree.c:66:2: warning: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memset(&args, 0, sizeof(args));
^~~~~~
fs/xfs/libxfs/xfs_refcount_btree.c:66:2: note: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11
memset(&args, 0, sizeof(args));
^~~~~~
Suppressed 44 warnings (44 in non-user code).
Use -header-filter=.* to display errors from all non-system headers. Use -system-headers to display errors from system headers as well.
52 warnings generated.
fs/xfs/libxfs/xfs_sb.c:559:2: warning: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memcpy(&to->sb_uuid, &from->sb_uuid, sizeof(to->sb_uuid));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:559:2: note: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11
memcpy(&to->sb_uuid, &from->sb_uuid, sizeof(to->sb_uuid));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:573:2: warning: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memcpy(&to->sb_fname, &from->sb_fname, sizeof(to->sb_fname));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:573:2: note: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11
memcpy(&to->sb_fname, &from->sb_fname, sizeof(to->sb_fname));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:708:2: warning: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memcpy(&to->sb_uuid, &from->sb_uuid, sizeof(to->sb_uuid));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:708:2: note: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11
memcpy(&to->sb_uuid, &from->sb_uuid, sizeof(to->sb_uuid));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:722:2: warning: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memcpy(&to->sb_fname, &from->sb_fname, sizeof(to->sb_fname));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:722:2: note: Call to function 'memcpy' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memcpy_s' in case of C11
memcpy(&to->sb_fname, &from->sb_fname, sizeof(to->sb_fname));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:1128:2: warning: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11 [clang-analyzer-security.insecureAPI.DeprecatedOrUnsafeBufferHandling]
memset(geo, 0, sizeof(struct xfs_fsop_geom));
^~~~~~
fs/xfs/libxfs/xfs_sb.c:1128:2: note: Call to function 'memset' is insecure as it does not provide security checks introduced in the C11 standard. Replace with analogous functions that support length arguments or provides boundary checks such as 'memset_s' in case of C11
memset(geo, 0, sizeof(struct xfs_fsop_geom));
vim +1339 drivers/dma-buf/dma-buf.c
4c78513e457f72 drivers/base/dma-buf.c Daniel Vetter 2012-04-24 1327
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1328 static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map)
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1329 {
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1330 struct iosys_map ptr;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1331 int ret;
4c78513e457f72 drivers/base/dma-buf.c Daniel Vetter 2012-04-24 1332
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1333 dma_resv_assert_held(dmabuf->resv);
4c78513e457f72 drivers/base/dma-buf.c Daniel Vetter 2012-04-24 1334
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1335 if (dmabuf->vmapping_counter) {
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1336 dmabuf->vmapping_counter++;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1337 BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr));
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1338 *map = dmabuf->vmap_ptr;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 @1339 return ret;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1340 }
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1341
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1342 BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr));
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1343
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1344 ret = dmabuf->ops->vmap(dmabuf, &ptr);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1345 if (WARN_ON_ONCE(ret))
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1346 return ret;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1347
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1348 dmabuf->vmap_ptr = ptr;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1349 dmabuf->vmapping_counter = 1;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1350
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1351 *map = dmabuf->vmap_ptr;
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1352
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko 2022-05-27 1353 return 0;
4c78513e457f72 drivers/base/dma-buf.c Daniel Vetter 2012-04-24 1354 }
98f86c9e4ae320 drivers/base/dma-buf.c Dave Airlie 2012-05-20 1355
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* [PATCH v6 00/22] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers @ 2022-05-26 23:50 Dmitry Osipenko 2022-05-26 23:50 ` Dmitry Osipenko 0 siblings, 1 reply; 29+ messages in thread From: Dmitry Osipenko @ 2022-05-26 23:50 UTC (permalink / raw) To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Christian König, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin Cc: dri-devel, linux-kernel, virtualization, Dmitry Osipenko, Dmitry Osipenko, linux-tegra, linux-media, linaro-mm-sig, amd-gfx, intel-gfx, kernel Hello, This patchset introduces memory shrinker for the VirtIO-GPU DRM driver and adds memory purging and eviction support to VirtIO-GPU driver. The new dma-buf locking convention is introduced here as well. During OOM, the shrinker will release BOs that are marked as "not needed" by userspace using the new madvise IOCTL, it will also evict idling BOs to SWAP. The userspace in this case is the Mesa VirGL driver, it will mark the cached BOs as "not needed", allowing kernel driver to release memory of the cached shmem BOs on lowmem situations, preventing OOM kills. The Panfrost driver is switched to use generic memory shrinker. This patchset includes improvements and fixes for various things that I found while was working on the shrinker. The Mesa and IGT patches will be kept on hold until this kernel series will be approved and merged. This patchset was tested using Qemu and crosvm, including both cases of IOMMU off/on. 
Mesa: https://gitlab.freedesktop.org/digetx/mesa/-/commits/virgl-madvise
IGT:  https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/virtio-madvise
      https://gitlab.freedesktop.org/digetx/igt-gpu-tools/-/commits/panfrost-madvise

Changelog:

v6: - Added a new VirtIO-related fix patch that was previously sent
      separately and didn't get much attention:

        drm/gem: Properly annotate WW context on drm_gem_lock_reservations() error

    - Added a new patch that fixes mapping of imported dma-bufs for
      Tegra DRM and other affected drivers. It's also handy to have it
      for switching to the new dma-buf locking convention scheme:

        drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()

    - Added a new patch that fixes shrinker list corruption for the
      stable Panfrost driver:

        drm/panfrost: Fix shrinker list corruption by madvise IOCTL

    - Added a new minor patch-fix for drm-shmem:

        drm/shmem-helper: Add missing vunmap on error

    - Added a fixes tag to the "Put mapping ..." patch, as suggested by
      Steven Price.

    - Added a new VirtIO-GPU driver improvement patch:

        drm/virtio: Return proper error codes instead of -1

    - Reworked the shrinker patches, as suggested by Daniel Vetter:

        - Introduced the new locking convention for dma-bufs. Tested on
          VirtIO-GPU, Panfrost, Lima, Tegra and Intel selftests.

        - Dropped the separate purge() callback. Now a single evict()
          does everything.

        - Dropped the swap_in() callback from drm-shmem objects. DRM
          drivers now can and should restore only the required mappings.

        - Dropped dynamic counting of evictable pages. This simplifies
          the code in exchange for *potentially* burning more CPU time
          on OOM.

v5: - Added a new for-stable patch "drm/panfrost: Put mapping instead of
      shmem obj on panfrost_mmu_map_fault_addr() error" that corrects
      GEM's refcounting in case of error.

    - drm_gem_shmem_v[un]map() now takes a separate vmap_lock for
      imported GEMs to avoid recursive locking of DMA reservations.
      This addresses a v4 comment from Thomas Zimmermann about the
      potential deadlocking of vmapping.
    - Added an ack from Thomas Zimmermann to the "drm/shmem-helper:
      Correct doc-comment of drm_gem_shmem_get_sg_table()" patch.

    - Dropped explicit shmem states from the generic shrinker patch, as
      requested by Thomas Zimmermann.

    - Improved variable names and comments of the generic shrinker code.

    - Extended drm_gem_shmem_print_info() with the shrinker-state info
      in the "drm/virtio: Support memory shrinking" patch.

    - Moved the evict()/swap_in()/purge() callbacks from
      drm_gem_object_funcs to drm_gem_shmem_object in the generic
      shrinker patch, for more consistency.

    - Corrected bisectability of the patches, which was broken in v4 by
      accident.

    - virtio_gpu_plane_prepare_fb() now uses drm_gem_shmem_pin() instead
      of drm_gem_shmem_set_unpurgeable_and_unevictable() and does it
      only for shmem BOs in the "drm/virtio: Support memory shrinking"
      patch.

    - Made more functions private to drm_gem_shmem_helper.c, as
      requested by Thomas Zimmermann. This minimizes the number of
      public shmem helpers.

v4: - Corrected minor W=1 warnings reported by the kernel test robot
      for v3.

    - Renamed DRM_GEM_SHMEM_PAGES_STATE_ACTIVE/INACTIVE to
      PINNED/UNPINNED, for more clarity.

v3: - Hardened the shrinker's count() with READ_ONCE(), since we don't
      use an atomic type for counting and the compiler is technically
      free to re-fetch the counter variable.

    - "Correct drm_gem_shmem_get_sg_table() error handling" now uses
      PTR_ERR_OR_ZERO(), fixing a typo made in v2.

    - Removed the obsolete shrinker from the Panfrost driver, which I
      forgot to do in v2; Alyssa Rosenzweig noticed it.

    - CCed stable kernels in all patches that make fixes, even the minor
      ones, as suggested by Emil Velikov, and added his r-b to the
      patches.

    - Added a t-b from Steven Price to the Panfrost shrinker patch.

    - Corrected the doc-comment of drm_gem_shmem_object.madv, as
      suggested by Steven Price. The comment now says that madv=1 means
      "object is purged" instead of saying that the value is unused.
    - Added more doc-comments to the new shmem shrinker API.

    - The "Improve DMA API usage for shmem BOs" patch got more
      improvements by removing the obsolete drm_dev_set_unique() quirk
      and its comment.

    - Added a patch that makes the Virtio-GPU driver use the common
      dev_is_pci() helper, which was suggested by Robin Murphy.

    - Added a new "drm/shmem-helper: Take GEM reservation lock instead
      of drm_gem_shmem locks" patch, which was suggested by Daniel
      Vetter.

    - Added a new "drm/virtio: Simplify error handling of
      virtio_gpu_object_create()" patch.

    - Improved the "Correct doc-comment of drm_gem_shmem_get_sg_table()"
      patch, as suggested by Daniel Vetter, by saying that the function
      returns ERR_PTR() and not an errno.

    - virtio_gpu_purge_object() is now fenced properly; it turned out
      virtio_gpu_notify() doesn't do fencing as I had assumed. Stress
      testing of memory eviction revealed that.

    - Added a new patch that corrects virtio_gpu_plane_cleanup_fb() to
      use the appropriate atomic plane state.

    - The SHMEM shrinker got eviction support.

    - The VirtIO-GPU driver now supports memory eviction. It's enabled
      for non-blob GEMs only, i.e. for VirGL. The blobs don't support
      dynamic attaching/detaching of guest memory, so it's not trivial
      to enable them.

    - Added a patch that removes the obsolete drm_gem_shmem_purge().

    - Added a patch that makes drm_gem_shmem_get_pages() private.

    - Added a patch that fixes a lockup on dma_resv_reserve_fences()
      error.

v2: - Improved the shrinker by using more fine-grained locking to reduce
      contention during the scan of objects, and dropped locking from
      the 'counting' callback by tracking the count of shrinkable pages.
      This was suggested by Rob Clark in a comment to v1.

    - Factored out common shrinker code into drm_gem_shmem_helper.c and
      switched the Panfrost driver to use the new common memory
      shrinker. This was proposed by Thomas Zimmermann in a prototype
      series that he shared with us in a comment to v1. Note that I only
      compile-tested the Panfrost driver.
    - Shrinker now takes object_name_lock during scan to prevent racing
      with dma-buf exporting.

    - Shrinker now takes vmap_lock during scan to prevent racing with
      shmem vmap/unmap code.

    - Added the "Correct doc-comment of drm_gem_shmem_get_sg_table()"
      patch, which I sent out previously as a standalone change, since
      drm_gem_shmem_helper.c is now touched by this patchset anyway and
      it doesn't hurt to group all the patches together.

Dmitry Osipenko (22):
  drm/gem: Properly annotate WW context on drm_gem_lock_reservations() error
  drm/gem: Move mapping of imported dma-bufs to drm_gem_mmap_obj()
  drm/panfrost: Put mapping instead of shmem obj on panfrost_mmu_map_fault_addr() error
  drm/panfrost: Fix shrinker list corruption by madvise IOCTL
  drm/virtio: Correct drm_gem_shmem_get_sg_table() error handling
  drm/virtio: Check whether transferred 2D BO is shmem
  drm/virtio: Unlock reservations on virtio_gpu_object_shmem_init() error
  drm/virtio: Unlock reservations on dma_resv_reserve_fences() error
  drm/virtio: Use appropriate atomic state in virtio_gpu_plane_cleanup_fb()
  drm/shmem-helper: Add missing vunmap on error
  drm/shmem-helper: Correct doc-comment of drm_gem_shmem_get_sg_table()
  drm/virtio: Simplify error handling of virtio_gpu_object_create()
  drm/virtio: Improve DMA API usage for shmem BOs
  dma-buf: Introduce new locking convention
  drm/shmem-helper: Don't use vmap_use_count for dma-bufs
  drm/shmem-helper: Use reservation lock
  drm/shmem-helper: Add generic memory shrinker
  drm/gem: Add drm_gem_pin_unlocked()
  drm/virtio: Support memory shrinking
  drm/virtio: Use dev_is_pci()
  drm/virtio: Return proper error codes instead of -1
  drm/panfrost: Switch to generic memory shrinker

 drivers/dma-buf/dma-buf.c                    | 270 ++++---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c      |   6 +-
 drivers/gpu/drm/drm_client.c                 |   4 +-
 drivers/gpu/drm/drm_gem.c                    |  69 +-
 drivers/gpu/drm/drm_gem_framebuffer_helper.c |   6 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c       | 718 ++++++++++++++----
 drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
 | 10 +-
 drivers/gpu/drm/lima/lima_gem.c              |   8 +-
 drivers/gpu/drm/lima/lima_sched.c            |   4 +-
 drivers/gpu/drm/panfrost/Makefile            |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h   |   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c      |  26 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c      |  33 +-
 drivers/gpu/drm/panfrost/panfrost_gem.h      |   9 -
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c | 122 ---
 drivers/gpu/drm/panfrost/panfrost_job.c      |  18 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c      |  21 +-
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c  |   6 +-
 drivers/gpu/drm/qxl/qxl_object.c             |  17 +-
 drivers/gpu/drm/qxl/qxl_prime.c              |   4 +-
 drivers/gpu/drm/tegra/gem.c                  |   4 +
 drivers/gpu/drm/virtio/virtgpu_drv.c         |  53 +-
 drivers/gpu/drm/virtio/virtgpu_drv.h         |  23 +-
 drivers/gpu/drm/virtio/virtgpu_gem.c         |  59 +-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c       |  37 +
 drivers/gpu/drm/virtio/virtgpu_kms.c         |  16 +-
 drivers/gpu/drm/virtio/virtgpu_object.c      | 203 +++--
 drivers/gpu/drm/virtio/virtgpu_plane.c       |  28 +-
 drivers/gpu/drm/virtio/virtgpu_vq.c          |  61 +-
 .../common/videobuf2/videobuf2-dma-contig.c  |  11 +-
 .../media/common/videobuf2/videobuf2-dma-sg.c |  11 +-
 .../common/videobuf2/videobuf2-vmalloc.c     |  11 +-
 include/drm/drm_device.h                     |   4 +
 include/drm/drm_gem.h                        |   6 +
 include/drm/drm_gem_shmem_helper.h           |  99 ++-
 include/linux/dma-buf.h                      |  14 +-
 include/uapi/drm/virtgpu_drm.h               |  14 +
 37 files changed, 1349 insertions(+), 661 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

-- 
2.35.3

^ permalink raw reply	[flat|nested] 29+ messages in thread
* [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-05-26 23:50 [PATCH v6 00/22] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
@ 2022-05-26 23:50 ` Dmitry Osipenko
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-05-26 23:50 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
	Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan,
	Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig,
	Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal,
	Christian König, Pan, Xinhui, Thierry Reding, Tomasz Figa,
	Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: dri-devel, linux-kernel, virtualization, Dmitry Osipenko,
	Dmitry Osipenko, linux-tegra, linux-media, linaro-mm-sig,
	amd-gfx, intel-gfx, kernel

All dma-bufs have a dma-reservation lock that allows drivers to perform
exclusive operations over shared dma-bufs. Today's dma-buf API has an
incomplete locking specification, which creates deadlock situations for
dma-buf importers and exporters that don't coordinate their locks.

This patch introduces a new locking convention for dma-buf users. From
now on, all dma-buf importers are responsible for holding the dma-buf
reservation lock around operations performed over dma-bufs.

This patch implements the new dma-buf locking convention by:

  1. Making the dma-buf API functions take the reservation lock.

  2. Adding new locked variants of the dma-buf API functions for drivers
     that need to manage imported dma-bufs under an already-held lock.

  3. Converting all drivers to the new locking scheme.
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> --- drivers/dma-buf/dma-buf.c | 270 +++++++++++------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- drivers/gpu/drm/drm_client.c | 4 +- drivers/gpu/drm/drm_gem.c | 33 +++ drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- drivers/gpu/drm/qxl/qxl_object.c | 17 +- drivers/gpu/drm/qxl/qxl_prime.c | 4 +- .../common/videobuf2/videobuf2-dma-contig.c | 11 +- .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- .../common/videobuf2/videobuf2-vmalloc.c | 11 +- include/drm/drm_gem.h | 3 + include/linux/dma-buf.h | 14 +- 13 files changed, 241 insertions(+), 159 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 32f55640890c..64a9909ccfa2 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) file->f_mode |= FMODE_LSEEK; dmabuf->file = file; - mutex_init(&dmabuf->lock); INIT_LIST_HEAD(&dmabuf->attachments); mutex_lock(&db_list.lock); @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, attach->importer_ops = importer_ops; attach->importer_priv = importer_priv; + dma_resv_lock(dmabuf->resv, NULL); + if (dmabuf->ops->attach) { ret = dmabuf->ops->attach(dmabuf, attach); if (ret) goto err_attach; } - dma_resv_lock(dmabuf->resv, NULL); list_add(&attach->node, &dmabuf->attachments); - dma_resv_unlock(dmabuf->resv); /* When either the importer or the exporter can't handle dynamic * mappings we cache the mapping here to avoid issues with the @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, struct sg_table *sgt; if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_lock(attach->dmabuf->resv, NULL); ret = dmabuf->ops->pin(attach); if (ret) goto err_unlock; @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, ret = 
PTR_ERR(sgt); goto err_unpin; } - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); attach->sgt = sgt; attach->dir = DMA_BIDIRECTIONAL; } + dma_resv_unlock(dmabuf->resv); + return attach; err_attach: + dma_resv_unlock(attach->dmabuf->resv); kfree(attach); return ERR_PTR(ret); @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, dmabuf->ops->unpin(attach); err_unlock: - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); + dma_resv_unlock(dmabuf->resv); dma_buf_detach(dmabuf, attach); + return ERR_PTR(ret); } EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) if (WARN_ON(!dmabuf || !attach)) return; - if (attach->sgt) { - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_lock(attach->dmabuf->resv, NULL); + if (WARN_ON(dmabuf != attach->dmabuf)) + return; + dma_resv_lock(dmabuf->resv, NULL); + + if (attach->sgt) { __unmap_dma_buf(attach, attach->sgt, attach->dir); - if (dma_buf_is_dynamic(attach->dmabuf)) { + if (dma_buf_is_dynamic(attach->dmabuf)) dmabuf->ops->unpin(attach); - dma_resv_unlock(attach->dmabuf->resv); - } } - dma_resv_lock(dmabuf->resv, NULL); list_del(&attach->node); - dma_resv_unlock(dmabuf->resv); if (dmabuf->ops->detach) dmabuf->ops->detach(dmabuf, attach); + dma_resv_unlock(dmabuf->resv); kfree(attach); } EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); /** - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. 
* @attach: [in] attachment whose scatterlist is to be returned * @direction: [in] direction of DMA transfer * - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR - * on error. May return -EINTR if it is interrupted by a signal. - * - * On success, the DMA addresses and lengths in the returned scatterlist are - * PAGE_SIZE aligned. - * - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that - * the underlying backing storage is pinned for as long as a mapping exists, - * therefore users/importers should not hold onto a mapping for undue amounts of - * time. + * Locked variant of dma_buf_map_attachment(). * - * Important: Dynamic importers must wait for the exclusive fence of the struct - * dma_resv attached to the DMA-BUF first. + * Caller is responsible for holding dmabuf's reservation lock. */ -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, - enum dma_data_direction direction) +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, + enum dma_data_direction direction) { struct sg_table *sg_table; int r; @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, if (WARN_ON(!attach || !attach->dmabuf)) return ERR_PTR(-EINVAL); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt) { /* @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, } if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_assert_held(attach->dmabuf->resv); if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { r = attach->dmabuf->ops->pin(attach); if (r) @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, #endif /* CONFIG_DMA_API_DEBUG */ return sg_table; } -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); /** - * 
dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. - * @attach: [in] attachment to unmap buffer from - * @sg_table: [in] scatterlist info of the buffer to unmap - * @direction: [in] direction of DMA transfer + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer * - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR + * on error. May return -EINTR if it is interrupted by a signal. + * + * On success, the DMA addresses and lengths in the returned scatterlist are + * PAGE_SIZE aligned. + * + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that + * the underlying backing storage is pinned for as long as a mapping exists, + * therefore users/importers should not hold onto a mapping for undue amounts of + * time. + * + * Important: Dynamic importers must wait for the exclusive fence of the struct + * dma_resv attached to the DMA-BUF first. 
*/ -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg_table, +struct sg_table * +dma_buf_map_attachment(struct dma_buf_attachment *attach, enum dma_data_direction direction) { + struct sg_table *sg_table; + might_sleep(); - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) - return; + if (WARN_ON(!attach || !attach->dmabuf)) + return ERR_PTR(-EINVAL); + + dma_resv_lock(attach->dmabuf->resv, NULL); + sg_table = dma_buf_map_attachment_locked(attach, direction); + dma_resv_unlock(attach->dmabuf->resv); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + return sg_table; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); + +/** + * dma_buf_unmap_attachment_locked - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the + * dma_buf_ops. + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer + * + * Locked variant of dma_buf_unmap_attachment(). + * + * Caller is responsible for holding dmabuf's reservation lock. + */ +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt == sg_table) return; - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_assert_held(attach->dmabuf->resv); - __unmap_dma_buf(attach, sg_table, direction); if (dma_buf_is_dynamic(attach->dmabuf) && !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) dma_buf_unpin(attach); } +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); + +/** + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_ops. 
+ * @attach: [in] attachment to unmap buffer from + * @sg_table: [in] scatterlist info of the buffer to unmap + * @direction: [in] direction of DMA transfer + * + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + */ +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + return; + + dma_resv_lock(attach->dmabuf->resv, NULL); + dma_buf_unmap_attachment_locked(attach, sg_table, direction); + dma_resv_unlock(attach->dmabuf->resv); +} EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); /** @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, } EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, + struct vm_area_struct *vma, + unsigned long pgoff) +{ + dma_resv_assert_held(dmabuf->resv); + + /* check if buffer supports mmap */ + if (!dmabuf->ops->mmap) + return -EINVAL; + + /* check for offset overflow */ + if (pgoff + vma_pages(vma) < pgoff) + return -EOVERFLOW; + + /* check for overflowing the buffer's size */ + if (pgoff + vma_pages(vma) > + dmabuf->size >> PAGE_SHIFT) + return -EINVAL; + + /* readjust the vma */ + vma_set_file(vma, dmabuf->file); + vma->vm_pgoff = pgoff; + + return dmabuf->ops->mmap(dmabuf, vma); +} /** * dma_buf_mmap - Setup up a userspace mmap with the given vma @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, unsigned long pgoff) { + int ret; + if (WARN_ON(!dmabuf || !vma)) return -EINVAL; - /* check if buffer supports mmap */ - if (!dmabuf->ops->mmap) - return -EINVAL; + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); + dma_resv_unlock(dmabuf->resv); - /* check for offset overflow */ - if (pgoff + vma_pages(vma) < pgoff) - return -EOVERFLOW; + 
return ret; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); - /* check for overflowing the buffer's size */ - if (pgoff + vma_pages(vma) > - dmabuf->size >> PAGE_SHIFT) - return -EINVAL; +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) +{ + struct iosys_map ptr; + int ret; - /* readjust the vma */ - vma_set_file(vma, dmabuf->file); - vma->vm_pgoff = pgoff; + dma_resv_assert_held(dmabuf->resv); - return dmabuf->ops->mmap(dmabuf, vma); + if (dmabuf->vmapping_counter) { + dmabuf->vmapping_counter++; + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); + *map = dmabuf->vmap_ptr; + return ret; + } + + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); + + ret = dmabuf->ops->vmap(dmabuf, &ptr); + if (WARN_ON_ONCE(ret)) + return ret; + + dmabuf->vmap_ptr = ptr; + dmabuf->vmapping_counter = 1; + + *map = dmabuf->vmap_ptr; + + return 0; } -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); /** * dma_buf_vmap - Create virtual mapping for the buffer object into kernel @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); */ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) { - struct iosys_map ptr; - int ret = 0; + int ret; iosys_map_clear(map); @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) if (!dmabuf->ops->vmap) return -EINVAL; - mutex_lock(&dmabuf->lock); - if (dmabuf->vmapping_counter) { - dmabuf->vmapping_counter++; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); - *map = dmabuf->vmap_ptr; - goto out_unlock; - } - - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); - - ret = dmabuf->ops->vmap(dmabuf, &ptr); - if (WARN_ON_ONCE(ret)) - goto out_unlock; - - dmabuf->vmap_ptr = ptr; - dmabuf->vmapping_counter = 1; - - *map = dmabuf->vmap_ptr; + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_vmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); -out_unlock: - mutex_unlock(&dmabuf->lock); return ret; } EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); -/** - * dma_buf_vunmap - Unmap a vmap 
obtained by dma_buf_vmap. - * @dmabuf: [in] buffer to vunmap - * @map: [in] vmap pointer to vunmap - */ -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) { - if (WARN_ON(!dmabuf)) - return; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); BUG_ON(dmabuf->vmapping_counter == 0); BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); - mutex_lock(&dmabuf->lock); if (--dmabuf->vmapping_counter == 0) { if (dmabuf->ops->vunmap) dmabuf->ops->vunmap(dmabuf, map); iosys_map_clear(&dmabuf->vmap_ptr); } - mutex_unlock(&dmabuf->lock); +} + +/** + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. + * @dmabuf: [in] buffer to vunmap + * @map: [in] vmap pointer to vunmap + */ +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +{ + if (WARN_ON(!dmabuf)) + return; + + dma_resv_lock(dmabuf->resv, NULL); + dma_buf_vunmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index be6f76a30ac6..b704bdf5601a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, struct sg_table *sgt; attach = gtt->gobj->import_attach; - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_locked(attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt); @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, struct dma_buf_attachment *attach; attach = gtt->gobj->import_attach; - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_locked(attach, ttm->sg, + DMA_BIDIRECTIONAL); ttm->sg = NULL; } diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index af3b7395bf69..e9a1cd310352 100644 --- 
a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - ret = drm_gem_vmap(buffer->gem, map); + ret = drm_gem_vmap_unlocked(buffer->gem, map); if (ret) return ret; @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { struct iosys_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, map); + drm_gem_vunmap_unlocked(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 7c0b025508e4..c61674887582 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, vma->vm_ops = obj->funcs->vm_ops; if (obj->funcs->mmap) { + ret = dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + goto err_drm_gem_object_put; + ret = obj->funcs->mmap(obj, vma); + dma_resv_unlock(obj->resv); if (ret) goto err_drm_gem_object_put; WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->pin) return obj->funcs->pin(obj); else @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) void drm_gem_unpin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->unpin) obj->funcs->unpin(obj); } @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) { int ret; + dma_resv_assert_held(obj->resv); + if (!obj->funcs->vmap) return -EOPNOTSUPP; @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) { + dma_resv_assert_held(obj->resv); + if (iosys_map_is_null(map)) return; @@ -1200,6 
+1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) } EXPORT_SYMBOL(drm_gem_vunmap); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + int ret; + + dma_resv_lock(obj->resv, NULL); + ret = drm_gem_vmap(obj, map); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_gem_vmap_unlocked); + +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + dma_resv_lock(obj->resv, NULL); + drm_gem_vunmap(obj, map); + dma_resv_unlock(obj->resv); +} +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); + /** * drm_gem_lock_reservations - Sets up the ww context and acquires * the lock on an array of GEM objects. diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c index f4619803acd0..a0bff53b158e 100644 --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, iosys_map_clear(&map[i]); continue; } - ret = drm_gem_vmap(obj, &map[i]); + ret = drm_gem_vmap_unlocked(obj, &map[i]); if (ret) goto err_drm_gem_vunmap; } @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, obj = drm_gem_fb_get_obj(fb, i); if (!obj) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } return ret; } @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, continue; if (iosys_map_is_null(&map[i])) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } } EXPORT_SYMBOL(drm_gem_fb_vunmap); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index f5062d0c6333..09502d490da8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); void *vaddr; - 
* [PATCH v6 14/22] dma-buf: Introduce new locking convention @ 2022-05-26 23:50 ` Dmitry Osipenko 0 siblings, 0 replies; 29+ messages in thread From: Dmitry Osipenko @ 2022-05-26 23:50 UTC (permalink / raw) To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Christian König, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig, amd-gfx, Dmitry Osipenko, linux-tegra, Dmitry Osipenko, kernel, linux-media All dma-bufs have a dma-reservation lock that allows drivers to perform exclusive operations over shared dma-bufs. Today's dma-buf API has an incomplete locking specification, which creates deadlock situations for dma-buf importers and exporters that don't coordinate their locks. This patch introduces a new locking convention for dma-buf users. From now on, all dma-buf importers are responsible for holding the dma-buf reservation lock around operations performed over dma-bufs. This patch implements the new dma-buf locking convention by: 1. Making the dma-buf API functions take the reservation lock. 2. Adding new locked variants of the dma-buf API functions for drivers that need to manage imported dma-bufs under the held lock. 3. Converting all drivers to the new locking scheme. 
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> --- drivers/dma-buf/dma-buf.c | 270 +++++++++++------- drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- drivers/gpu/drm/drm_client.c | 4 +- drivers/gpu/drm/drm_gem.c | 33 +++ drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- drivers/gpu/drm/qxl/qxl_object.c | 17 +- drivers/gpu/drm/qxl/qxl_prime.c | 4 +- .../common/videobuf2/videobuf2-dma-contig.c | 11 +- .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- .../common/videobuf2/videobuf2-vmalloc.c | 11 +- include/drm/drm_gem.h | 3 + include/linux/dma-buf.h | 14 +- 13 files changed, 241 insertions(+), 159 deletions(-) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 32f55640890c..64a9909ccfa2 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) file->f_mode |= FMODE_LSEEK; dmabuf->file = file; - mutex_init(&dmabuf->lock); INIT_LIST_HEAD(&dmabuf->attachments); mutex_lock(&db_list.lock); @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, attach->importer_ops = importer_ops; attach->importer_priv = importer_priv; + dma_resv_lock(dmabuf->resv, NULL); + if (dmabuf->ops->attach) { ret = dmabuf->ops->attach(dmabuf, attach); if (ret) goto err_attach; } - dma_resv_lock(dmabuf->resv, NULL); list_add(&attach->node, &dmabuf->attachments); - dma_resv_unlock(dmabuf->resv); /* When either the importer or the exporter can't handle dynamic * mappings we cache the mapping here to avoid issues with the @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, struct sg_table *sgt; if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_lock(attach->dmabuf->resv, NULL); ret = dmabuf->ops->pin(attach); if (ret) goto err_unlock; @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, ret = 
PTR_ERR(sgt); goto err_unpin; } - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); attach->sgt = sgt; attach->dir = DMA_BIDIRECTIONAL; } + dma_resv_unlock(dmabuf->resv); + return attach; err_attach: + dma_resv_unlock(attach->dmabuf->resv); kfree(attach); return ERR_PTR(ret); @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, dmabuf->ops->unpin(attach); err_unlock: - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_unlock(attach->dmabuf->resv); + dma_resv_unlock(dmabuf->resv); dma_buf_detach(dmabuf, attach); + return ERR_PTR(ret); } EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) if (WARN_ON(!dmabuf || !attach)) return; - if (attach->sgt) { - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_lock(attach->dmabuf->resv, NULL); + if (WARN_ON(dmabuf != attach->dmabuf)) + return; + dma_resv_lock(dmabuf->resv, NULL); + + if (attach->sgt) { __unmap_dma_buf(attach, attach->sgt, attach->dir); - if (dma_buf_is_dynamic(attach->dmabuf)) { + if (dma_buf_is_dynamic(attach->dmabuf)) dmabuf->ops->unpin(attach); - dma_resv_unlock(attach->dmabuf->resv); - } } - dma_resv_lock(dmabuf->resv, NULL); list_del(&attach->node); - dma_resv_unlock(dmabuf->resv); if (dmabuf->ops->detach) dmabuf->ops->detach(dmabuf, attach); + dma_resv_unlock(dmabuf->resv); kfree(attach); } EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); /** - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. 
* @attach: [in] attachment whose scatterlist is to be returned * @direction: [in] direction of DMA transfer * - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR - * on error. May return -EINTR if it is interrupted by a signal. - * - * On success, the DMA addresses and lengths in the returned scatterlist are - * PAGE_SIZE aligned. - * - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that - * the underlying backing storage is pinned for as long as a mapping exists, - * therefore users/importers should not hold onto a mapping for undue amounts of - * time. + * Locked variant of dma_buf_map_attachment(). * - * Important: Dynamic importers must wait for the exclusive fence of the struct - * dma_resv attached to the DMA-BUF first. + * Caller is responsible for holding dmabuf's reservation lock. */ -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, - enum dma_data_direction direction) +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, + enum dma_data_direction direction) { struct sg_table *sg_table; int r; @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, if (WARN_ON(!attach || !attach->dmabuf)) return ERR_PTR(-EINVAL); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt) { /* @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, } if (dma_buf_is_dynamic(attach->dmabuf)) { - dma_resv_assert_held(attach->dmabuf->resv); if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { r = attach->dmabuf->ops->pin(attach); if (r) @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, #endif /* CONFIG_DMA_API_DEBUG */ return sg_table; } -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); /** - * 
dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the * dma_buf_ops. - * @attach: [in] attachment to unmap buffer from - * @sg_table: [in] scatterlist info of the buffer to unmap - * @direction: [in] direction of DMA transfer + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer * - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR + * on error. May return -EINTR if it is interrupted by a signal. + * + * On success, the DMA addresses and lengths in the returned scatterlist are + * PAGE_SIZE aligned. + * + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that + * the underlying backing storage is pinned for as long as a mapping exists, + * therefore users/importers should not hold onto a mapping for undue amounts of + * time. + * + * Important: Dynamic importers must wait for the exclusive fence of the struct + * dma_resv attached to the DMA-BUF first. 
*/ -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, - struct sg_table *sg_table, +struct sg_table * +dma_buf_map_attachment(struct dma_buf_attachment *attach, enum dma_data_direction direction) { + struct sg_table *sg_table; + might_sleep(); - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) - return; + if (WARN_ON(!attach || !attach->dmabuf)) + return ERR_PTR(-EINVAL); + + dma_resv_lock(attach->dmabuf->resv, NULL); + sg_table = dma_buf_map_attachment_locked(attach, direction); + dma_resv_unlock(attach->dmabuf->resv); - if (dma_buf_attachment_is_dynamic(attach)) - dma_resv_assert_held(attach->dmabuf->resv); + return sg_table; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); + +/** + * dma_buf_unmap_attachment_locked - Returns the scatterlist table of the attachment; + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the + * dma_buf_ops. + * @attach: [in] attachment whose scatterlist is to be returned + * @direction: [in] direction of DMA transfer + * + * Locked variant of dma_buf_unmap_attachment(). + * + * Caller is responsible for holding dmabuf's reservation lock. + */ +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + dma_resv_assert_held(attach->dmabuf->resv); if (attach->sgt == sg_table) return; - if (dma_buf_is_dynamic(attach->dmabuf)) - dma_resv_assert_held(attach->dmabuf->resv); - __unmap_dma_buf(attach, sg_table, direction); if (dma_buf_is_dynamic(attach->dmabuf) && !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) dma_buf_unpin(attach); } +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); + +/** + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of + * dma_buf_ops. 
+ * @attach: [in] attachment to unmap buffer from + * @sg_table: [in] scatterlist info of the buffer to unmap + * @direction: [in] direction of DMA transfer + * + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). + */ +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, + struct sg_table *sg_table, + enum dma_data_direction direction) +{ + might_sleep(); + + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) + return; + + dma_resv_lock(attach->dmabuf->resv, NULL); + dma_buf_unmap_attachment_locked(attach, sg_table, direction); + dma_resv_unlock(attach->dmabuf->resv); +} EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); /** @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, } EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, + struct vm_area_struct *vma, + unsigned long pgoff) +{ + dma_resv_assert_held(dmabuf->resv); + + /* check if buffer supports mmap */ + if (!dmabuf->ops->mmap) + return -EINVAL; + + /* check for offset overflow */ + if (pgoff + vma_pages(vma) < pgoff) + return -EOVERFLOW; + + /* check for overflowing the buffer's size */ + if (pgoff + vma_pages(vma) > + dmabuf->size >> PAGE_SHIFT) + return -EINVAL; + + /* readjust the vma */ + vma_set_file(vma, dmabuf->file); + vma->vm_pgoff = pgoff; + + return dmabuf->ops->mmap(dmabuf, vma); +} /** * dma_buf_mmap - Setup up a userspace mmap with the given vma @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, unsigned long pgoff) { + int ret; + if (WARN_ON(!dmabuf || !vma)) return -EINVAL; - /* check if buffer supports mmap */ - if (!dmabuf->ops->mmap) - return -EINVAL; + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); + dma_resv_unlock(dmabuf->resv); - /* check for offset overflow */ - if (pgoff + vma_pages(vma) < pgoff) - return -EOVERFLOW; + 
return ret; +} +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); - /* check for overflowing the buffer's size */ - if (pgoff + vma_pages(vma) > - dmabuf->size >> PAGE_SHIFT) - return -EINVAL; +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) +{ + struct iosys_map ptr; + int ret; - /* readjust the vma */ - vma_set_file(vma, dmabuf->file); - vma->vm_pgoff = pgoff; + dma_resv_assert_held(dmabuf->resv); - return dmabuf->ops->mmap(dmabuf, vma); + if (dmabuf->vmapping_counter) { + dmabuf->vmapping_counter++; + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); + *map = dmabuf->vmap_ptr; + return ret; + } + + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); + + ret = dmabuf->ops->vmap(dmabuf, &ptr); + if (WARN_ON_ONCE(ret)) + return ret; + + dmabuf->vmap_ptr = ptr; + dmabuf->vmapping_counter = 1; + + *map = dmabuf->vmap_ptr; + + return 0; } -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); /** * dma_buf_vmap - Create virtual mapping for the buffer object into kernel @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); */ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) { - struct iosys_map ptr; - int ret = 0; + int ret; iosys_map_clear(map); @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) if (!dmabuf->ops->vmap) return -EINVAL; - mutex_lock(&dmabuf->lock); - if (dmabuf->vmapping_counter) { - dmabuf->vmapping_counter++; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); - *map = dmabuf->vmap_ptr; - goto out_unlock; - } - - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); - - ret = dmabuf->ops->vmap(dmabuf, &ptr); - if (WARN_ON_ONCE(ret)) - goto out_unlock; - - dmabuf->vmap_ptr = ptr; - dmabuf->vmapping_counter = 1; - - *map = dmabuf->vmap_ptr; + dma_resv_lock(dmabuf->resv, NULL); + ret = dma_buf_vmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); -out_unlock: - mutex_unlock(&dmabuf->lock); return ret; } EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); -/** - * dma_buf_vunmap - Unmap a vmap 
obtained by dma_buf_vmap. - * @dmabuf: [in] buffer to vunmap - * @map: [in] vmap pointer to vunmap - */ -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) { - if (WARN_ON(!dmabuf)) - return; - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); BUG_ON(dmabuf->vmapping_counter == 0); BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); - mutex_lock(&dmabuf->lock); if (--dmabuf->vmapping_counter == 0) { if (dmabuf->ops->vunmap) dmabuf->ops->vunmap(dmabuf, map); iosys_map_clear(&dmabuf->vmap_ptr); } - mutex_unlock(&dmabuf->lock); +} + +/** + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. + * @dmabuf: [in] buffer to vunmap + * @map: [in] vmap pointer to vunmap + */ +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) +{ + if (WARN_ON(!dmabuf)) + return; + + dma_resv_lock(dmabuf->resv, NULL); + dma_buf_vunmap_locked(dmabuf, map); + dma_resv_unlock(dmabuf->resv); } EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c index be6f76a30ac6..b704bdf5601a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, struct sg_table *sgt; attach = gtt->gobj->import_attach; - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); + sgt = dma_buf_map_attachment_locked(attach, + DMA_BIDIRECTIONAL); if (IS_ERR(sgt)) return PTR_ERR(sgt); @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, struct dma_buf_attachment *attach; attach = gtt->gobj->import_attach; - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_locked(attach, ttm->sg, + DMA_BIDIRECTIONAL); ttm->sg = NULL; } diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index af3b7395bf69..e9a1cd310352 100644 --- 
a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - ret = drm_gem_vmap(buffer->gem, map); + ret = drm_gem_vmap_unlocked(buffer->gem, map); if (ret) return ret; @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { struct iosys_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, map); + drm_gem_vunmap_unlocked(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 7c0b025508e4..c61674887582 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, vma->vm_ops = obj->funcs->vm_ops; if (obj->funcs->mmap) { + ret = dma_resv_lock_interruptible(obj->resv, NULL); + if (ret) + goto err_drm_gem_object_put; + ret = obj->funcs->mmap(obj, vma); + dma_resv_unlock(obj->resv); if (ret) goto err_drm_gem_object_put; WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->pin) return obj->funcs->pin(obj); else @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) void drm_gem_unpin(struct drm_gem_object *obj) { + dma_resv_assert_held(obj->resv); + if (obj->funcs->unpin) obj->funcs->unpin(obj); } @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map) { int ret; + dma_resv_assert_held(obj->resv); + if (!obj->funcs->vmap) return -EOPNOTSUPP; @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) { + dma_resv_assert_held(obj->resv); + if (iosys_map_is_null(map)) return; @@ -1200,6 
+1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) } EXPORT_SYMBOL(drm_gem_vunmap); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + int ret; + + dma_resv_lock(obj->resv, NULL); + ret = drm_gem_vmap(obj, map); + dma_resv_unlock(obj->resv); + + return ret; +} +EXPORT_SYMBOL(drm_gem_vmap_unlocked); + +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) +{ + dma_resv_lock(obj->resv, NULL); + drm_gem_vunmap(obj, map); + dma_resv_unlock(obj->resv); +} +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); + /** * drm_gem_lock_reservations - Sets up the ww context and acquires * the lock on an array of GEM objects. diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c index f4619803acd0..a0bff53b158e 100644 --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, iosys_map_clear(&map[i]); continue; } - ret = drm_gem_vmap(obj, &map[i]); + ret = drm_gem_vmap_unlocked(obj, &map[i]); if (ret) goto err_drm_gem_vunmap; } @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, obj = drm_gem_fb_get_obj(fb, i); if (!obj) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } return ret; } @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, continue; if (iosys_map_is_null(&map[i])) continue; - drm_gem_vunmap(obj, &map[i]); + drm_gem_vunmap_unlocked(obj, &map[i]); } } EXPORT_SYMBOL(drm_gem_fb_vunmap); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c index f5062d0c6333..09502d490da8 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); void *vaddr; - 
vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); if (IS_ERR(vaddr)) return PTR_ERR(vaddr); @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) assert_object_held(obj); - pages = dma_buf_map_attachment(obj->base.import_attach, - DMA_BIDIRECTIONAL); + pages = dma_buf_map_attachment_locked(obj->base.import_attach, + DMA_BIDIRECTIONAL); if (IS_ERR(pages)) return PTR_ERR(pages); @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, struct sg_table *pages) { - dma_buf_unmap_attachment(obj->base.import_attach, pages, - DMA_BIDIRECTIONAL); + dma_buf_unmap_attachment_locked(obj->base.import_attach, pages, + DMA_BIDIRECTIONAL); } static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index b42a657e4c2f..a64cd635fbc0 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) bo->map_count++; goto out; } - r = ttm_bo_vmap(&bo->tbo, &bo->map); + + r = __qxl_bo_pin(bo); if (r) return r; + + r = ttm_bo_vmap(&bo->tbo, &bo->map); + if (r) { + __qxl_bo_unpin(bo); + return r; + } bo->map_count = 1; /* TODO: Remove kptr in favor of map everywhere. 
*/ @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) if (r) return r; - r = __qxl_bo_pin(bo); - if (r) { - qxl_bo_unreserve(bo); - return r; - } - r = qxl_bo_vmap_locked(bo, map); qxl_bo_unreserve(bo); return r; @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) return; bo->kptr = NULL; ttm_bo_vunmap(&bo->tbo, &bo->map); + __qxl_bo_unpin(bo); } int qxl_bo_vunmap(struct qxl_bo *bo) @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) return r; qxl_bo_vunmap_locked(bo); - __qxl_bo_unpin(bo); qxl_bo_unreserve(bo); return 0; } diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 142d01415acb..9169c26357d3 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) struct qxl_bo *bo = gem_to_qxl_bo(obj); int ret; - ret = qxl_bo_vmap(bo, map); + ret = qxl_bo_vmap_locked(bo, map); if (ret < 0) return ret; @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, { struct qxl_bo *bo = gem_to_qxl_bo(obj); - qxl_bo_vunmap(bo); + qxl_bo_vunmap_locked(bo); } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c index 678b359717c4..617062076370 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dc_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous 
cache */ if (attach->dma_dir != DMA_NONE) { @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, DMA_ATTR_SKIP_CPU_SYNC)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c index fa69158a65b1..d2075e7078cd 100644 --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_dma_sg_attachment *attach = db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c index 948152f1596b..3d00a7f0aac1 100644 --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) { struct vb2_vmalloc_attachment *attach = 
db_attach->priv; - /* stealing dmabuf mutex to serialize map/unmap operations */ - struct mutex *lock = &db_attach->dmabuf->lock; struct sg_table *sgt; - mutex_lock(lock); - sgt = &attach->sgt; /* return previously mapped sg table */ - if (attach->dma_dir == dma_dir) { - mutex_unlock(lock); + if (attach->dma_dir == dma_dir) return sgt; - } /* release any previous cache */ if (attach->dma_dir != DMA_NONE) { @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( /* mapping to the client with new direction */ if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { pr_err("failed to map scatterlist\n"); - mutex_unlock(lock); return ERR_PTR(-EIO); } attach->dma_dir = dma_dir; - mutex_unlock(lock); - return sgt; } diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 9d7c61a122dc..0b427939f466 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -410,4 +410,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset); +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); + #endif /* __DRM_GEM_H__ */ diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h index 71731796c8c3..23698c6b1d1e 100644 --- a/include/linux/dma-buf.h +++ b/include/linux/dma-buf.h @@ -326,15 +326,6 @@ struct dma_buf { /** @ops: dma_buf_ops associated with this buffer object. */ const struct dma_buf_ops *ops; - /** - * @lock: - * - * Used internally to serialize list manipulation, attach/detach and - * vmap/unmap. Note that in many cases this is superseeded by - * dma_resv_lock() on @resv. 
- */ - struct mutex lock; - /** * @vmapping_counter: * @@ -618,6 +609,11 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags); struct dma_buf *dma_buf_get(int fd); void dma_buf_put(struct dma_buf *dmabuf); +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *, + enum dma_data_direction); +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *, + struct sg_table *, + enum dma_data_direction); struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *, enum dma_data_direction); void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, -- 2.35.3 ^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention 2022-05-26 23:50 ` Dmitry Osipenko (?) @ 2022-05-27 2:37 ` kernel test robot 2022-05-27 12:44 ` Dmitry Osipenko -1 siblings, 1 reply; 29+ messages in thread From: kernel test robot @ 2022-05-27 2:37 UTC (permalink / raw) To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Christian König, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula Cc: llvm, kbuild-all, linux-media Hi Dmitry, I love your patch! Perhaps something to improve: [auto build test WARNING on linus/master] [cannot apply to drm/drm-next media-tree/master drm-intel/for-linux-next v5.18] [If your patch is applied to the wrong git tree, kindly drop us a note. 
And when submitting patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch] url: https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717 base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git cdeffe87f790dfd1baa193020411ce9a538446d7 config: hexagon-randconfig-r045-20220524 (https://download.01.org/0day-ci/archive/20220527/202205271042.1WRbRP1r-lkp@intel.com/config) compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 6f4644d194da594562027a5d458d9fb7a20ebc39) reproduce (this is a W=1 build): wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross chmod +x ~/bin/make.cross # https://github.com/intel-lab-lkp/linux/commit/97f090c47ec995a8cf3bced98526ee3eaa25f10f git remote add linux-review https://github.com/intel-lab-lkp/linux git fetch --no-tags linux-review Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717 git checkout 97f090c47ec995a8cf3bced98526ee3eaa25f10f # save the config file mkdir build_dir && cp config build_dir/.config COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=hexagon SHELL=/bin/bash drivers/dma-buf/ If you fix the issue, kindly add following tag where applicable Reported-by: kernel test robot <lkp@intel.com> All warnings (new ones prefixed by >>): >> drivers/dma-buf/dma-buf.c:1339:10: warning: variable 'ret' is uninitialized when used here [-Wuninitialized] return ret; ^~~ drivers/dma-buf/dma-buf.c:1331:9: note: initialize the variable 'ret' to silence this warning int ret; ^ = 0 1 warning generated. 
vim +/ret +1339 drivers/dma-buf/dma-buf.c 1327 1328 static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) 1329 { 1330 struct iosys_map ptr; 1331 int ret; 1332 1333 dma_resv_assert_held(dmabuf->resv); 1334 1335 if (dmabuf->vmapping_counter) { 1336 dmabuf->vmapping_counter++; 1337 BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); 1338 *map = dmabuf->vmap_ptr; > 1339 return ret; 1340 } 1341 1342 BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); 1343 1344 ret = dmabuf->ops->vmap(dmabuf, &ptr); 1345 if (WARN_ON_ONCE(ret)) 1346 return ret; 1347 1348 dmabuf->vmap_ptr = ptr; 1349 dmabuf->vmapping_counter = 1; 1350 1351 *map = dmabuf->vmap_ptr; 1352 1353 return 0; 1354 } 1355 -- 0-DAY CI Kernel Test Service https://01.org/lkp ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention 2022-05-27 2:37 ` kernel test robot @ 2022-05-27 12:44 ` Dmitry Osipenko 0 siblings, 0 replies; 29+ messages in thread From: Dmitry Osipenko @ 2022-05-27 12:44 UTC (permalink / raw) To: kernel test robot, David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Christian König, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula Cc: llvm, kbuild-all, linux-media On 5/27/22 05:37, kernel test robot wrote: >>> drivers/dma-buf/dma-buf.c:1339:10: warning: variable 'ret' is uninitialized when used here [-Wuninitialized] > return ret; > ^~~ > drivers/dma-buf/dma-buf.c:1331:9: note: initialize the variable 'ret' to silence this warning > int ret; > ^ > = 0 > 1 warning generated. Interestingly, GCC doesn't detect this problem and it's a valid warning. I'll correct it in v7. -- Best regards, Dmitry ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention 2022-05-26 23:50 ` Dmitry Osipenko (?) @ 2022-05-30 6:50 ` Christian König -1 siblings, 0 replies; 29+ messages in thread From: Christian König via Virtualization @ 2022-05-30 6:50 UTC (permalink / raw) To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig, amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media Hi Dmitry, First of all please separate out this patch from the rest of the series, since this is a complex separate structural change. Am 27.05.22 um 01:50 schrieb Dmitry Osipenko: > All dma-bufs have dma-reservation lock that allows drivers to perform > exclusive operations over shared dma-bufs. Today's dma-buf API has > incomplete locking specification, which creates dead lock situation > for dma-buf importers and exporters that don't coordinate theirs locks. Well please drop that sentence. The locking specifications are actually very well defined, it's just that some drivers are a bit broken regarding them. What you do here is rather moving all the non-dynamic drivers over to the dynamic locking specification (which is really nice to have). I have tried this before and failed because catching all the locks in the right code paths are very tricky. So expect some fallout from this and make sure the kernel test robot and CI systems are clean. > This patch introduces new locking convention for dma-buf users. 
From now > on all dma-buf importers are responsible for holding dma-buf reservation > lock around operations performed over dma-bufs. > > This patch implements the new dma-buf locking convention by: > > 1. Making dma-buf API functions to take the reservation lock. > > 2. Adding new locked variants of the dma-buf API functions for drivers > that need to manage imported dma-bufs under the held lock. Instead of adding new locked variants please mark all variants which expect to be called without a lock with an _unlocked postfix. This should make it easier to remove those in a follow up patch set and then fully move the locking into the importer. > 3. Converting all drivers to the new locking scheme. I have strong doubts that you got all of them. At least radeon and nouveau should grab the reservation lock in their ->attach callbacks somehow. > > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> > --- > drivers/dma-buf/dma-buf.c | 270 +++++++++++------- > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- > drivers/gpu/drm/drm_client.c | 4 +- > drivers/gpu/drm/drm_gem.c | 33 +++ > drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- > drivers/gpu/drm/qxl/qxl_object.c | 17 +- > drivers/gpu/drm/qxl/qxl_prime.c | 4 +- > .../common/videobuf2/videobuf2-dma-contig.c | 11 +- > .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- > .../common/videobuf2/videobuf2-vmalloc.c | 11 +- > include/drm/drm_gem.h | 3 + > include/linux/dma-buf.h | 14 +- > 13 files changed, 241 insertions(+), 159 deletions(-) > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c > index 32f55640890c..64a9909ccfa2 100644 > --- a/drivers/dma-buf/dma-buf.c > +++ b/drivers/dma-buf/dma-buf.c > @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) > file->f_mode |= FMODE_LSEEK; > dmabuf->file = file; > > - mutex_init(&dmabuf->lock); Please make removing dmabuf->lock a separate change. 
Regards, Christian. > INIT_LIST_HEAD(&dmabuf->attachments); > > mutex_lock(&db_list.lock); > @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > attach->importer_ops = importer_ops; > attach->importer_priv = importer_priv; > > + dma_resv_lock(dmabuf->resv, NULL); > + > if (dmabuf->ops->attach) { > ret = dmabuf->ops->attach(dmabuf, attach); > if (ret) > goto err_attach; > } > - dma_resv_lock(dmabuf->resv, NULL); > list_add(&attach->node, &dmabuf->attachments); > - dma_resv_unlock(dmabuf->resv); > > /* When either the importer or the exporter can't handle dynamic > * mappings we cache the mapping here to avoid issues with the > @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > struct sg_table *sgt; > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_lock(attach->dmabuf->resv, NULL); > ret = dmabuf->ops->pin(attach); > if (ret) > goto err_unlock; > @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > ret = PTR_ERR(sgt); > goto err_unpin; > } > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > attach->sgt = sgt; > attach->dir = DMA_BIDIRECTIONAL; > } > > + dma_resv_unlock(dmabuf->resv); > + > return attach; > > err_attach: > + dma_resv_unlock(attach->dmabuf->resv); > kfree(attach); > return ERR_PTR(ret); > > @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > dmabuf->ops->unpin(attach); > > err_unlock: > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > + dma_resv_unlock(dmabuf->resv); > > dma_buf_detach(dmabuf, attach); > + > return ERR_PTR(ret); > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); > @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) > if (WARN_ON(!dmabuf || !attach)) > return; > > - if (attach->sgt) { > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_lock(attach->dmabuf->resv, NULL); > + if (WARN_ON(dmabuf != attach->dmabuf)) > + return; > > + dma_resv_lock(dmabuf->resv, NULL); > + > + if (attach->sgt) { > __unmap_dma_buf(attach, attach->sgt, attach->dir); > > - if (dma_buf_is_dynamic(attach->dmabuf)) { > + if (dma_buf_is_dynamic(attach->dmabuf)) > dmabuf->ops->unpin(attach); > - dma_resv_unlock(attach->dmabuf->resv); > - } > } > > - dma_resv_lock(dmabuf->resv, NULL); > list_del(&attach->node); > - dma_resv_unlock(dmabuf->resv); > if (dmabuf->ops->detach) > dmabuf->ops->detach(dmabuf, attach); > > + dma_resv_unlock(dmabuf->resv); > kfree(attach); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); > @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) > EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); > > /** > - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; > * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. > * @attach: [in] attachment whose scatterlist is to be returned > * @direction: [in] direction of DMA transfer > * > - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > - * on error. May return -EINTR if it is interrupted by a signal. > - * > - * On success, the DMA addresses and lengths in the returned scatterlist are > - * PAGE_SIZE aligned. > - * > - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > - * the underlying backing storage is pinned for as long as a mapping exists, > - * therefore users/importers should not hold onto a mapping for undue amounts of > - * time. 
> + * Locked variant of dma_buf_map_attachment(). > * > - * Important: Dynamic importers must wait for the exclusive fence of the struct > - * dma_resv attached to the DMA-BUF first. > + * Caller is responsible for holding dmabuf's reservation lock. > */ > -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > - enum dma_data_direction direction) > +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, > + enum dma_data_direction direction) > { > struct sg_table *sg_table; > int r; > @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > if (WARN_ON(!attach || !attach->dmabuf)) > return ERR_PTR(-EINVAL); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt) { > /* > @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > } > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_assert_held(attach->dmabuf->resv); > if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { > r = attach->dmabuf->ops->pin(attach); > if (r) > @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > #endif /* CONFIG_DMA_API_DEBUG */ > return sg_table; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); > > /** > - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. 
> - * @attach: [in] attachment to unmap buffer from > - * @sg_table: [in] scatterlist info of the buffer to unmap > - * @direction: [in] direction of DMA transfer > + * @attach: [in] attachment whose scatterlist is to be returned > + * @direction: [in] direction of DMA transfer > * > - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). > + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > + * on error. May return -EINTR if it is interrupted by a signal. > + * > + * On success, the DMA addresses and lengths in the returned scatterlist are > + * PAGE_SIZE aligned. > + * > + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > + * the underlying backing storage is pinned for as long as a mapping exists, > + * therefore users/importers should not hold onto a mapping for undue amounts of > + * time. > + * > + * Important: Dynamic importers must wait for the exclusive fence of the struct > + * dma_resv attached to the DMA-BUF first. > */ > -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > - struct sg_table *sg_table, > +struct sg_table * > +dma_buf_map_attachment(struct dma_buf_attachment *attach, > enum dma_data_direction direction) > { > + struct sg_table *sg_table; > + > might_sleep(); > > - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > - return; > + if (WARN_ON(!attach || !attach->dmabuf)) > + return ERR_PTR(-EINVAL); > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + sg_table = dma_buf_map_attachment_locked(attach, direction); > + dma_resv_unlock(attach->dmabuf->resv); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + return sg_table; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment_locked - Returns the scatterlist table of the attachment; > + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > + * dma_buf_ops. 
> + * @attach: [in] attachment whose scatterlist is to be returned > + * @direction: [in] direction of DMA transfer > + * > + * Locked variant of dma_buf_unmap_attachment(). > + * > + * Caller is responsible for holding dmabuf's reservation lock. > + */ > +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt == sg_table) > return; > > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_assert_held(attach->dmabuf->resv); > - > __unmap_dma_buf(attach, sg_table, direction); > > if (dma_buf_is_dynamic(attach->dmabuf) && > !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) > dma_buf_unpin(attach); > } > +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_ops. > + * @attach: [in] attachment to unmap buffer from > + * @sg_table: [in] scatterlist info of the buffer to unmap > + * @direction: [in] direction of DMA transfer > + * > + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). 
> + */ > +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > + return; > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + dma_buf_unmap_attachment_locked(attach, sg_table, direction); > + dma_resv_unlock(attach->dmabuf->resv); > +} > EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); > > /** > @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, > } > EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > > +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, > + struct vm_area_struct *vma, > + unsigned long pgoff) > +{ > + dma_resv_assert_held(dmabuf->resv); > + > + /* check if buffer supports mmap */ > + if (!dmabuf->ops->mmap) > + return -EINVAL; > + > + /* check for offset overflow */ > + if (pgoff + vma_pages(vma) < pgoff) > + return -EOVERFLOW; > + > + /* check for overflowing the buffer's size */ > + if (pgoff + vma_pages(vma) > > + dmabuf->size >> PAGE_SHIFT) > + return -EINVAL; > + > + /* readjust the vma */ > + vma_set_file(vma, dmabuf->file); > + vma->vm_pgoff = pgoff; > + > + return dmabuf->ops->mmap(dmabuf, vma); > +} > > /** > * dma_buf_mmap - Setup up a userspace mmap with the given vma > @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, > unsigned long pgoff) > { > + int ret; > + > if (WARN_ON(!dmabuf || !vma)) > return -EINVAL; > > - /* check if buffer supports mmap */ > - if (!dmabuf->ops->mmap) > - return -EINVAL; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); > + dma_resv_unlock(dmabuf->resv); > > - /* check for offset overflow */ > - if (pgoff + vma_pages(vma) < pgoff) > - return -EOVERFLOW; > + return ret; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > - /* check for overflowing the 
buffer's size */ > - if (pgoff + vma_pages(vma) > > - dmabuf->size >> PAGE_SHIFT) > - return -EINVAL; > +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + struct iosys_map ptr; > + int ret; > > - /* readjust the vma */ > - vma_set_file(vma, dmabuf->file); > - vma->vm_pgoff = pgoff; > + dma_resv_assert_held(dmabuf->resv); > > - return dmabuf->ops->mmap(dmabuf, vma); > + if (dmabuf->vmapping_counter) { > + dmabuf->vmapping_counter++; > + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > + *map = dmabuf->vmap_ptr; > + return ret; > + } > + > + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > + > + ret = dmabuf->ops->vmap(dmabuf, &ptr); > + if (WARN_ON_ONCE(ret)) > + return ret; > + > + dmabuf->vmap_ptr = ptr; > + dmabuf->vmapping_counter = 1; > + > + *map = dmabuf->vmap_ptr; > + > + return 0; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > /** > * dma_buf_vmap - Create virtual mapping for the buffer object into kernel > @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > */ > int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > { > - struct iosys_map ptr; > - int ret = 0; > + int ret; > > iosys_map_clear(map); > > @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > if (!dmabuf->ops->vmap) > return -EINVAL; > > - mutex_lock(&dmabuf->lock); > - if (dmabuf->vmapping_counter) { > - dmabuf->vmapping_counter++; > - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > - *map = dmabuf->vmap_ptr; > - goto out_unlock; > - } > - > - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > - > - ret = dmabuf->ops->vmap(dmabuf, &ptr); > - if (WARN_ON_ONCE(ret)) > - goto out_unlock; > - > - dmabuf->vmap_ptr = ptr; > - dmabuf->vmapping_counter = 1; > - > - *map = dmabuf->vmap_ptr; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_vmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > > -out_unlock: > - mutex_unlock(&dmabuf->lock); > return ret; > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); > > -/** > - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > - * @dmabuf: [in] buffer to vunmap > - * @map: [in] vmap pointer to vunmap > - */ > -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > { > - if (WARN_ON(!dmabuf)) > - return; > - > BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > BUG_ON(dmabuf->vmapping_counter == 0); > BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); > > - mutex_lock(&dmabuf->lock); > if (--dmabuf->vmapping_counter == 0) { > if (dmabuf->ops->vunmap) > dmabuf->ops->vunmap(dmabuf, map); > iosys_map_clear(&dmabuf->vmap_ptr); > } > - mutex_unlock(&dmabuf->lock); > +} > + > +/** > + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > + * @dmabuf: [in] buffer to vunmap > + * @map: [in] vmap pointer to vunmap > + */ > +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + if (WARN_ON(!dmabuf)) > + return; > + > + dma_resv_lock(dmabuf->resv, NULL); > + dma_buf_vunmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > index be6f76a30ac6..b704bdf5601a 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, > struct sg_table *sgt; > > attach = gtt->gobj->import_attach; > - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); > + sgt = dma_buf_map_attachment_locked(attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(sgt)) > return PTR_ERR(sgt); > > @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, > struct dma_buf_attachment *attach; > > attach = gtt->gobj->import_attach; > - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); > + 
dma_buf_unmap_attachment_locked(attach, ttm->sg, > + DMA_BIDIRECTIONAL); > ttm->sg = NULL; > } > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index af3b7395bf69..e9a1cd310352 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. > */ > - ret = drm_gem_vmap(buffer->gem, map); > + ret = drm_gem_vmap_unlocked(buffer->gem, map); > if (ret) > return ret; > > @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > struct iosys_map *map = &buffer->map; > > - drm_gem_vunmap(buffer->gem, map); > + drm_gem_vunmap_unlocked(buffer->gem, map); > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 7c0b025508e4..c61674887582 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, > vma->vm_ops = obj->funcs->vm_ops; > > if (obj->funcs->mmap) { > + ret = dma_resv_lock_interruptible(obj->resv, NULL); > + if (ret) > + goto err_drm_gem_object_put; > + > ret = obj->funcs->mmap(obj, vma); > + dma_resv_unlock(obj->resv); > if (ret) > goto err_drm_gem_object_put; > WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); > @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, > > int drm_gem_pin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->pin) > return obj->funcs->pin(obj); > else > @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) > > void drm_gem_unpin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->unpin) > obj->funcs->unpin(obj); > } > @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object 
*obj, struct iosys_map *map) > { > int ret; > > + dma_resv_assert_held(obj->resv); > + > if (!obj->funcs->vmap) > return -EOPNOTSUPP; > > @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); > > void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > { > + dma_resv_assert_held(obj->resv); > + > if (iosys_map_is_null(map)) > return; > > @@ -1200,6 +1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > } > EXPORT_SYMBOL(drm_gem_vunmap); > > +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + int ret; > + > + dma_resv_lock(obj->resv, NULL); > + ret = drm_gem_vmap(obj, map); > + dma_resv_unlock(obj->resv); > + > + return ret; > +} > +EXPORT_SYMBOL(drm_gem_vmap_unlocked); > + > +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + dma_resv_lock(obj->resv, NULL); > + drm_gem_vunmap(obj, map); > + dma_resv_unlock(obj->resv); > +} > +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); > + > /** > * drm_gem_lock_reservations - Sets up the ww context and acquires > * the lock on an array of GEM objects. 
> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > index f4619803acd0..a0bff53b158e 100644 > --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c > +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > iosys_map_clear(&map[i]); > continue; > } > - ret = drm_gem_vmap(obj, &map[i]); > + ret = drm_gem_vmap_unlocked(obj, &map[i]); > if (ret) > goto err_drm_gem_vunmap; > } > @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > obj = drm_gem_fb_get_obj(fb, i); > if (!obj) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > return ret; > } > @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, > continue; > if (iosys_map_is_null(&map[i])) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > } > EXPORT_SYMBOL(drm_gem_fb_vunmap); > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > index f5062d0c6333..09502d490da8 100644 > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, > struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); > void *vaddr; > > - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); > + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); > if (IS_ERR(vaddr)) > return PTR_ERR(vaddr); > > @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > > assert_object_held(obj); > > - pages = dma_buf_map_attachment(obj->base.import_attach, > - DMA_BIDIRECTIONAL); > + pages = dma_buf_map_attachment_locked(obj->base.import_attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(pages)) > return PTR_ERR(pages); > > @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > 
static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, > struct sg_table *pages) > { > - dma_buf_unmap_attachment(obj->base.import_attach, pages, > - DMA_BIDIRECTIONAL); > + dma_buf_unmap_attachment_locked(obj->base.import_attach, pages, > + DMA_BIDIRECTIONAL); > } > > static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { > diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c > index b42a657e4c2f..a64cd635fbc0 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.c > +++ b/drivers/gpu/drm/qxl/qxl_object.c > @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) > bo->map_count++; > goto out; > } > - r = ttm_bo_vmap(&bo->tbo, &bo->map); > + > + r = __qxl_bo_pin(bo); > if (r) > return r; > + > + r = ttm_bo_vmap(&bo->tbo, &bo->map); > + if (r) { > + __qxl_bo_unpin(bo); > + return r; > + } > bo->map_count = 1; > > /* TODO: Remove kptr in favor of map everywhere. */ > @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) > if (r) > return r; > > - r = __qxl_bo_pin(bo); > - if (r) { > - qxl_bo_unreserve(bo); > - return r; > - } > - > r = qxl_bo_vmap_locked(bo, map); > qxl_bo_unreserve(bo); > return r; > @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) > return; > bo->kptr = NULL; > ttm_bo_vunmap(&bo->tbo, &bo->map); > + __qxl_bo_unpin(bo); > } > > int qxl_bo_vunmap(struct qxl_bo *bo) > @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) > return r; > > qxl_bo_vunmap_locked(bo); > - __qxl_bo_unpin(bo); > qxl_bo_unreserve(bo); > return 0; > } > diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c > index 142d01415acb..9169c26357d3 100644 > --- a/drivers/gpu/drm/qxl/qxl_prime.c > +++ b/drivers/gpu/drm/qxl/qxl_prime.c > @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) > struct qxl_bo *bo = gem_to_qxl_bo(obj); > int ret; > > - ret = qxl_bo_vmap(bo, map); > + ret = 
qxl_bo_vmap_locked(bo, map); > if (ret < 0) > return ret; > > @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > > - qxl_bo_vunmap(bo); > + qxl_bo_vunmap_locked(bo); > } > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > index 678b359717c4..617062076370 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dc_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, > DMA_ATTR_SKIP_CPU_SYNC)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > - > return sgt; > } > > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > index fa69158a65b1..d2075e7078cd 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dma_sg_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to 
serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( > /* mapping to the client with new direction */ > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > - > return sgt; > } > > diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c > index 948152f1596b..3d00a7f0aac1 100644 > --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c > +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c > @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_vmalloc_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( > /* mapping to the client with new direction */ > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > 
- > return sgt; > } > > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h > index 9d7c61a122dc..0b427939f466 100644 > --- a/include/drm/drm_gem.h > +++ b/include/drm/drm_gem.h > @@ -410,4 +410,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, > int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, > u32 handle, u64 *offset); > > +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); > +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); > + > #endif /* __DRM_GEM_H__ */ > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h > index 71731796c8c3..23698c6b1d1e 100644 > --- a/include/linux/dma-buf.h > +++ b/include/linux/dma-buf.h > @@ -326,15 +326,6 @@ struct dma_buf { > /** @ops: dma_buf_ops associated with this buffer object. */ > const struct dma_buf_ops *ops; > > - /** > - * @lock: > - * > - * Used internally to serialize list manipulation, attach/detach and > - * vmap/unmap. Note that in many cases this is superseeded by > - * dma_resv_lock() on @resv. > - */ > - struct mutex lock; > - > /** > * @vmapping_counter: > * > @@ -618,6 +609,11 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags); > struct dma_buf *dma_buf_get(int fd); > void dma_buf_put(struct dma_buf *dmabuf); > > +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *, > + enum dma_data_direction); > +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *, > + struct sg_table *, > + enum dma_data_direction); > struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *, > enum dma_data_direction); > void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, _______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/virtualization ^ permalink raw reply [flat|nested] 29+ messages in thread
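The hunks quoted above convert call sites from drm_gem_vmap()/drm_gem_vunmap() to the new _unlocked wrappers, which take the reservation lock themselves. The wrapper pattern is simple; here is a minimal user-space sketch of it, where all demo_* names are hypothetical stand-ins and a pthread mutex models the dma_resv reservation lock (this is not the real DRM API):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-ins only -- not the real DRM types. A pthread
 * mutex models the dma_resv reservation lock held around vmap. */
struct demo_gem_object {
	pthread_mutex_t resv;	/* models obj->resv */
	void *vaddr;		/* models the cached vmap pointer */
};

/* Locked variant: the caller must already hold the reservation lock
 * (the kernel code asserts this with dma_resv_assert_held()). */
static int demo_gem_vmap(struct demo_gem_object *obj, void **map)
{
	obj->vaddr = (void *)obj;	/* pretend the buffer was mapped */
	*map = obj->vaddr;
	return 0;
}

/* Unlocked wrapper, mirroring the shape of drm_gem_vmap_unlocked()
 * from the patch: take the lock, call the locked body, drop the lock. */
static int demo_gem_vmap_unlocked(struct demo_gem_object *obj, void **map)
{
	int ret;

	pthread_mutex_lock(&obj->resv);
	ret = demo_gem_vmap(obj, map);
	pthread_mutex_unlock(&obj->resv);
	return ret;
}
```

Exposing both variants lets a caller that already holds the lock compose the locked body with other operations under a single acquisition, while legacy call sites keep working through the wrapper.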
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention @ 2022-05-30 6:50 ` Christian König 0 siblings, 0 replies; 29+ messages in thread From: Christian König @ 2022-05-30 6:50 UTC (permalink / raw) To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig, amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media Hi Dmitry, First of all, please separate out this patch from the rest of the series, since it is a complex structural change in its own right. On 27.05.22 01:50, Dmitry Osipenko wrote: > All dma-bufs have a dma-reservation lock that allows drivers to perform > exclusive operations over shared dma-bufs. Today's dma-buf API has an > incomplete locking specification, which creates a deadlock situation > for dma-buf importers and exporters that don't coordinate their locks. Well, please drop that sentence. The locking specifications are actually very well defined; it's just that some drivers are a bit broken regarding them. What you do here is rather move all the non-dynamic drivers over to the dynamic locking specification (which is really nice to have). I have tried this before and failed, because catching all the locks in the right code paths is very tricky. So expect some fallout from this, and make sure the kernel test robot and CI systems are clean. > This patch introduces a new locking convention for dma-buf users. From now > on, all dma-buf importers are responsible for holding the dma-buf reservation > lock around operations performed over dma-bufs. 
> > This patch implements the new dma-buf locking convention by: > > 1. Making dma-buf API functions to take the reservation lock. > > 2. Adding new locked variants of the dma-buf API functions for drivers > that need to manage imported dma-bufs under the held lock. Instead of adding new locked variants please mark all variants which expect to be called without a lock with an _unlocked postfix. This should make it easier to remove those in a follow up patch set and then fully move the locking into the importer. > 3. Converting all drivers to the new locking scheme. I have strong doubts that you got all of them. At least radeon and nouveau should grab the reservation lock in their ->attach callbacks somehow. > > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> > --- > drivers/dma-buf/dma-buf.c | 270 +++++++++++------- > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- > drivers/gpu/drm/drm_client.c | 4 +- > drivers/gpu/drm/drm_gem.c | 33 +++ > drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- > drivers/gpu/drm/qxl/qxl_object.c | 17 +- > drivers/gpu/drm/qxl/qxl_prime.c | 4 +- > .../common/videobuf2/videobuf2-dma-contig.c | 11 +- > .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- > .../common/videobuf2/videobuf2-vmalloc.c | 11 +- > include/drm/drm_gem.h | 3 + > include/linux/dma-buf.h | 14 +- > 13 files changed, 241 insertions(+), 159 deletions(-) > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c > index 32f55640890c..64a9909ccfa2 100644 > --- a/drivers/dma-buf/dma-buf.c > +++ b/drivers/dma-buf/dma-buf.c > @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) > file->f_mode |= FMODE_LSEEK; > dmabuf->file = file; > > - mutex_init(&dmabuf->lock); Please make removing dmabuf->lock a separate change. Regards, Christian. 
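The convention under discussion — the importer, not the dma-buf core, holds the reservation lock around map/unmap — can be sketched in plain C. All names below are hypothetical stand-ins (a pthread mutex models dmabuf->resv and a counter models the cached sg_table state), not the real dma-buf API:

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical stand-ins, not the real dma-buf types: a pthread mutex
 * models dmabuf->resv, map_count models the cached mapping state. */
struct demo_dmabuf {
	pthread_mutex_t resv;
	int map_count;
};

/* _locked variants: under the new convention the importer is expected
 * to hold the reservation lock when calling these. */
static int demo_map_attachment_locked(struct demo_dmabuf *buf)
{
	buf->map_count++;	/* stands in for ops->map_dma_buf() */
	return 0;
}

static void demo_unmap_attachment_locked(struct demo_dmabuf *buf)
{
	buf->map_count--;	/* stands in for ops->unmap_dma_buf() */
}

/* Importer side: take the reservation lock once and batch several
 * operations under it, instead of relying on a hidden internal mutex. */
static int demo_importer_use(struct demo_dmabuf *buf)
{
	int ret;

	pthread_mutex_lock(&buf->resv);
	ret = demo_map_attachment_locked(buf);
	if (!ret)
		demo_unmap_attachment_locked(buf);
	pthread_mutex_unlock(&buf->resv);
	return ret;
}
```

The design point is that one well-known lock protects both the exporter's state and the importer's use of it, so importer and exporter can no longer deadlock by taking private locks in different orders.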
> INIT_LIST_HEAD(&dmabuf->attachments); > > mutex_lock(&db_list.lock); > @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > attach->importer_ops = importer_ops; > attach->importer_priv = importer_priv; > > + dma_resv_lock(dmabuf->resv, NULL); > + > if (dmabuf->ops->attach) { > ret = dmabuf->ops->attach(dmabuf, attach); > if (ret) > goto err_attach; > } > - dma_resv_lock(dmabuf->resv, NULL); > list_add(&attach->node, &dmabuf->attachments); > - dma_resv_unlock(dmabuf->resv); > > /* When either the importer or the exporter can't handle dynamic > * mappings we cache the mapping here to avoid issues with the > @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > struct sg_table *sgt; > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_lock(attach->dmabuf->resv, NULL); > ret = dmabuf->ops->pin(attach); > if (ret) > goto err_unlock; > @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > ret = PTR_ERR(sgt); > goto err_unpin; > } > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > attach->sgt = sgt; > attach->dir = DMA_BIDIRECTIONAL; > } > > + dma_resv_unlock(dmabuf->resv); > + > return attach; > > err_attach: > + dma_resv_unlock(attach->dmabuf->resv); > kfree(attach); > return ERR_PTR(ret); > > @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > dmabuf->ops->unpin(attach); > > err_unlock: > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > + dma_resv_unlock(dmabuf->resv); > > dma_buf_detach(dmabuf, attach); > + > return ERR_PTR(ret); > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); > @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) > if (WARN_ON(!dmabuf || !attach)) > return; > > - if (attach->sgt) { > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_lock(attach->dmabuf->resv, NULL); > + if (WARN_ON(dmabuf != attach->dmabuf)) > + return; > > + dma_resv_lock(dmabuf->resv, NULL); > + > + if (attach->sgt) { > __unmap_dma_buf(attach, attach->sgt, attach->dir); > > - if (dma_buf_is_dynamic(attach->dmabuf)) { > + if (dma_buf_is_dynamic(attach->dmabuf)) > dmabuf->ops->unpin(attach); > - dma_resv_unlock(attach->dmabuf->resv); > - } > } > > - dma_resv_lock(dmabuf->resv, NULL); > list_del(&attach->node); > - dma_resv_unlock(dmabuf->resv); > if (dmabuf->ops->detach) > dmabuf->ops->detach(dmabuf, attach); > > + dma_resv_unlock(dmabuf->resv); > kfree(attach); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); > @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) > EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); > > /** > - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; > * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. > * @attach: [in] attachment whose scatterlist is to be returned > * @direction: [in] direction of DMA transfer > * > - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > - * on error. May return -EINTR if it is interrupted by a signal. > - * > - * On success, the DMA addresses and lengths in the returned scatterlist are > - * PAGE_SIZE aligned. > - * > - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > - * the underlying backing storage is pinned for as long as a mapping exists, > - * therefore users/importers should not hold onto a mapping for undue amounts of > - * time. 
> + * Locked variant of dma_buf_map_attachment(). > * > - * Important: Dynamic importers must wait for the exclusive fence of the struct > - * dma_resv attached to the DMA-BUF first. > + * Caller is responsible for holding dmabuf's reservation lock. > */ > -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > - enum dma_data_direction direction) > +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, > + enum dma_data_direction direction) > { > struct sg_table *sg_table; > int r; > @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > if (WARN_ON(!attach || !attach->dmabuf)) > return ERR_PTR(-EINVAL); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt) { > /* > @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > } > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_assert_held(attach->dmabuf->resv); > if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { > r = attach->dmabuf->ops->pin(attach); > if (r) > @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > #endif /* CONFIG_DMA_API_DEBUG */ > return sg_table; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); > > /** > - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. 
> - * @attach: [in] attachment to unmap buffer from > - * @sg_table: [in] scatterlist info of the buffer to unmap > - * @direction: [in] direction of DMA transfer > + * @attach: [in] attachment whose scatterlist is to be returned > + * @direction: [in] direction of DMA transfer > * > - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). > + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > + * on error. May return -EINTR if it is interrupted by a signal. > + * > + * On success, the DMA addresses and lengths in the returned scatterlist are > + * PAGE_SIZE aligned. > + * > + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > + * the underlying backing storage is pinned for as long as a mapping exists, > + * therefore users/importers should not hold onto a mapping for undue amounts of > + * time. > + * > + * Important: Dynamic importers must wait for the exclusive fence of the struct > + * dma_resv attached to the DMA-BUF first. > */ > -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > - struct sg_table *sg_table, > +struct sg_table * > +dma_buf_map_attachment(struct dma_buf_attachment *attach, > enum dma_data_direction direction) > { > + struct sg_table *sg_table; > + > might_sleep(); > > - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > - return; > + if (WARN_ON(!attach || !attach->dmabuf)) > + return ERR_PTR(-EINVAL); > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + sg_table = dma_buf_map_attachment_locked(attach, direction); > + dma_resv_unlock(attach->dmabuf->resv); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + return sg_table; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment_locked - unmaps and decreases usecount of the buffer; > + * might deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_ops.
> + * @attach: [in] attachment to unmap buffer from > + * @sg_table: [in] scatterlist info of the buffer to unmap > + * @direction: [in] direction of DMA transfer > + * > + * Locked variant of dma_buf_unmap_attachment(). > + * > + * Caller is responsible for holding dmabuf's reservation lock. > + */ > +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt == sg_table) > return; > > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_assert_held(attach->dmabuf->resv); > - > __unmap_dma_buf(attach, sg_table, direction); > > if (dma_buf_is_dynamic(attach->dmabuf) && > !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) > dma_buf_unpin(attach); > } > +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_ops. > + * @attach: [in] attachment to unmap buffer from > + * @sg_table: [in] scatterlist info of the buffer to unmap > + * @direction: [in] direction of DMA transfer > + * > + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). 
> + */ > +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > + return; > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + dma_buf_unmap_attachment_locked(attach, sg_table, direction); > + dma_resv_unlock(attach->dmabuf->resv); > +} > EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); > > /** > @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, > } > EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > > +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, > + struct vm_area_struct *vma, > + unsigned long pgoff) > +{ > + dma_resv_assert_held(dmabuf->resv); > + > + /* check if buffer supports mmap */ > + if (!dmabuf->ops->mmap) > + return -EINVAL; > + > + /* check for offset overflow */ > + if (pgoff + vma_pages(vma) < pgoff) > + return -EOVERFLOW; > + > + /* check for overflowing the buffer's size */ > + if (pgoff + vma_pages(vma) > > + dmabuf->size >> PAGE_SHIFT) > + return -EINVAL; > + > + /* readjust the vma */ > + vma_set_file(vma, dmabuf->file); > + vma->vm_pgoff = pgoff; > + > + return dmabuf->ops->mmap(dmabuf, vma); > +} > > /** > * dma_buf_mmap - Setup up a userspace mmap with the given vma > @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, > unsigned long pgoff) > { > + int ret; > + > if (WARN_ON(!dmabuf || !vma)) > return -EINVAL; > > - /* check if buffer supports mmap */ > - if (!dmabuf->ops->mmap) > - return -EINVAL; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); > + dma_resv_unlock(dmabuf->resv); > > - /* check for offset overflow */ > - if (pgoff + vma_pages(vma) < pgoff) > - return -EOVERFLOW; > + return ret; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > - /* check for overflowing the 
buffer's size */ > - if (pgoff + vma_pages(vma) > > - dmabuf->size >> PAGE_SHIFT) > - return -EINVAL; > +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + struct iosys_map ptr; > + int ret; > > - /* readjust the vma */ > - vma_set_file(vma, dmabuf->file); > - vma->vm_pgoff = pgoff; > + dma_resv_assert_held(dmabuf->resv); > > - return dmabuf->ops->mmap(dmabuf, vma); > + if (dmabuf->vmapping_counter) { > + dmabuf->vmapping_counter++; > + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > + *map = dmabuf->vmap_ptr; > + return 0; > + } > + > + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > + > + ret = dmabuf->ops->vmap(dmabuf, &ptr); > + if (WARN_ON_ONCE(ret)) > + return ret; > + > + dmabuf->vmap_ptr = ptr; > + dmabuf->vmapping_counter = 1; > + > + *map = dmabuf->vmap_ptr; > + > + return 0; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > /** > * dma_buf_vmap - Create virtual mapping for the buffer object into kernel > @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > */ > int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > { > - struct iosys_map ptr; > - int ret = 0; > + int ret; > > iosys_map_clear(map); > > @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > if (!dmabuf->ops->vmap) > return -EINVAL; > > - mutex_lock(&dmabuf->lock); > - if (dmabuf->vmapping_counter) { > - dmabuf->vmapping_counter++; > - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > - *map = dmabuf->vmap_ptr; > - goto out_unlock; > - } > - > - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > - > - ret = dmabuf->ops->vmap(dmabuf, &ptr); > - if (WARN_ON_ONCE(ret)) > - goto out_unlock; > - > - dmabuf->vmap_ptr = ptr; > - dmabuf->vmapping_counter = 1; > - > - *map = dmabuf->vmap_ptr; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_vmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > > -out_unlock: > - mutex_unlock(&dmabuf->lock); > return ret; > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); > > -/** > - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > - * @dmabuf: [in] buffer to vunmap > - * @map: [in] vmap pointer to vunmap > - */ > -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > { > - if (WARN_ON(!dmabuf)) > - return; > - > BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > BUG_ON(dmabuf->vmapping_counter == 0); > BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); > > - mutex_lock(&dmabuf->lock); > if (--dmabuf->vmapping_counter == 0) { > if (dmabuf->ops->vunmap) > dmabuf->ops->vunmap(dmabuf, map); > iosys_map_clear(&dmabuf->vmap_ptr); > } > - mutex_unlock(&dmabuf->lock); > +} > + > +/** > + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > + * @dmabuf: [in] buffer to vunmap > + * @map: [in] vmap pointer to vunmap > + */ > +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + if (WARN_ON(!dmabuf)) > + return; > + > + dma_resv_lock(dmabuf->resv, NULL); > + dma_buf_vunmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > index be6f76a30ac6..b704bdf5601a 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, > struct sg_table *sgt; > > attach = gtt->gobj->import_attach; > - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); > + sgt = dma_buf_map_attachment_locked(attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(sgt)) > return PTR_ERR(sgt); > > @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, > struct dma_buf_attachment *attach; > > attach = gtt->gobj->import_attach; > - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); > + 
dma_buf_unmap_attachment_locked(attach, ttm->sg, > + DMA_BIDIRECTIONAL); > ttm->sg = NULL; > } > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index af3b7395bf69..e9a1cd310352 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. > */ > - ret = drm_gem_vmap(buffer->gem, map); > + ret = drm_gem_vmap_unlocked(buffer->gem, map); > if (ret) > return ret; > > @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > struct iosys_map *map = &buffer->map; > > - drm_gem_vunmap(buffer->gem, map); > + drm_gem_vunmap_unlocked(buffer->gem, map); > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 7c0b025508e4..c61674887582 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, > vma->vm_ops = obj->funcs->vm_ops; > > if (obj->funcs->mmap) { > + ret = dma_resv_lock_interruptible(obj->resv, NULL); > + if (ret) > + goto err_drm_gem_object_put; > + > ret = obj->funcs->mmap(obj, vma); > + dma_resv_unlock(obj->resv); > if (ret) > goto err_drm_gem_object_put; > WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); > @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, > > int drm_gem_pin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->pin) > return obj->funcs->pin(obj); > else > @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) > > void drm_gem_unpin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->unpin) > obj->funcs->unpin(obj); > } > @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object 
*obj, struct iosys_map *map) > { > int ret; > > + dma_resv_assert_held(obj->resv); > + > if (!obj->funcs->vmap) > return -EOPNOTSUPP; > > @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); > > void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > { > + dma_resv_assert_held(obj->resv); > + > if (iosys_map_is_null(map)) > return; > > @@ -1200,6 +1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > } > EXPORT_SYMBOL(drm_gem_vunmap); > > +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + int ret; > + > + dma_resv_lock(obj->resv, NULL); > + ret = drm_gem_vmap(obj, map); > + dma_resv_unlock(obj->resv); > + > + return ret; > +} > +EXPORT_SYMBOL(drm_gem_vmap_unlocked); > + > +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + dma_resv_lock(obj->resv, NULL); > + drm_gem_vunmap(obj, map); > + dma_resv_unlock(obj->resv); > +} > +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); > + > /** > * drm_gem_lock_reservations - Sets up the ww context and acquires > * the lock on an array of GEM objects. 
> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > index f4619803acd0..a0bff53b158e 100644 > --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c > +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > iosys_map_clear(&map[i]); > continue; > } > - ret = drm_gem_vmap(obj, &map[i]); > + ret = drm_gem_vmap_unlocked(obj, &map[i]); > if (ret) > goto err_drm_gem_vunmap; > } > @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > obj = drm_gem_fb_get_obj(fb, i); > if (!obj) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > return ret; > } > @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, > continue; > if (iosys_map_is_null(&map[i])) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > } > EXPORT_SYMBOL(drm_gem_fb_vunmap); > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > index f5062d0c6333..09502d490da8 100644 > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, > struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); > void *vaddr; > > - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); > + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); > if (IS_ERR(vaddr)) > return PTR_ERR(vaddr); > > @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > > assert_object_held(obj); > > - pages = dma_buf_map_attachment(obj->base.import_attach, > - DMA_BIDIRECTIONAL); > + pages = dma_buf_map_attachment_locked(obj->base.import_attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(pages)) > return PTR_ERR(pages); > > @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > 
static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, > struct sg_table *pages) > { > - dma_buf_unmap_attachment(obj->base.import_attach, pages, > - DMA_BIDIRECTIONAL); > + dma_buf_unmap_attachment_locked(obj->base.import_attach, pages, > + DMA_BIDIRECTIONAL); > } > > static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { > diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c > index b42a657e4c2f..a64cd635fbc0 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.c > +++ b/drivers/gpu/drm/qxl/qxl_object.c > @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) > bo->map_count++; > goto out; > } > - r = ttm_bo_vmap(&bo->tbo, &bo->map); > + > + r = __qxl_bo_pin(bo); > if (r) > return r; > + > + r = ttm_bo_vmap(&bo->tbo, &bo->map); > + if (r) { > + __qxl_bo_unpin(bo); > + return r; > + } > bo->map_count = 1; > > /* TODO: Remove kptr in favor of map everywhere. */ > @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) > if (r) > return r; > > - r = __qxl_bo_pin(bo); > - if (r) { > - qxl_bo_unreserve(bo); > - return r; > - } > - > r = qxl_bo_vmap_locked(bo, map); > qxl_bo_unreserve(bo); > return r; > @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) > return; > bo->kptr = NULL; > ttm_bo_vunmap(&bo->tbo, &bo->map); > + __qxl_bo_unpin(bo); > } > > int qxl_bo_vunmap(struct qxl_bo *bo) > @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) > return r; > > qxl_bo_vunmap_locked(bo); > - __qxl_bo_unpin(bo); > qxl_bo_unreserve(bo); > return 0; > } > diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c > index 142d01415acb..9169c26357d3 100644 > --- a/drivers/gpu/drm/qxl/qxl_prime.c > +++ b/drivers/gpu/drm/qxl/qxl_prime.c > @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) > struct qxl_bo *bo = gem_to_qxl_bo(obj); > int ret; > > - ret = qxl_bo_vmap(bo, map); > + ret = 
qxl_bo_vmap_locked(bo, map); > if (ret < 0) > return ret; > > @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > > - qxl_bo_vunmap(bo); > + qxl_bo_vunmap_locked(bo); > } > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > index 678b359717c4..617062076370 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dc_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, > DMA_ATTR_SKIP_CPU_SYNC)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > - > return sgt; > } > > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > index fa69158a65b1..d2075e7078cd 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dma_sg_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to 
serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( > /* mapping to the client with new direction */ > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > - > return sgt; > } > > diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c > index 948152f1596b..3d00a7f0aac1 100644 > --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c > +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c > @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_vmalloc_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map( > /* mapping to the client with new direction */ > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > 
- > return sgt; > } > > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h > index 9d7c61a122dc..0b427939f466 100644 > --- a/include/drm/drm_gem.h > +++ b/include/drm/drm_gem.h > @@ -410,4 +410,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, > int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, > u32 handle, u64 *offset); > > +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); > +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map); > + > #endif /* __DRM_GEM_H__ */ > diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h > index 71731796c8c3..23698c6b1d1e 100644 > --- a/include/linux/dma-buf.h > +++ b/include/linux/dma-buf.h > @@ -326,15 +326,6 @@ struct dma_buf { > /** @ops: dma_buf_ops associated with this buffer object. */ > const struct dma_buf_ops *ops; > > - /** > - * @lock: > - * > - * Used internally to serialize list manipulation, attach/detach and > - * vmap/unmap. Note that in many cases this is superseeded by > - * dma_resv_lock() on @resv. > - */ > - struct mutex lock; > - > /** > * @vmapping_counter: > * > @@ -618,6 +609,11 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags); > struct dma_buf *dma_buf_get(int fd); > void dma_buf_put(struct dma_buf *dmabuf); > > +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *, > + enum dma_data_direction); > +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *, > + struct sg_table *, > + enum dma_data_direction); > struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *, > enum dma_data_direction); > void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *, ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention @ 2022-05-30 6:50 ` Christian König 0 siblings, 0 replies; 29+ messages in thread From: Christian König @ 2022-05-30 6:50 UTC (permalink / raw) To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin Cc: dri-devel, linux-kernel, virtualization, Dmitry Osipenko, linux-tegra, linux-media, linaro-mm-sig, amd-gfx, intel-gfx, kernel Hi Dmitry, First of all please separate out this patch from the rest of the series, since this is a complex separate structural change. Am 27.05.22 um 01:50 schrieb Dmitry Osipenko: > All dma-bufs have dma-reservation lock that allows drivers to perform > exclusive operations over shared dma-bufs. Today's dma-buf API has > incomplete locking specification, which creates dead lock situation > for dma-buf importers and exporters that don't coordinate theirs locks. Well please drop that sentence. The locking specifications are actually very well defined, it's just that some drivers are a bit broken regarding them. What you do here is rather moving all the non-dynamic drivers over to the dynamic locking specification (which is really nice to have). I have tried this before and failed because catching all the locks in the right code paths is very tricky. So expect some fallout from this and make sure the kernel test robot and CI systems are clean. > This patch introduces new locking convention for dma-buf users. From now > on all dma-buf importers are responsible for holding dma-buf reservation > lock around operations performed over dma-bufs. 
> > This patch implements the new dma-buf locking convention by: > > 1. Making dma-buf API functions to take the reservation lock. > > 2. Adding new locked variants of the dma-buf API functions for drivers > that need to manage imported dma-bufs under the held lock. Instead of adding new locked variants please mark all variants which expect to be called without a lock with an _unlocked postfix. This should make it easier to remove those in a follow up patch set and then fully move the locking into the importer. > 3. Converting all drivers to the new locking scheme. I have strong doubts that you got all of them. At least radeon and nouveau should grab the reservation lock in their ->attach callbacks somehow. > > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> > --- > drivers/dma-buf/dma-buf.c | 270 +++++++++++------- > drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 6 +- > drivers/gpu/drm/drm_client.c | 4 +- > drivers/gpu/drm/drm_gem.c | 33 +++ > drivers/gpu/drm/drm_gem_framebuffer_helper.c | 6 +- > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c | 10 +- > drivers/gpu/drm/qxl/qxl_object.c | 17 +- > drivers/gpu/drm/qxl/qxl_prime.c | 4 +- > .../common/videobuf2/videobuf2-dma-contig.c | 11 +- > .../media/common/videobuf2/videobuf2-dma-sg.c | 11 +- > .../common/videobuf2/videobuf2-vmalloc.c | 11 +- > include/drm/drm_gem.h | 3 + > include/linux/dma-buf.h | 14 +- > 13 files changed, 241 insertions(+), 159 deletions(-) > > diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c > index 32f55640890c..64a9909ccfa2 100644 > --- a/drivers/dma-buf/dma-buf.c > +++ b/drivers/dma-buf/dma-buf.c > @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct dma_buf_export_info *exp_info) > file->f_mode |= FMODE_LSEEK; > dmabuf->file = file; > > - mutex_init(&dmabuf->lock); Please make removing dmabuf->lock a separate change. Regards, Christian. 
> INIT_LIST_HEAD(&dmabuf->attachments); > > mutex_lock(&db_list.lock); > @@ -737,14 +736,14 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > attach->importer_ops = importer_ops; > attach->importer_priv = importer_priv; > > + dma_resv_lock(dmabuf->resv, NULL); > + > if (dmabuf->ops->attach) { > ret = dmabuf->ops->attach(dmabuf, attach); > if (ret) > goto err_attach; > } > - dma_resv_lock(dmabuf->resv, NULL); > list_add(&attach->node, &dmabuf->attachments); > - dma_resv_unlock(dmabuf->resv); > > /* When either the importer or the exporter can't handle dynamic > * mappings we cache the mapping here to avoid issues with the > @@ -755,7 +754,6 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > struct sg_table *sgt; > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_lock(attach->dmabuf->resv, NULL); > ret = dmabuf->ops->pin(attach); > if (ret) > goto err_unlock; > @@ -768,15 +766,16 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > ret = PTR_ERR(sgt); > goto err_unpin; > } > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > attach->sgt = sgt; > attach->dir = DMA_BIDIRECTIONAL; > } > > + dma_resv_unlock(dmabuf->resv); > + > return attach; > > err_attach: > + dma_resv_unlock(attach->dmabuf->resv); > kfree(attach); > return ERR_PTR(ret); > > @@ -785,10 +784,10 @@ dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev, > dmabuf->ops->unpin(attach); > > err_unlock: > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_unlock(attach->dmabuf->resv); > + dma_resv_unlock(dmabuf->resv); > > dma_buf_detach(dmabuf, attach); > + > return ERR_PTR(ret); > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF); > @@ -832,24 +831,23 @@ void dma_buf_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attach) > if (WARN_ON(!dmabuf || !attach)) > return; > > - if (attach->sgt) { > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_lock(attach->dmabuf->resv, NULL); > + if (WARN_ON(dmabuf != attach->dmabuf)) > + return; > > + dma_resv_lock(dmabuf->resv, NULL); > + > + if (attach->sgt) { > __unmap_dma_buf(attach, attach->sgt, attach->dir); > > - if (dma_buf_is_dynamic(attach->dmabuf)) { > + if (dma_buf_is_dynamic(attach->dmabuf)) > dmabuf->ops->unpin(attach); > - dma_resv_unlock(attach->dmabuf->resv); > - } > } > > - dma_resv_lock(dmabuf->resv, NULL); > list_del(&attach->node); > - dma_resv_unlock(dmabuf->resv); > if (dmabuf->ops->detach) > dmabuf->ops->detach(dmabuf, attach); > > + dma_resv_unlock(dmabuf->resv); > kfree(attach); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_detach, DMA_BUF); > @@ -906,28 +904,18 @@ void dma_buf_unpin(struct dma_buf_attachment *attach) > EXPORT_SYMBOL_NS_GPL(dma_buf_unpin, DMA_BUF); > > /** > - * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * dma_buf_map_attachment_locked - Returns the scatterlist table of the attachment; > * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. > * @attach: [in] attachment whose scatterlist is to be returned > * @direction: [in] direction of DMA transfer > * > - * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > - * on error. May return -EINTR if it is interrupted by a signal. > - * > - * On success, the DMA addresses and lengths in the returned scatterlist are > - * PAGE_SIZE aligned. > - * > - * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > - * the underlying backing storage is pinned for as long as a mapping exists, > - * therefore users/importers should not hold onto a mapping for undue amounts of > - * time. 
> + * Locked variant of dma_buf_map_attachment(). > * > - * Important: Dynamic importers must wait for the exclusive fence of the struct > - * dma_resv attached to the DMA-BUF first. > + * Caller is responsible for holding dmabuf's reservation lock. > */ > -struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > - enum dma_data_direction direction) > +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *attach, > + enum dma_data_direction direction) > { > struct sg_table *sg_table; > int r; > @@ -937,8 +925,7 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > if (WARN_ON(!attach || !attach->dmabuf)) > return ERR_PTR(-EINVAL); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt) { > /* > @@ -953,7 +940,6 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > } > > if (dma_buf_is_dynamic(attach->dmabuf)) { > - dma_resv_assert_held(attach->dmabuf->resv); > if (!IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) { > r = attach->dmabuf->ops->pin(attach); > if (r) > @@ -993,42 +979,101 @@ struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *attach, > #endif /* CONFIG_DMA_API_DEBUG */ > return sg_table; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment_locked, DMA_BUF); > > /** > - * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > - * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_map_attachment - Returns the scatterlist table of the attachment; > + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > * dma_buf_ops. 
> - * @attach: [in] attachment to unmap buffer from > - * @sg_table: [in] scatterlist info of the buffer to unmap > - * @direction: [in] direction of DMA transfer > + * @attach: [in] attachment whose scatterlist is to be returned > + * @direction: [in] direction of DMA transfer > * > - * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). > + * Returns sg_table containing the scatterlist to be returned; returns ERR_PTR > + * on error. May return -EINTR if it is interrupted by a signal. > + * > + * On success, the DMA addresses and lengths in the returned scatterlist are > + * PAGE_SIZE aligned. > + * > + * A mapping must be unmapped by using dma_buf_unmap_attachment(). Note that > + * the underlying backing storage is pinned for as long as a mapping exists, > + * therefore users/importers should not hold onto a mapping for undue amounts of > + * time. > + * > + * Important: Dynamic importers must wait for the exclusive fence of the struct > + * dma_resv attached to the DMA-BUF first. > */ > -void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > - struct sg_table *sg_table, > +struct sg_table * > +dma_buf_map_attachment(struct dma_buf_attachment *attach, > enum dma_data_direction direction) > { > + struct sg_table *sg_table; > + > might_sleep(); > > - if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > - return; > + if (WARN_ON(!attach || !attach->dmabuf)) > + return ERR_PTR(-EINVAL); > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + sg_table = dma_buf_map_attachment_locked(attach, direction); > + dma_resv_unlock(attach->dmabuf->resv); > > - if (dma_buf_attachment_is_dynamic(attach)) > - dma_resv_assert_held(attach->dmabuf->resv); > + return sg_table; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_map_attachment, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment_locked - Returns the scatterlist table of the attachment; > + * mapped into _device_ address space. Is a wrapper for map_dma_buf() of the > + * dma_buf_ops. 
> + * @attach: [in] attachment whose scatterlist is to be returned > + * @direction: [in] direction of DMA transfer > + * > + * Locked variant of dma_buf_unmap_attachment(). > + * > + * Caller is responsible for holding dmabuf's reservation lock. > + */ > +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + dma_resv_assert_held(attach->dmabuf->resv); > > if (attach->sgt == sg_table) > return; > > - if (dma_buf_is_dynamic(attach->dmabuf)) > - dma_resv_assert_held(attach->dmabuf->resv); > - > __unmap_dma_buf(attach, sg_table, direction); > > if (dma_buf_is_dynamic(attach->dmabuf) && > !IS_ENABLED(CONFIG_DMABUF_MOVE_NOTIFY)) > dma_buf_unpin(attach); > } > +EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_locked, DMA_BUF); > + > +/** > + * dma_buf_unmap_attachment - unmaps and decreases usecount of the buffer;might > + * deallocate the scatterlist associated. Is a wrapper for unmap_dma_buf() of > + * dma_buf_ops. > + * @attach: [in] attachment to unmap buffer from > + * @sg_table: [in] scatterlist info of the buffer to unmap > + * @direction: [in] direction of DMA transfer > + * > + * This unmaps a DMA mapping for @attached obtained by dma_buf_map_attachment(). 
> + */ > +void dma_buf_unmap_attachment(struct dma_buf_attachment *attach, > + struct sg_table *sg_table, > + enum dma_data_direction direction) > +{ > + might_sleep(); > + > + if (WARN_ON(!attach || !attach->dmabuf || !sg_table)) > + return; > + > + dma_resv_lock(attach->dmabuf->resv, NULL); > + dma_buf_unmap_attachment_locked(attach, sg_table, direction); > + dma_resv_unlock(attach->dmabuf->resv); > +} > EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment, DMA_BUF); > > /** > @@ -1224,6 +1269,31 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf, > } > EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > > +static int dma_buf_mmap_locked(struct dma_buf *dmabuf, > + struct vm_area_struct *vma, > + unsigned long pgoff) > +{ > + dma_resv_assert_held(dmabuf->resv); > + > + /* check if buffer supports mmap */ > + if (!dmabuf->ops->mmap) > + return -EINVAL; > + > + /* check for offset overflow */ > + if (pgoff + vma_pages(vma) < pgoff) > + return -EOVERFLOW; > + > + /* check for overflowing the buffer's size */ > + if (pgoff + vma_pages(vma) > > + dmabuf->size >> PAGE_SHIFT) > + return -EINVAL; > + > + /* readjust the vma */ > + vma_set_file(vma, dmabuf->file); > + vma->vm_pgoff = pgoff; > + > + return dmabuf->ops->mmap(dmabuf, vma); > +} > > /** > * dma_buf_mmap - Setup up a userspace mmap with the given vma > @@ -1242,29 +1312,46 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_end_cpu_access, DMA_BUF); > int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma, > unsigned long pgoff) > { > + int ret; > + > if (WARN_ON(!dmabuf || !vma)) > return -EINVAL; > > - /* check if buffer supports mmap */ > - if (!dmabuf->ops->mmap) > - return -EINVAL; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_mmap_locked(dmabuf, vma, pgoff); > + dma_resv_unlock(dmabuf->resv); > > - /* check for offset overflow */ > - if (pgoff + vma_pages(vma) < pgoff) > - return -EOVERFLOW; > + return ret; > +} > +EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > - /* check for overflowing the 
buffer's size */ > - if (pgoff + vma_pages(vma) > > - dmabuf->size >> PAGE_SHIFT) > - return -EINVAL; > +static int dma_buf_vmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + struct iosys_map ptr; > + int ret; > > - /* readjust the vma */ > - vma_set_file(vma, dmabuf->file); > - vma->vm_pgoff = pgoff; > + dma_resv_assert_held(dmabuf->resv); > > - return dmabuf->ops->mmap(dmabuf, vma); > + if (dmabuf->vmapping_counter) { > + dmabuf->vmapping_counter++; > + BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > + *map = dmabuf->vmap_ptr; > + return ret; > + } > + > + BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > + > + ret = dmabuf->ops->vmap(dmabuf, &ptr); > + if (WARN_ON_ONCE(ret)) > + return ret; > + > + dmabuf->vmap_ptr = ptr; > + dmabuf->vmapping_counter = 1; > + > + *map = dmabuf->vmap_ptr; > + > + return 0; > } > -EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > > /** > * dma_buf_vmap - Create virtual mapping for the buffer object into kernel > @@ -1284,8 +1371,7 @@ EXPORT_SYMBOL_NS_GPL(dma_buf_mmap, DMA_BUF); > */ > int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > { > - struct iosys_map ptr; > - int ret = 0; > + int ret; > > iosys_map_clear(map); > > @@ -1295,52 +1381,40 @@ int dma_buf_vmap(struct dma_buf *dmabuf, struct iosys_map *map) > if (!dmabuf->ops->vmap) > return -EINVAL; > > - mutex_lock(&dmabuf->lock); > - if (dmabuf->vmapping_counter) { > - dmabuf->vmapping_counter++; > - BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > - *map = dmabuf->vmap_ptr; > - goto out_unlock; > - } > - > - BUG_ON(iosys_map_is_set(&dmabuf->vmap_ptr)); > - > - ret = dmabuf->ops->vmap(dmabuf, &ptr); > - if (WARN_ON_ONCE(ret)) > - goto out_unlock; > - > - dmabuf->vmap_ptr = ptr; > - dmabuf->vmapping_counter = 1; > - > - *map = dmabuf->vmap_ptr; > + dma_resv_lock(dmabuf->resv, NULL); > + ret = dma_buf_vmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > > -out_unlock: > - mutex_unlock(&dmabuf->lock); > return ret; > } > 
EXPORT_SYMBOL_NS_GPL(dma_buf_vmap, DMA_BUF); > > -/** > - * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > - * @dmabuf: [in] buffer to vunmap > - * @map: [in] vmap pointer to vunmap > - */ > -void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +static void dma_buf_vunmap_locked(struct dma_buf *dmabuf, struct iosys_map *map) > { > - if (WARN_ON(!dmabuf)) > - return; > - > BUG_ON(iosys_map_is_null(&dmabuf->vmap_ptr)); > BUG_ON(dmabuf->vmapping_counter == 0); > BUG_ON(!iosys_map_is_equal(&dmabuf->vmap_ptr, map)); > > - mutex_lock(&dmabuf->lock); > if (--dmabuf->vmapping_counter == 0) { > if (dmabuf->ops->vunmap) > dmabuf->ops->vunmap(dmabuf, map); > iosys_map_clear(&dmabuf->vmap_ptr); > } > - mutex_unlock(&dmabuf->lock); > +} > + > +/** > + * dma_buf_vunmap - Unmap a vmap obtained by dma_buf_vmap. > + * @dmabuf: [in] buffer to vunmap > + * @map: [in] vmap pointer to vunmap > + */ > +void dma_buf_vunmap(struct dma_buf *dmabuf, struct iosys_map *map) > +{ > + if (WARN_ON(!dmabuf)) > + return; > + > + dma_resv_lock(dmabuf->resv, NULL); > + dma_buf_vunmap_locked(dmabuf, map); > + dma_resv_unlock(dmabuf->resv); > } > EXPORT_SYMBOL_NS_GPL(dma_buf_vunmap, DMA_BUF); > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > index be6f76a30ac6..b704bdf5601a 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c > @@ -882,7 +882,8 @@ static int amdgpu_ttm_backend_bind(struct ttm_device *bdev, > struct sg_table *sgt; > > attach = gtt->gobj->import_attach; > - sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); > + sgt = dma_buf_map_attachment_locked(attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(sgt)) > return PTR_ERR(sgt); > > @@ -1007,7 +1008,8 @@ static void amdgpu_ttm_backend_unbind(struct ttm_device *bdev, > struct dma_buf_attachment *attach; > > attach = gtt->gobj->import_attach; > - dma_buf_unmap_attachment(attach, ttm->sg, DMA_BIDIRECTIONAL); > + 
dma_buf_unmap_attachment_locked(attach, ttm->sg, > + DMA_BIDIRECTIONAL); > ttm->sg = NULL; > } > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index af3b7395bf69..e9a1cd310352 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -323,7 +323,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer, > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. > */ > - ret = drm_gem_vmap(buffer->gem, map); > + ret = drm_gem_vmap_unlocked(buffer->gem, map); > if (ret) > return ret; > > @@ -345,7 +345,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > struct iosys_map *map = &buffer->map; > > - drm_gem_vunmap(buffer->gem, map); > + drm_gem_vunmap_unlocked(buffer->gem, map); > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 7c0b025508e4..c61674887582 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -1053,7 +1053,12 @@ int drm_gem_mmap_obj(struct drm_gem_object *obj, unsigned long obj_size, > vma->vm_ops = obj->funcs->vm_ops; > > if (obj->funcs->mmap) { > + ret = dma_resv_lock_interruptible(obj->resv, NULL); > + if (ret) > + goto err_drm_gem_object_put; > + > ret = obj->funcs->mmap(obj, vma); > + dma_resv_unlock(obj->resv); > if (ret) > goto err_drm_gem_object_put; > WARN_ON(!(vma->vm_flags & VM_DONTEXPAND)); > @@ -1158,6 +1163,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, > > int drm_gem_pin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->pin) > return obj->funcs->pin(obj); > else > @@ -1166,6 +1173,8 @@ int drm_gem_pin(struct drm_gem_object *obj) > > void drm_gem_unpin(struct drm_gem_object *obj) > { > + dma_resv_assert_held(obj->resv); > + > if (obj->funcs->unpin) > obj->funcs->unpin(obj); > } > @@ -1174,6 +1183,8 @@ int drm_gem_vmap(struct drm_gem_object 
*obj, struct iosys_map *map) > { > int ret; > > + dma_resv_assert_held(obj->resv); > + > if (!obj->funcs->vmap) > return -EOPNOTSUPP; > > @@ -1189,6 +1200,8 @@ EXPORT_SYMBOL(drm_gem_vmap); > > void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > { > + dma_resv_assert_held(obj->resv); > + > if (iosys_map_is_null(map)) > return; > > @@ -1200,6 +1213,26 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map) > } > EXPORT_SYMBOL(drm_gem_vunmap); > > +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + int ret; > + > + dma_resv_lock(obj->resv, NULL); > + ret = drm_gem_vmap(obj, map); > + dma_resv_unlock(obj->resv); > + > + return ret; > +} > +EXPORT_SYMBOL(drm_gem_vmap_unlocked); > + > +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map) > +{ > + dma_resv_lock(obj->resv, NULL); > + drm_gem_vunmap(obj, map); > + dma_resv_unlock(obj->resv); > +} > +EXPORT_SYMBOL(drm_gem_vunmap_unlocked); > + > /** > * drm_gem_lock_reservations - Sets up the ww context and acquires > * the lock on an array of GEM objects. 
> diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > index f4619803acd0..a0bff53b158e 100644 > --- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c > +++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c > @@ -348,7 +348,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > iosys_map_clear(&map[i]); > continue; > } > - ret = drm_gem_vmap(obj, &map[i]); > + ret = drm_gem_vmap_unlocked(obj, &map[i]); > if (ret) > goto err_drm_gem_vunmap; > } > @@ -370,7 +370,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, > obj = drm_gem_fb_get_obj(fb, i); > if (!obj) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > return ret; > } > @@ -398,7 +398,7 @@ void drm_gem_fb_vunmap(struct drm_framebuffer *fb, > continue; > if (iosys_map_is_null(&map[i])) > continue; > - drm_gem_vunmap(obj, &map[i]); > + drm_gem_vunmap_unlocked(obj, &map[i]); > } > } > EXPORT_SYMBOL(drm_gem_fb_vunmap); > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > index f5062d0c6333..09502d490da8 100644 > --- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > +++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c > @@ -72,7 +72,7 @@ static int i915_gem_dmabuf_vmap(struct dma_buf *dma_buf, > struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf); > void *vaddr; > > - vaddr = i915_gem_object_pin_map_unlocked(obj, I915_MAP_WB); > + vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB); > if (IS_ERR(vaddr)) > return PTR_ERR(vaddr); > > @@ -241,8 +241,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > > assert_object_held(obj); > > - pages = dma_buf_map_attachment(obj->base.import_attach, > - DMA_BIDIRECTIONAL); > + pages = dma_buf_map_attachment_locked(obj->base.import_attach, > + DMA_BIDIRECTIONAL); > if (IS_ERR(pages)) > return PTR_ERR(pages); > > @@ -270,8 +270,8 @@ static int i915_gem_object_get_pages_dmabuf(struct drm_i915_gem_object *obj) > 
static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj, > struct sg_table *pages) > { > - dma_buf_unmap_attachment(obj->base.import_attach, pages, > - DMA_BIDIRECTIONAL); > + dma_buf_unmap_attachment_locked(obj->base.import_attach, pages, > + DMA_BIDIRECTIONAL); > } > > static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = { > diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c > index b42a657e4c2f..a64cd635fbc0 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.c > +++ b/drivers/gpu/drm/qxl/qxl_object.c > @@ -168,9 +168,16 @@ int qxl_bo_vmap_locked(struct qxl_bo *bo, struct iosys_map *map) > bo->map_count++; > goto out; > } > - r = ttm_bo_vmap(&bo->tbo, &bo->map); > + > + r = __qxl_bo_pin(bo); > if (r) > return r; > + > + r = ttm_bo_vmap(&bo->tbo, &bo->map); > + if (r) { > + __qxl_bo_unpin(bo); > + return r; > + } > bo->map_count = 1; > > /* TODO: Remove kptr in favor of map everywhere. */ > @@ -192,12 +199,6 @@ int qxl_bo_vmap(struct qxl_bo *bo, struct iosys_map *map) > if (r) > return r; > > - r = __qxl_bo_pin(bo); > - if (r) { > - qxl_bo_unreserve(bo); > - return r; > - } > - > r = qxl_bo_vmap_locked(bo, map); > qxl_bo_unreserve(bo); > return r; > @@ -247,6 +248,7 @@ void qxl_bo_vunmap_locked(struct qxl_bo *bo) > return; > bo->kptr = NULL; > ttm_bo_vunmap(&bo->tbo, &bo->map); > + __qxl_bo_unpin(bo); > } > > int qxl_bo_vunmap(struct qxl_bo *bo) > @@ -258,7 +260,6 @@ int qxl_bo_vunmap(struct qxl_bo *bo) > return r; > > qxl_bo_vunmap_locked(bo); > - __qxl_bo_unpin(bo); > qxl_bo_unreserve(bo); > return 0; > } > diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c > index 142d01415acb..9169c26357d3 100644 > --- a/drivers/gpu/drm/qxl/qxl_prime.c > +++ b/drivers/gpu/drm/qxl/qxl_prime.c > @@ -59,7 +59,7 @@ int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map) > struct qxl_bo *bo = gem_to_qxl_bo(obj); > int ret; > > - ret = qxl_bo_vmap(bo, map); > + ret = 
qxl_bo_vmap_locked(bo, map); > if (ret < 0) > return ret; > > @@ -71,5 +71,5 @@ void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > > - qxl_bo_vunmap(bo); > + qxl_bo_vunmap_locked(bo); > } > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > index 678b359717c4..617062076370 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c > @@ -382,18 +382,12 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dc_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to serialize map/unmap operations */ > - struct mutex *lock = &db_attach->dmabuf->lock; > struct sg_table *sgt; > > - mutex_lock(lock); > - > sgt = &attach->sgt; > /* return previously mapped sg table */ > - if (attach->dma_dir == dma_dir) { > - mutex_unlock(lock); > + if (attach->dma_dir == dma_dir) > return sgt; > - } > > /* release any previous cache */ > if (attach->dma_dir != DMA_NONE) { > @@ -409,14 +403,11 @@ static struct sg_table *vb2_dc_dmabuf_ops_map( > if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, > DMA_ATTR_SKIP_CPU_SYNC)) { > pr_err("failed to map scatterlist\n"); > - mutex_unlock(lock); > return ERR_PTR(-EIO); > } > > attach->dma_dir = dma_dir; > > - mutex_unlock(lock); > - > return sgt; > } > > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > index fa69158a65b1..d2075e7078cd 100644 > --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c > +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c > @@ -424,18 +424,12 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map( > struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir) > { > struct vb2_dma_sg_attachment *attach = db_attach->priv; > - /* stealing dmabuf mutex to 
serialize map/unmap operations */
> -	struct mutex *lock = &db_attach->dmabuf->lock;
> 	struct sg_table *sgt;
> 
> -	mutex_lock(lock);
> -
> 	sgt = &attach->sgt;
> 	/* return previously mapped sg table */
> -	if (attach->dma_dir == dma_dir) {
> -		mutex_unlock(lock);
> +	if (attach->dma_dir == dma_dir)
> 		return sgt;
> -	}
> 
> 	/* release any previous cache */
> 	if (attach->dma_dir != DMA_NONE) {
> @@ -446,14 +440,11 @@ static struct sg_table *vb2_dma_sg_dmabuf_ops_map(
> 	/* mapping to the client with new direction */
> 	if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
> 		pr_err("failed to map scatterlist\n");
> -		mutex_unlock(lock);
> 		return ERR_PTR(-EIO);
> 	}
> 
> 	attach->dma_dir = dma_dir;
> 
> -	mutex_unlock(lock);
> -
> 	return sgt;
> }
> 
> diff --git a/drivers/media/common/videobuf2/videobuf2-vmalloc.c b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> index 948152f1596b..3d00a7f0aac1 100644
> --- a/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> +++ b/drivers/media/common/videobuf2/videobuf2-vmalloc.c
> @@ -267,18 +267,12 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
> 	struct dma_buf_attachment *db_attach, enum dma_data_direction dma_dir)
> {
> 	struct vb2_vmalloc_attachment *attach = db_attach->priv;
> -	/* stealing dmabuf mutex to serialize map/unmap operations */
> -	struct mutex *lock = &db_attach->dmabuf->lock;
> 	struct sg_table *sgt;
> 
> -	mutex_lock(lock);
> -
> 	sgt = &attach->sgt;
> 	/* return previously mapped sg table */
> -	if (attach->dma_dir == dma_dir) {
> -		mutex_unlock(lock);
> +	if (attach->dma_dir == dma_dir)
> 		return sgt;
> -	}
> 
> 	/* release any previous cache */
> 	if (attach->dma_dir != DMA_NONE) {
> @@ -289,14 +283,11 @@ static struct sg_table *vb2_vmalloc_dmabuf_ops_map(
> 	/* mapping to the client with new direction */
> 	if (dma_map_sgtable(db_attach->dev, sgt, dma_dir, 0)) {
> 		pr_err("failed to map scatterlist\n");
> -		mutex_unlock(lock);
> 		return ERR_PTR(-EIO);
> 	}
> 
> 	attach->dma_dir = dma_dir;
> 
> -	mutex_unlock(lock);
> -
> 	return sgt;
> }
> 
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index 9d7c61a122dc..0b427939f466 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -410,4 +410,7 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count,
> int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
> 			    u32 handle, u64 *offset);
> 
> +int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
> +void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map);
> +
> #endif /* __DRM_GEM_H__ */
> diff --git a/include/linux/dma-buf.h b/include/linux/dma-buf.h
> index 71731796c8c3..23698c6b1d1e 100644
> --- a/include/linux/dma-buf.h
> +++ b/include/linux/dma-buf.h
> @@ -326,15 +326,6 @@ struct dma_buf {
> 	/** @ops: dma_buf_ops associated with this buffer object. */
> 	const struct dma_buf_ops *ops;
> 
> -	/**
> -	 * @lock:
> -	 *
> -	 * Used internally to serialize list manipulation, attach/detach and
> -	 * vmap/unmap. Note that in many cases this is superseeded by
> -	 * dma_resv_lock() on @resv.
> -	 */
> -	struct mutex lock;
> -
> 	/**
> 	 * @vmapping_counter:
> 	 *
> @@ -618,6 +609,11 @@ int dma_buf_fd(struct dma_buf *dmabuf, int flags);
> struct dma_buf *dma_buf_get(int fd);
> void dma_buf_put(struct dma_buf *dmabuf);
> 
> +struct sg_table *dma_buf_map_attachment_locked(struct dma_buf_attachment *,
> +					       enum dma_data_direction);
> +void dma_buf_unmap_attachment_locked(struct dma_buf_attachment *,
> +				     struct sg_table *,
> +				     enum dma_data_direction);
> struct sg_table *dma_buf_map_attachment(struct dma_buf_attachment *,
> 					enum dma_data_direction);
> void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-05-30  6:50             ` Christian König
@ 2022-05-30 13:26               ` Dmitry Osipenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-05-30 13:26 UTC (permalink / raw)
To: Christian König, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
	Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
	Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
	Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
	amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media

Hello Christian,

On 5/30/22 09:50, Christian König wrote:
> Hi Dmitry,
>
> First of all please separate out this patch from the rest of the series,
> since this is a complex separate structural change.

I assume all the patches will go via the DRM tree in the end, since the
rest of the DRM patches in this series depend on this dma-buf change.
But I see that separation may ease reviewing of the dma-buf changes, so
let's try it.

> Am 27.05.22 um 01:50 schrieb Dmitry Osipenko:
>> All dma-bufs have dma-reservation lock that allows drivers to perform
>> exclusive operations over shared dma-bufs. Today's dma-buf API has
>> incomplete locking specification, which creates dead lock situation
>> for dma-buf importers and exporters that don't coordinate theirs locks.
>
> Well please drop that sentence. The locking specifications are actually
> very well defined, it's just that some drivers are a bit broken
> regarding them.
>
> What you do here is rather moving all the non-dynamic drivers over to
> the dynamic locking specification (which is really nice to have).

Indeed, that will be a better description, thank you! I'll update it.

> I have tried this before and failed because catching all the locks in
> the right code paths is very tricky. So expect some fallout from this
> and make sure the kernel test robot and CI systems are clean.

Sure, I'll fix up all the reported things in the next iteration.

BTW, have you ever posted your version of the patch? It will be great if
we could compare the changed code paths.

>> This patch introduces new locking convention for dma-buf users. From now
>> on all dma-buf importers are responsible for holding dma-buf reservation
>> lock around operations performed over dma-bufs.
>>
>> This patch implements the new dma-buf locking convention by:
>>
>> 1. Making dma-buf API functions to take the reservation lock.
>>
>> 2. Adding new locked variants of the dma-buf API functions for drivers
>> that need to manage imported dma-bufs under the held lock.
>
> Instead of adding new locked variants please mark all variants which
> expect to be called without a lock with an _unlocked postfix.
>
> This should make it easier to remove those in a follow up patch set and
> then fully move the locking into the importer.

Do we really want to move all the locks to the importers? It seems the
majority of drivers should be happy with the dma-buf helpers handling
the locking for them.

>> 3. Converting all drivers to the new locking scheme.
>
> I have strong doubts that you got all of them. At least radeon and
> nouveau should grab the reservation lock in their ->attach callbacks
> somehow.

Radeon and Nouveau use gem_prime_import_sg_table() and they take the
resv lock already, so they should be okay (?)

I assume all the basics should be covered in this v6. At minimum the
Intel, Tegra, Panfrost, Lima and Rockchip drivers should be good. If I
missed something, then please let me know and I'll correct it.

>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>> ---
>>   drivers/dma-buf/dma-buf.c                     | 270 +++++++++++-------
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   6 +-
>>   drivers/gpu/drm/drm_client.c                  |   4 +-
>>   drivers/gpu/drm/drm_gem.c                     |  33 +++
>>   drivers/gpu/drm/drm_gem_framebuffer_helper.c  |   6 +-
>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  10 +-
>>   drivers/gpu/drm/qxl/qxl_object.c              |  17 +-
>>   drivers/gpu/drm/qxl/qxl_prime.c               |   4 +-
>>   .../common/videobuf2/videobuf2-dma-contig.c   |  11 +-
>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  11 +-
>>   .../common/videobuf2/videobuf2-vmalloc.c      |  11 +-
>>   include/drm/drm_gem.h                         |   3 +
>>   include/linux/dma-buf.h                       |  14 +-
>>   13 files changed, 241 insertions(+), 159 deletions(-)
>>
>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>> index 32f55640890c..64a9909ccfa2 100644
>> --- a/drivers/dma-buf/dma-buf.c
>> +++ b/drivers/dma-buf/dma-buf.c
>> @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct
>> dma_buf_export_info *exp_info)
>> 	file->f_mode |= FMODE_LSEEK;
>> 	dmabuf->file = file;
>> -	mutex_init(&dmabuf->lock);
>
> Please make removing dmabuf->lock a separate change.

Alright

-- 
Best regards,
Dmitry

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-05-30 13:26               ` Dmitry Osipenko
  (?)
@ 2022-05-30 13:41               ` Christian König
  -1 siblings, 0 replies; 29+ messages in thread
From: Christian König via Virtualization @ 2022-05-30 13:41 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
	Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
	Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
	Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
	amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media

Hi Dmitry,

Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
> Hello Christian,
>
> On 5/30/22 09:50, Christian König wrote:
>> Hi Dmitry,
>>
>> First of all please separate out this patch from the rest of the series,
>> since this is a complex separate structural change.
> I assume all the patches will go via the DRM tree in the end, since the
> rest of the DRM patches in this series depend on this dma-buf change.
> But I see that separation may ease reviewing of the dma-buf changes, so
> let's try it.

That sounds like you are underestimating a bit how much trouble this
will be.

>> I have tried this before and failed because catching all the locks in
>> the right code paths is very tricky. So expect some fallout from this
>> and make sure the kernel test robot and CI systems are clean.
> Sure, I'll fix up all the reported things in the next iteration.
>
> BTW, have you ever posted your version of the patch? It will be great if
> we could compare the changed code paths.

No, I never even finished creating it after realizing how much work it
would be.

>>> This patch introduces new locking convention for dma-buf users. From now
>>> on all dma-buf importers are responsible for holding dma-buf reservation
>>> lock around operations performed over dma-bufs.
>>>
>>> This patch implements the new dma-buf locking convention by:
>>>
>>> 1. Making dma-buf API functions to take the reservation lock.
>>>
>>> 2. Adding new locked variants of the dma-buf API functions for drivers
>>> that need to manage imported dma-bufs under the held lock.
>> Instead of adding new locked variants please mark all variants which
>> expect to be called without a lock with an _unlocked postfix.
>>
>> This should make it easier to remove those in a follow up patch set and
>> then fully move the locking into the importer.
> Do we really want to move all the locks to the importers? It seems the
> majority of drivers should be happy with the dma-buf helpers handling
> the locking for them.

Yes, I clearly think so.

>>> 3. Converting all drivers to the new locking scheme.
>> I have strong doubts that you got all of them. At least radeon and
>> nouveau should grab the reservation lock in their ->attach callbacks
>> somehow.
> Radeon and Nouveau use gem_prime_import_sg_table() and they take the
> resv lock already, so they should be okay (?)

You are looking at the wrong side. You need to fix the export code path,
not the import one.

See for example how attach works on radeon:
drm_gem_map_attach->drm_gem_pin->radeon_gem_prime_pin->radeon_bo_reserve->ttm_bo_reserve->dma_resv_lock.

Same for nouveau and probably a few other exporters as well. That will
certainly cause a deadlock if you don't fix it.

I strongly suggest doing this step by step: first attach/detach and then
the rest.

Regards,
Christian.

> I assume all the basics should be covered in this v6. At minimum the
> Intel, Tegra, Panfrost, Lima and Rockchip drivers should be good. If I
> missed something, then please let me know and I'll correct it.
>
>>> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
>>> ---
>>>   drivers/dma-buf/dma-buf.c                     | 270 +++++++++++-------
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |   6 +-
>>>   drivers/gpu/drm/drm_client.c                  |   4 +-
>>>   drivers/gpu/drm/drm_gem.c                     |  33 +++
>>>   drivers/gpu/drm/drm_gem_framebuffer_helper.c  |   6 +-
>>>   drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c    |  10 +-
>>>   drivers/gpu/drm/qxl/qxl_object.c              |  17 +-
>>>   drivers/gpu/drm/qxl/qxl_prime.c               |   4 +-
>>>   .../common/videobuf2/videobuf2-dma-contig.c   |  11 +-
>>>   .../media/common/videobuf2/videobuf2-dma-sg.c |  11 +-
>>>   .../common/videobuf2/videobuf2-vmalloc.c      |  11 +-
>>>   include/drm/drm_gem.h                         |   3 +
>>>   include/linux/dma-buf.h                       |  14 +-
>>>   13 files changed, 241 insertions(+), 159 deletions(-)
>>>
>>> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
>>> index 32f55640890c..64a9909ccfa2 100644
>>> --- a/drivers/dma-buf/dma-buf.c
>>> +++ b/drivers/dma-buf/dma-buf.c
>>> @@ -552,7 +552,6 @@ struct dma_buf *dma_buf_export(const struct
>>> dma_buf_export_info *exp_info)
>>> 	file->f_mode |= FMODE_LSEEK;
>>> 	dmabuf->file = file;
>>> -	mutex_init(&dmabuf->lock);
>> Please make removing dmabuf->lock a separate change.
> Alright
>

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-05-30 13:41                 ` Christian König
@ 2022-05-30 13:57                   ` Dmitry Osipenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-05-30 13:57 UTC (permalink / raw)
To: Christian König, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
	Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
	Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
	Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
	amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media

On 5/30/22 16:41, Christian König wrote:
> Hi Dmitry,
>
> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>> Hello Christian,
>>
>> On 5/30/22 09:50, Christian König wrote:
>>> Hi Dmitry,
>>>
>>> First of all please separate out this patch from the rest of the series,
>>> since this is a complex separate structural change.
>> I assume all the patches will go via the DRM tree in the end, since the
>> rest of the DRM patches in this series depend on this dma-buf change.
>> But I see that separation may ease reviewing of the dma-buf changes, so
>> let's try it.
>
> That sounds like you are underestimating a bit how much trouble this
> will be.
>
>>> I have tried this before and failed because catching all the locks in
>>> the right code paths is very tricky. So expect some fallout from this
>>> and make sure the kernel test robot and CI systems are clean.
>> Sure, I'll fix up all the reported things in the next iteration.
>>
>> BTW, have you ever posted your version of the patch? It will be great if
>> we could compare the changed code paths.
>
> No, I never even finished creating it after realizing how much work it
> would be.
>
>>>> This patch introduces new locking convention for dma-buf users. From
>>>> now
>>>> on all dma-buf importers are responsible for holding dma-buf
>>>> reservation
>>>> lock around operations performed over dma-bufs.
>>>>
>>>> This patch implements the new dma-buf locking convention by:
>>>>
>>>> 1. Making dma-buf API functions to take the reservation lock.
>>>>
>>>> 2. Adding new locked variants of the dma-buf API functions for
>>>> drivers
>>>> that need to manage imported dma-bufs under the held lock.
>>> Instead of adding new locked variants please mark all variants which
>>> expect to be called without a lock with an _unlocked postfix.
>>>
>>> This should make it easier to remove those in a follow up patch set and
>>> then fully move the locking into the importer.
>> Do we really want to move all the locks to the importers? It seems the
>> majority of drivers should be happy with the dma-buf helpers handling
>> the locking for them.
>
> Yes, I clearly think so.
>
>>>> 3. Converting all drivers to the new locking scheme.
>>> I have strong doubts that you got all of them. At least radeon and
>>> nouveau should grab the reservation lock in their ->attach callbacks
>>> somehow.
>> Radeon and Nouveau use gem_prime_import_sg_table() and they take the
>> resv lock already, so they should be okay (?)
>
> You are looking at the wrong side. You need to fix the export code path,
> not the import one.
>
> See for example how attach works on radeon:
> drm_gem_map_attach->drm_gem_pin->radeon_gem_prime_pin->radeon_bo_reserve->ttm_bo_reserve->dma_resv_lock.

Yeah, I was looking at both sides, but missed this one.

> Same for nouveau and probably a few other exporters as well. That will
> certainly cause a deadlock if you don't fix it.
>
> I strongly suggest doing this step by step: first attach/detach and then
> the rest.

Thank you very much for the suggestions. I'll implement them in the next
version.

-- 
Best regards,
Dmitry

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
@ 2022-05-30 13:57 ` Dmitry Osipenko
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-05-30 13:57 UTC (permalink / raw)
To: Christian König, David Airlie, Gerd Hoffmann, Gurchetan Singh,
  Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
  Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
  Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
  Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu,
  Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
  Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula,
  Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: dri-devel, linux-kernel, virtualization, Dmitry Osipenko,
  linux-tegra, linux-media, linaro-mm-sig, amd-gfx, intel-gfx, kernel

On 5/30/22 16:41, Christian König wrote:
> Hi Dmitry,
>
> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>> Hello Christian,
>>
>> On 5/30/22 09:50, Christian König wrote:
>>> Hi Dmitry,
>>>
>>> First of all please separate out this patch from the rest of the
>>> series, since this is a complex separate structural change.
>> I assume all the patches will go via the DRM tree in the end since
>> the rest of the DRM patches in this series depend on this dma-buf
>> change. But I see that separation may ease reviewing of the dma-buf
>> changes, so let's try it.
>
> That sounds like you are underestimating a bit how much trouble this
> will be.
>
>>> I have tried this before and failed because catching all the locks
>>> in the right code paths is very tricky. So expect some fallout from
>>> this and make sure the kernel test robot and CI systems are clean.
>> Sure, I'll fix up all the reported things in the next iteration.
>>
>> BTW, have you ever posted your version of the patch? It would be
>> great if we could compare the changed code paths.
>
> No, I never even finished creating it after realizing how much work
> it would be.
>
>>>> This patch introduces new locking convention for dma-buf users.
>>>> From now on all dma-buf importers are responsible for holding
>>>> dma-buf reservation lock around operations performed over
>>>> dma-bufs.
>>>>
>>>> This patch implements the new dma-buf locking convention by:
>>>>
>>>>   1. Making dma-buf API functions to take the reservation lock.
>>>>
>>>>   2. Adding new locked variants of the dma-buf API functions for
>>>>      drivers that need to manage imported dma-bufs under the held
>>>>      lock.
>>> Instead of adding new locked variants please mark all variants
>>> which expect to be called without a lock with an _unlocked postfix.
>>>
>>> This should make it easier to remove those in a follow up patch set
>>> and then fully move the locking into the importer.
>> Do we really want to move all the locks to the importers? Seems the
>> majority of drivers should be happy with the dma-buf helpers
>> handling the locking for them.
>
> Yes, I clearly think so.
>
>>>>   3. Converting all drivers to the new locking scheme.
>>> I have strong doubts that you got all of them. At least radeon and
>>> nouveau should grab the reservation lock in their ->attach
>>> callbacks somehow.
>> Radeon and Nouveau use gem_prime_import_sg_table() and they take
>> resv lock already, seems they should be okay (?)
>
> You are looking at the wrong side. You need to fix the export code
> path, not the import ones.
>
> See for example how attach on radeon works:
>
>   drm_gem_map_attach -> drm_gem_pin -> radeon_gem_prime_pin ->
>   radeon_bo_reserve -> ttm_bo_reserve -> dma_resv_lock

Yeah, I was looking at both sides, but missed this one.

> Same for nouveau and probably a few other exporters as well. That
> will certainly cause a deadlock if you don't fix it.
>
> I strongly suggest to do this step by step, first attach/detach and
> then the rest.

Thank you very much for the suggestions. I'll implement them in the
next version.

-- 
Best regards,
Dmitry

^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-05-30 13:57 ` Dmitry Osipenko
@ 2022-06-28 21:26   ` Thomas Hellström (Intel)
  0 siblings, 0 replies; 29+ messages in thread
From: Thomas Hellström (Intel) @ 2022-06-28 21:26 UTC (permalink / raw)
To: Dmitry Osipenko, Christian König, David Airlie, Gerd Hoffmann,
  Gurchetan Singh, Chia-I Wu, Daniel Vetter, Daniel Almeida,
  Gert Wollny, Gustavo Padovan, Daniel Stone, Tomeu Vizoso,
  Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Rob Herring,
  Steven Price, Alyssa Rosenzweig, Rob Clark, Emil Velikov,
  Robin Murphy, Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding,
  Tomasz Figa, Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
  Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Cc: intel-gfx, linux-kernel, amd-gfx, virtualization, linaro-mm-sig,
  dri-devel, linux-tegra, Dmitry Osipenko, kernel, linux-media

On 5/30/22 15:57, Dmitry Osipenko wrote:
> On 5/30/22 16:41, Christian König wrote:
>> Hi Dmitry,
>>
>> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>>> Hello Christian,
>>>
>>> On 5/30/22 09:50, Christian König wrote:
>>>> Hi Dmitry,
>>>>
>>>> First of all please separate out this patch from the rest of the
>>>> series, since this is a complex separate structural change.
>>> I assume all the patches will go via the DRM tree in the end since
>>> the rest of the DRM patches in this series depend on this dma-buf
>>> change. But I see that separation may ease reviewing of the dma-buf
>>> changes, so let's try it.
>> That sounds like you are underestimating a bit how much trouble this
>> will be.
>>
>>>> I have tried this before and failed because catching all the locks
>>>> in the right code paths is very tricky. So expect some fallout
>>>> from this and make sure the kernel test robot and CI systems are
>>>> clean.
>>> Sure, I'll fix up all the reported things in the next iteration.
>>>
>>> BTW, have you ever posted your version of the patch? It would be
>>> great if we could compare the changed code paths.
>> No, I never even finished creating it after realizing how much work
>> it would be.
>>
>>>>> This patch introduces new locking convention for dma-buf users.
>>>>> From now on all dma-buf importers are responsible for holding
>>>>> dma-buf reservation lock around operations performed over
>>>>> dma-bufs.
>>>>>
>>>>> This patch implements the new dma-buf locking convention by:
>>>>>
>>>>>   1. Making dma-buf API functions to take the reservation lock.
>>>>>
>>>>>   2. Adding new locked variants of the dma-buf API functions for
>>>>>      drivers that need to manage imported dma-bufs under the held
>>>>>      lock.
>>>> Instead of adding new locked variants please mark all variants
>>>> which expect to be called without a lock with an _unlocked
>>>> postfix.
>>>>
>>>> This should make it easier to remove those in a follow up patch
>>>> set and then fully move the locking into the importer.
>>> Do we really want to move all the locks to the importers? Seems the
>>> majority of drivers should be happy with the dma-buf helpers
>>> handling the locking for them.
>> Yes, I clearly think so.
>>
>>>>>   3. Converting all drivers to the new locking scheme.
>>>> I have strong doubts that you got all of them. At least radeon and
>>>> nouveau should grab the reservation lock in their ->attach
>>>> callbacks somehow.
>>> Radeon and Nouveau use gem_prime_import_sg_table() and they take
>>> resv lock already, seems they should be okay (?)
>> You are looking at the wrong side. You need to fix the export code
>> path, not the import ones.
>>
>> See for example how attach on radeon works:
>>
>>   drm_gem_map_attach -> drm_gem_pin -> radeon_gem_prime_pin ->
>>   radeon_bo_reserve -> ttm_bo_reserve -> dma_resv_lock
> Yeah, I was looking at both sides, but missed this one.

Also i915 will run into trouble with attach. In particular since i915
starts a full ww transaction in its attach callback to be able to lock
other objects if migration is needed. I think i915 CI would catch this
in a selftest.

Perhaps it's worthwhile to take a step back and figure out, if the
importer is required to lock, which callbacks might need a ww acquire
context?

(And off-topic: since we do a lot of fancy stuff under dma-resv locks,
including waiting for fences and other locks, IMO taking these locks
uninterruptibly should ring a warning bell.)

/Thomas

>
>> Same for nouveau and probably a few other exporters as well. That
>> will certainly cause a deadlock if you don't fix it.
>>
>> I strongly suggest to do this step by step, first attach/detach and
>> then the rest.
> Thank you very much for the suggestions. I'll implement them in the
> next version.
>

^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-06-28 21:26 ` Thomas Hellström (Intel)
@ 2022-07-01 10:43   ` Dmitry Osipenko
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-07-01 10:43 UTC (permalink / raw)
To: Thomas Hellström (Intel), Christian König, David Airlie,
  Gerd Hoffmann, Gurchetan Singh, Chia-I Wu, Daniel Vetter,
  Daniel Almeida, Gert Wollny, Gustavo Padovan, Daniel Stone,
  Tomeu Vizoso, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
  Rob Herring, Steven Price, Alyssa Rosenzweig, Rob Clark,
  Emil Velikov, Robin Murphy, Qiang Yu, Sumit Semwal, Pan, Xinhui,
  Thierry Reding, Tomasz Figa, Marek Szyprowski,
  Mauro Carvalho Chehab, Alex Deucher, Jani Nikula, Joonas Lahtinen,
  Rodrigo Vivi, Tvrtko Ursulin
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
  amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media

On 6/29/22 00:26, Thomas Hellström (Intel) wrote:
>
> On 5/30/22 15:57, Dmitry Osipenko wrote:
>> On 5/30/22 16:41, Christian König wrote:
>>> Hi Dmitry,
>>>
>>> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>>>> Hello Christian,
>>>>
>>>> On 5/30/22 09:50, Christian König wrote:
>>>>> Hi Dmitry,
>>>>>
>>>>> First of all please separate out this patch from the rest of the
>>>>> series, since this is a complex separate structural change.
>>>> I assume all the patches will go via the DRM tree in the end since
>>>> the rest of the DRM patches in this series depend on this dma-buf
>>>> change. But I see that separation may ease reviewing of the
>>>> dma-buf changes, so let's try it.
>>> That sounds like you are underestimating a bit how much trouble
>>> this will be.
>>>
>>>>> I have tried this before and failed because catching all the
>>>>> locks in the right code paths is very tricky. So expect some
>>>>> fallout from this and make sure the kernel test robot and CI
>>>>> systems are clean.
>>>> Sure, I'll fix up all the reported things in the next iteration.
>>>>
>>>> BTW, have you ever posted your version of the patch? It would be
>>>> great if we could compare the changed code paths.
>>> No, I never even finished creating it after realizing how much
>>> work it would be.
>>>
>>>>>> This patch introduces new locking convention for dma-buf users.
>>>>>> From now on all dma-buf importers are responsible for holding
>>>>>> dma-buf reservation lock around operations performed over
>>>>>> dma-bufs.
>>>>>>
>>>>>> This patch implements the new dma-buf locking convention by:
>>>>>>
>>>>>>   1. Making dma-buf API functions to take the reservation lock.
>>>>>>
>>>>>>   2. Adding new locked variants of the dma-buf API functions
>>>>>>      for drivers that need to manage imported dma-bufs under
>>>>>>      the held lock.
>>>>> Instead of adding new locked variants please mark all variants
>>>>> which expect to be called without a lock with an _unlocked
>>>>> postfix.
>>>>>
>>>>> This should make it easier to remove those in a follow up patch
>>>>> set and then fully move the locking into the importer.
>>>> Do we really want to move all the locks to the importers? Seems
>>>> the majority of drivers should be happy with the dma-buf helpers
>>>> handling the locking for them.
>>> Yes, I clearly think so.
>>>
>>>>>>   3. Converting all drivers to the new locking scheme.
>>>>> I have strong doubts that you got all of them. At least radeon
>>>>> and nouveau should grab the reservation lock in their ->attach
>>>>> callbacks somehow.
>>>> Radeon and Nouveau use gem_prime_import_sg_table() and they take
>>>> resv lock already, seems they should be okay (?)
>>> You are looking at the wrong side. You need to fix the export code
>>> path, not the import ones.
>>>
>>> See for example how attach on radeon works:
>>>
>>>   drm_gem_map_attach -> drm_gem_pin -> radeon_gem_prime_pin ->
>>>   radeon_bo_reserve -> ttm_bo_reserve -> dma_resv_lock
>> Yeah, I was looking at both sides, but missed this one.
>
> Also i915 will run into trouble with attach. In particular since
> i915 starts a full ww transaction in its attach callback to be able
> to lock other objects if migration is needed. I think i915 CI would
> catch this in a selftest.

Seems it indeed should deadlock. But the i915 selftests apparently
should've caught it and they didn't, I'll re-check what happened.

> Perhaps it's worthwhile to take a step back and figure out, if the
> importer is required to lock, which callbacks might need a ww
> acquire context?

I'll take this into account, thanks.

> (And off-topic: since we do a lot of fancy stuff under dma-resv
> locks, including waiting for fences and other locks, IMO taking
> these locks uninterruptibly should ring a warning bell.)

I had the same thought and had a version that used the interruptible
locking variant, but then decided to fall back to the uninterruptible
one, I don't remember why. I'll revisit this.

-- 
Best regards,
Dmitry

^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-07-01 10:43 ` Dmitry Osipenko
@ 2022-07-04 22:38   ` Dmitry Osipenko
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-07-04 22:38 UTC (permalink / raw)
To: Thomas Hellström (Intel)
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
  amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media,
  Christian König, David Airlie, Gerd Hoffmann, Gurchetan Singh,
  Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
  Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
  Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
  Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy, Qiang Yu,
  Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
  Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher, Jani Nikula,
  Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin

On 7/1/22 13:43, Dmitry Osipenko wrote:
> On 6/29/22 00:26, Thomas Hellström (Intel) wrote:
>> On 5/30/22 15:57, Dmitry Osipenko wrote:
>>> On 5/30/22 16:41, Christian König wrote:
>>>> Hi Dmitry,
>>>>
>>>> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>>>>> Hello Christian,
>>>>>
>>>>> On 5/30/22 09:50, Christian König wrote:
>>>>>> Hi Dmitry,
>>>>>>
>>>>>> First of all please separate out this patch from the rest of
>>>>>> the series, since this is a complex separate structural change.
>>>>> I assume all the patches will go via the DRM tree in the end
>>>>> since the rest of the DRM patches in this series depend on this
>>>>> dma-buf change. But I see that separation may ease reviewing of
>>>>> the dma-buf changes, so let's try it.
>>>> That sounds like you are underestimating a bit how much trouble
>>>> this will be.
>>>>
>>>>>> I have tried this before and failed because catching all the
>>>>>> locks in the right code paths is very tricky. So expect some
>>>>>> fallout from this and make sure the kernel test robot and CI
>>>>>> systems are clean.
>>>>> Sure, I'll fix up all the reported things in the next iteration.
>>>>>
>>>>> BTW, have you ever posted your version of the patch? It would be
>>>>> great if we could compare the changed code paths.
>>>> No, I never even finished creating it after realizing how much
>>>> work it would be.
>>>>
>>>>>>> This patch introduces new locking convention for dma-buf
>>>>>>> users. From now on all dma-buf importers are responsible for
>>>>>>> holding dma-buf reservation lock around operations performed
>>>>>>> over dma-bufs.
>>>>>>>
>>>>>>> This patch implements the new dma-buf locking convention by:
>>>>>>>
>>>>>>>   1. Making dma-buf API functions to take the reservation
>>>>>>>      lock.
>>>>>>>
>>>>>>>   2. Adding new locked variants of the dma-buf API functions
>>>>>>>      for drivers that need to manage imported dma-bufs under
>>>>>>>      the held lock.
>>>>>> Instead of adding new locked variants please mark all variants
>>>>>> which expect to be called without a lock with an _unlocked
>>>>>> postfix.
>>>>>>
>>>>>> This should make it easier to remove those in a follow up patch
>>>>>> set and then fully move the locking into the importer.
>>>>> Do we really want to move all the locks to the importers? Seems
>>>>> the majority of drivers should be happy with the dma-buf helpers
>>>>> handling the locking for them.
>>>> Yes, I clearly think so.
>>>>
>>>>>>>   3. Converting all drivers to the new locking scheme.
>>>>>> I have strong doubts that you got all of them. At least radeon
>>>>>> and nouveau should grab the reservation lock in their ->attach
>>>>>> callbacks somehow.
>>>>> Radeon and Nouveau use gem_prime_import_sg_table() and they take
>>>>> resv lock already, seems they should be okay (?)
>>>> You are looking at the wrong side. You need to fix the export
>>>> code path, not the import ones.
>>>>
>>>> See for example how attach on radeon works:
>>>>
>>>>   drm_gem_map_attach -> drm_gem_pin -> radeon_gem_prime_pin ->
>>>>   radeon_bo_reserve -> ttm_bo_reserve -> dma_resv_lock
>>> Yeah, I was looking at both sides, but missed this one.
>>
>> Also i915 will run into trouble with attach. In particular since
>> i915 starts a full ww transaction in its attach callback to be able
>> to lock other objects if migration is needed. I think i915 CI would
>> catch this in a selftest.
> Seems it indeed should deadlock. But the i915 selftests apparently
> should've caught it and they didn't, I'll re-check what happened.

The i915 selftests use a separate mock_dmabuf_ops. That's why it works
for the selftests, i.e. there is no deadlock.

Thomas, would i915 CI run a different set of tests or will it be the
default i915 selftests run by IGT?

-- 
Best regards,
Dmitry

^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention @ 2022-07-04 22:38 ` Dmitry Osipenko 0 siblings, 0 replies; 29+ messages in thread From: Dmitry Osipenko @ 2022-07-04 22:38 UTC (permalink / raw) To: Thomas Hellström (Intel) Cc: David Airlie, Joonas Lahtinen, dri-devel, virtualization, Thierry Reding, Gerd Hoffmann, Dmitry Osipenko, kernel, Sumit Semwal, Marek Szyprowski, Rob Herring, Mauro Carvalho Chehab, Daniel Stone, Steven Price, Gustavo Padovan, Alyssa Rosenzweig, Chia-I Wu, linux-media, Thomas Zimmermann, intel-gfx, Maarten Lankhorst, Maxime Ripard, linaro-mm-sig, Jani Nikula, Rodrigo Vivi, linux-tegra, Gurchetan Singh, Tvrtko Ursulin, Daniel Almeida, amd-gfx, Tomeu Vizoso, Gert Wollny, Pan, Xinhui, Emil Velikov, linux-kernel, Tomasz Figa, Rob Clark, Qiang Yu, Daniel Vetter, Alex Deucher, Robin Murphy, Christian König On 7/1/22 13:43, Dmitry Osipenko wrote: > On 6/29/22 00:26, Thomas Hellström (Intel) wrote: >> On 5/30/22 15:57, Dmitry Osipenko wrote: >>> On 5/30/22 16:41, Christian König wrote: >>>> Hi Dmitry, >>>> >>>> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko: >>>>> Hello Christian, >>>>> >>>>> On 5/30/22 09:50, Christian König wrote: >>>>>> Hi Dmitry, >>>>>> >>>>>> First of all please separate out this patch from the rest of the >>>>>> series, >>>>>> since this is a complex separate structural change. >>>>> I assume all the patches will go via the DRM tree in the end since the >>>>> rest of the DRM patches in this series depend on this dma-buf change. >>>>> But I see that separation may ease reviewing of the dma-buf changes, so >>>>> let's try it. >>>> That sounds like you are underestimating a bit how much trouble this >>>> will be. >>>> >>>>>> I have tried this before and failed because catching all the locks in >>>>>> the right code paths are very tricky. So expect some fallout from this >>>>>> and make sure the kernel test robot and CI systems are clean. >>>>> Sure, I'll fix up all the reported things in the next iteration. 
>>>>> >>>>> BTW, have you ever posted yours version of the patch? Will be great if >>>>> we could compare the changed code paths. >>>> No, I never even finished creating it after realizing how much work it >>>> would be. >>>> >>>>>>> This patch introduces new locking convention for dma-buf users. From >>>>>>> now >>>>>>> on all dma-buf importers are responsible for holding dma-buf >>>>>>> reservation >>>>>>> lock around operations performed over dma-bufs. >>>>>>> >>>>>>> This patch implements the new dma-buf locking convention by: >>>>>>> >>>>>>> 1. Making dma-buf API functions to take the reservation lock. >>>>>>> >>>>>>> 2. Adding new locked variants of the dma-buf API functions for >>>>>>> drivers >>>>>>> that need to manage imported dma-bufs under the held lock. >>>>>> Instead of adding new locked variants please mark all variants which >>>>>> expect to be called without a lock with an _unlocked postfix. >>>>>> >>>>>> This should make it easier to remove those in a follow up patch set >>>>>> and >>>>>> then fully move the locking into the importer. >>>>> Do we really want to move all the locks to the importers? Seems the >>>>> majority of drivers should be happy with the dma-buf helpers handling >>>>> the locking for them. >>>> Yes, I clearly think so. >>>> >>>>>>> 3. Converting all drivers to the new locking scheme. >>>>>> I have strong doubts that you got all of them. At least radeon and >>>>>> nouveau should grab the reservation lock in their ->attach callbacks >>>>>> somehow. >>>>> Radeon and Nouveau use gem_prime_import_sg_table() and they take resv >>>>> lock already, seems they should be okay (?) >>>> You are looking at the wrong side. You need to fix the export code path, >>>> not the import ones. >>>> >>>> See for example attach on radeon works like this >>>> drm_gem_map_attach->drm_gem_pin->radeon_gem_prime_pin->radeon_bo_reserve->ttm_bo_reserve->dma_resv_lock. >>>> >>> Yeah, I was looking at the both sides, but missed this one. 
>> Also i915 will run into trouble with attach. In particular since i915 >> starts a full ww transaction in its attach callback to be able to lock >> other objects if migration is needed. I think i915 CI would catch this >> in a selftest. > Seems it indeed it should deadlock. But i915 selftests apparently > should've caught it and they didn't, I'll re-check what happened. > The i915 selftests use a separate mock_dmabuf_ops. That's why it works for the selftests, i.e. there is no deadlock. Thomas, would i915 CI run a different set of tests or will it be the default i915 selftests ran by IGT? -- Best regards, Dmitry ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
@ 2022-07-04 22:38 ` Dmitry Osipenko
  0 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-07-04 22:38 UTC (permalink / raw)
To: Thomas Hellström (Intel)
Cc: David Airlie, dri-devel, virtualization, Thierry Reding,
	Gerd Hoffmann, Dmitry Osipenko, kernel, Sumit Semwal,
	Marek Szyprowski, Mauro Carvalho Chehab, Steven Price,
	Gustavo Padovan, Alyssa Rosenzweig, linux-media,
	Thomas Zimmermann, intel-gfx, linaro-mm-sig, Rodrigo Vivi,
	linux-tegra, Gurchetan Singh, Tvrtko Ursulin, Daniel Almeida,
	amd-gfx, Tomeu Vizoso, Gert Wollny, Pan, Xinhui, Emil Velikov,
	linux-kernel, Tomasz Figa, Qiang Yu, Alex Deucher, Robin Murphy,
	Christian König

On 7/1/22 13:43, Dmitry Osipenko wrote:
> On 6/29/22 00:26, Thomas Hellström (Intel) wrote:
>> On 5/30/22 15:57, Dmitry Osipenko wrote:
>>> On 5/30/22 16:41, Christian König wrote:
>>>> Hi Dmitry,
>>>>
>>>> Am 30.05.22 um 15:26 schrieb Dmitry Osipenko:
>>>>> Hello Christian,
>>>>>
>>>>> On 5/30/22 09:50, Christian König wrote:
>>>>>> Hi Dmitry,
>>>>>>
>>>>>> First of all please separate out this patch from the rest of the
>>>>>> series, since this is a complex separate structural change.
>>>>> I assume all the patches will go via the DRM tree in the end since the
>>>>> rest of the DRM patches in this series depend on this dma-buf change.
>>>>> But I see that separation may ease reviewing of the dma-buf changes, so
>>>>> let's try it.
>>>> That sounds like you are underestimating a bit how much trouble this
>>>> will be.
>>>>
>>>>>> I have tried this before and failed because catching all the locks in
>>>>>> the right code paths is very tricky. So expect some fallout from this
>>>>>> and make sure the kernel test robot and CI systems are clean.
>>>>> Sure, I'll fix up all the reported things in the next iteration.
>>>>>
>>>>> BTW, have you ever posted your version of the patch? Will be great if
>>>>> we could compare the changed code paths.
>>>> No, I never even finished creating it after realizing how much work it
>>>> would be.
>>>>
>>>>>>> This patch introduces a new locking convention for dma-buf users. From
>>>>>>> now on all dma-buf importers are responsible for holding the dma-buf
>>>>>>> reservation lock around operations performed over dma-bufs.
>>>>>>>
>>>>>>> This patch implements the new dma-buf locking convention by:
>>>>>>>
>>>>>>>   1. Making dma-buf API functions take the reservation lock.
>>>>>>>
>>>>>>>   2. Adding new locked variants of the dma-buf API functions for
>>>>>>>      drivers that need to manage imported dma-bufs under the held lock.
>>>>>> Instead of adding new locked variants please mark all variants which
>>>>>> expect to be called without a lock with an _unlocked postfix.
>>>>>>
>>>>>> This should make it easier to remove those in a follow up patch set
>>>>>> and then fully move the locking into the importer.
>>>>> Do we really want to move all the locks to the importers? Seems the
>>>>> majority of drivers should be happy with the dma-buf helpers handling
>>>>> the locking for them.
>>>> Yes, I clearly think so.
>>>>
>>>>>>>   3. Converting all drivers to the new locking scheme.
>>>>>> I have strong doubts that you got all of them. At least radeon and
>>>>>> nouveau should grab the reservation lock in their ->attach callbacks
>>>>>> somehow.
>>>>> Radeon and Nouveau use gem_prime_import_sg_table() and they take the
>>>>> resv lock already, seems they should be okay (?)
>>>> You are looking at the wrong side. You need to fix the export code path,
>>>> not the import ones.
>>>>
>>>> See for example how attach on radeon works:
>>>> drm_gem_map_attach->drm_gem_pin->radeon_gem_prime_pin->radeon_bo_reserve->ttm_bo_reserve->dma_resv_lock.
>>>>
>>> Yeah, I was looking at both sides, but missed this one.
>> Also i915 will run into trouble with attach. In particular since i915
>> starts a full ww transaction in its attach callback to be able to lock
>> other objects if migration is needed. I think i915 CI would catch this
>> in a selftest.
> Seems indeed it should deadlock. But i915 selftests apparently
> should've caught it and they didn't, I'll re-check what happened.
>

The i915 selftests use a separate mock_dmabuf_ops. That's why it works
for the selftests, i.e. there is no deadlock.

Thomas, would i915 CI run a different set of tests or will it be the
default i915 selftests run by IGT?

-- 
Best regards,
Dmitry

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
  2022-07-04 22:38 ` Dmitry Osipenko
  (?)
@ 2022-07-05 10:52 ` Dmitry Osipenko
  -1 siblings, 0 replies; 29+ messages in thread
From: Dmitry Osipenko @ 2022-07-05 10:52 UTC (permalink / raw)
To: Thomas Hellström (Intel)
Cc: intel-gfx, linux-kernel, dri-devel, virtualization, linaro-mm-sig,
	amd-gfx, linux-tegra, Dmitry Osipenko, kernel, linux-media,
	Christian König, David Airlie, Gerd Hoffmann, Gurchetan Singh,
	Chia-I Wu, Daniel Vetter, Daniel Almeida, Gert Wollny,
	Gustavo Padovan, Daniel Stone, Tomeu Vizoso, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, Rob Herring, Steven Price,
	Alyssa Rosenzweig, Rob Clark, Emil Velikov, Robin Murphy,
	Qiang Yu, Sumit Semwal, Pan, Xinhui, Thierry Reding, Tomasz Figa,
	Marek Szyprowski, Mauro Carvalho Chehab, Alex Deucher,
	Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin

On 7/5/22 01:38, Dmitry Osipenko wrote:
...
>>> Also i915 will run into trouble with attach. In particular since i915
>>> starts a full ww transaction in its attach callback to be able to lock
>>> other objects if migration is needed. I think i915 CI would catch this
>>> in a selftest.
>> Seems indeed it should deadlock. But i915 selftests apparently
>> should've caught it and they didn't, I'll re-check what happened.
>>
> 
> The i915 selftests use a separate mock_dmabuf_ops. That's why it works
> for the selftests, i.e. there is no deadlock.
> 
> Thomas, would i915 CI run a different set of tests or will it be the
> default i915 selftests run by IGT?
> 

Nevermind, I had a forgotten local kernel change that prevented the
i915 live tests from running.

-- 
Best regards,
Dmitry

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
@ 2022-05-27 12:21 ` kernel test robot
  0 siblings, 0 replies; 29+ messages in thread
From: Dan Carpenter @ 2022-05-30 7:05 UTC (permalink / raw)
To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 9217 bytes --]

[ I trimmed the CC list -dan ]

Hi Dmitry,

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git cdeffe87f790dfd1baa193020411ce9a538446d7
config: i386-randconfig-m021 (https://download.01.org/0day-ci/archive/20220527/202205272006.EZ53cUSD-lkp(a)intel.com/config)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

New smatch warnings:
drivers/dma-buf/dma-buf.c:791 dma_buf_dynamic_attach() warn: inconsistent returns 'dmabuf->resv'.
drivers/dma-buf/dma-buf.c:1339 dma_buf_vmap_locked() error: uninitialized symbol 'ret'.

Old smatch warnings:
drivers/dma-buf/dma-buf.c:576 dma_buf_export() warn: '&dmabuf->list_node' not removed from list

vim +791 drivers/dma-buf/dma-buf.c

15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  714  struct dma_buf_attachment *
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  715  dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  716  		       const struct dma_buf_attach_ops *importer_ops,
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  717  		       void *importer_priv)
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  718  {
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  719  	struct dma_buf_attachment *attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  720  	int ret;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  721  
d1aa06a1eaf5f7 drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  722  	if (WARN_ON(!dmabuf || !dev))
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  723  		return ERR_PTR(-EINVAL);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  724  
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  725  	if (WARN_ON(importer_ops && !importer_ops->move_notify))
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  726  		return ERR_PTR(-EINVAL);
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  727  
db7942b6292306 drivers/dma-buf/dma-buf.c Markus Elfring   2017-05-08  728  	attach = kzalloc(sizeof(*attach), GFP_KERNEL);
34d84ec4881d13 drivers/dma-buf/dma-buf.c Markus Elfring   2017-05-08  729  	if (!attach)
a9fbc3b73127ef drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  730  		return ERR_PTR(-ENOMEM);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  731  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  732  	attach->dev = dev;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  733  	attach->dmabuf = dmabuf;
09606b5446c25b drivers/dma-buf/dma-buf.c Christian König  2018-03-22  734  	if (importer_ops)
09606b5446c25b drivers/dma-buf/dma-buf.c Christian König  2018-03-22  735  		attach->peer2peer = importer_ops->allow_peer2peer;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  736  	attach->importer_ops = importer_ops;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  737  	attach->importer_priv = importer_priv;
2ed9201bdd9a8e drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  738  
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  739  	dma_resv_lock(dmabuf->resv, NULL);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  740  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  741  	if (dmabuf->ops->attach) {
a19741e5e5a9f1 drivers/dma-buf/dma-buf.c Christian König  2018-05-28  742  		ret = dmabuf->ops->attach(dmabuf, attach);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  743  		if (ret)
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  744  			goto err_attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  745  	}
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  746  	list_add(&attach->node, &dmabuf->attachments);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  747  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  748  	/* When either the importer or the exporter can't handle dynamic
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  749  	 * mappings we cache the mapping here to avoid issues with the
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  750  	 * reservation object lock.
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  751  	 */
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  752  	if (dma_buf_attachment_is_dynamic(attach) !=
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  753  	    dma_buf_is_dynamic(dmabuf)) {
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  754  		struct sg_table *sgt;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  755  
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  756  		if (dma_buf_is_dynamic(attach->dmabuf)) {
7e008b02557cce drivers/dma-buf/dma-buf.c Christian König  2021-05-17  757  			ret = dmabuf->ops->pin(attach);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  758  			if (ret)
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  759  				goto err_unlock;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  760  		}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  761  
84335675f2223c drivers/dma-buf/dma-buf.c Daniel Vetter    2021-01-15  762  		sgt = __map_dma_buf(attach, DMA_BIDIRECTIONAL);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  763  		if (!sgt)
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  764  			sgt = ERR_PTR(-ENOMEM);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  765  		if (IS_ERR(sgt)) {
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  766  			ret = PTR_ERR(sgt);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  767  			goto err_unpin;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  768  		}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  769  		attach->sgt = sgt;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  770  		attach->dir = DMA_BIDIRECTIONAL;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  771  	}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  772  
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  773  	dma_resv_unlock(dmabuf->resv);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  774  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  775  	return attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  776  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  777  err_attach:
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  778  	dma_resv_unlock(attach->dmabuf->resv);

This is a false positive, but Smatch would prefer if the lock names
were the same everywhere:

	dma_resv_unlock(dmabuf->resv);

d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  779  	kfree(attach);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  780  	return ERR_PTR(ret);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  781  
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  782  err_unpin:
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  783  	if (dma_buf_is_dynamic(attach->dmabuf))
7e008b02557cce drivers/dma-buf/dma-buf.c Christian König  2021-05-17  784  		dmabuf->ops->unpin(attach);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  785  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  786  err_unlock:
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  787  	dma_resv_unlock(dmabuf->resv);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  788  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  789  	dma_buf_detach(dmabuf, attach);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  790  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03 @791  	return ERR_PTR(ret);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  792  }

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 29+ messages in thread
* Re: [PATCH v6 14/22] dma-buf: Introduce new locking convention
@ 2022-05-27 12:21 ` kernel test robot
  0 siblings, 0 replies; 29+ messages in thread
From: kernel test robot @ 2022-05-27 12:21 UTC (permalink / raw)
To: kbuild

[-- Attachment #1: Type: text/plain, Size: 13966 bytes --]

CC: kbuild-all(a)lists.01.org
BCC: lkp(a)intel.com
In-Reply-To: <20220526235040.678984-15-dmitry.osipenko@collabora.com>
References: <20220526235040.678984-15-dmitry.osipenko@collabora.com>
TO: Dmitry Osipenko <dmitry.osipenko@collabora.com>
TO: David Airlie <airlied@linux.ie>
TO: Gerd Hoffmann <kraxel@redhat.com>
TO: Gurchetan Singh <gurchetansingh@chromium.org>
TO: "Chia-I Wu" <olvaffe@gmail.com>
TO: Daniel Vetter <daniel@ffwll.ch>
TO: Daniel Almeida <daniel.almeida@collabora.com>
TO: Gert Wollny <gert.wollny@collabora.com>
TO: Gustavo Padovan <gustavo.padovan@collabora.com>
TO: Daniel Stone <daniel@fooishbar.org>
TO: Tomeu Vizoso <tomeu.vizoso@collabora.com>
TO: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
TO: Maxime Ripard <mripard@kernel.org>
TO: Thomas Zimmermann <tzimmermann@suse.de>
TO: Rob Herring <robh@kernel.org>
TO: Steven Price <steven.price@arm.com>
TO: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
TO: Rob Clark <robdclark@gmail.com>
TO: Emil Velikov <emil.l.velikov@gmail.com>
TO: Robin Murphy <robin.murphy@arm.com>
TO: Qiang Yu <yuq825@gmail.com>
TO: Sumit Semwal <sumit.semwal@linaro.org>
TO: "Christian König" <christian.koenig@amd.com>
TO: "Pan, Xinhui" <Xinhui.Pan@amd.com>
TO: Thierry Reding <thierry.reding@gmail.com>
TO: Tomasz Figa <tfiga@chromium.org>
TO: Marek Szyprowski <m.szyprowski@samsung.com>
TO: Mauro Carvalho Chehab <mchehab@kernel.org>
CC: linux-media(a)vger.kernel.org
TO: Alex Deucher <alexander.deucher@amd.com>
TO: Jani Nikula <jani.nikula@linux.intel.com>

Hi Dmitry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on next-20220527]
[cannot apply to drm/drm-next media-tree/master drm-intel/for-linux-next v5.18]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Dmitry-Osipenko/Add-generic-memory-shrinker-to-VirtIO-GPU-and-Panfrost-DRM-drivers/20220527-075717
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git cdeffe87f790dfd1baa193020411ce9a538446d7
:::::: branch date: 12 hours ago
:::::: commit date: 12 hours ago
config: i386-randconfig-m021 (https://download.01.org/0day-ci/archive/20220527/202205272006.EZ53cUSD-lkp(a)intel.com/config)
compiler: gcc-11 (Debian 11.3.0-1) 11.3.0

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

New smatch warnings:
drivers/dma-buf/dma-buf.c:791 dma_buf_dynamic_attach() warn: inconsistent returns 'dmabuf->resv'.
drivers/dma-buf/dma-buf.c:1339 dma_buf_vmap_locked() error: uninitialized symbol 'ret'.

Old smatch warnings:
drivers/dma-buf/dma-buf.c:576 dma_buf_export() warn: '&dmabuf->list_node' not removed from list

vim +791 drivers/dma-buf/dma-buf.c

84335675f2223c drivers/dma-buf/dma-buf.c Daniel Vetter    2021-01-15  691  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  692  /**
85804b70cca68d drivers/dma-buf/dma-buf.c Daniel Vetter    2020-12-11  693   * dma_buf_dynamic_attach - Add the device to dma_buf's attachments list
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  694   * @dmabuf:		[in]	buffer to attach device to.
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  695   * @dev:		[in]	device to be attached.
6f49c2515e2258 drivers/dma-buf/dma-buf.c Randy Dunlap     2020-04-07  696   * @importer_ops:	[in]	importer operations for the attachment
6f49c2515e2258 drivers/dma-buf/dma-buf.c Randy Dunlap     2020-04-07  697   * @importer_priv:	[in]	importer private pointer for the attachment
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  698   *
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  699   * Returns struct dma_buf_attachment pointer for this attachment. Attachments
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  700   * must be cleaned up by calling dma_buf_detach().
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  701   *
85804b70cca68d drivers/dma-buf/dma-buf.c Daniel Vetter    2020-12-11  702   * Optionally this calls &dma_buf_ops.attach to allow device-specific attach
85804b70cca68d drivers/dma-buf/dma-buf.c Daniel Vetter    2020-12-11  703   * functionality.
85804b70cca68d drivers/dma-buf/dma-buf.c Daniel Vetter    2020-12-11  704   *
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  705   * Returns:
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  706   *
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  707   * A pointer to newly created &dma_buf_attachment on success, or a negative
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  708   * error code wrapped into a pointer on failure.
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  709   *
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  710   * Note that this can fail if the backing storage of @dmabuf is in a place not
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  711   * accessible to @dev, and cannot be moved to a more suitable place. This is
2904a8c1311f02 drivers/dma-buf/dma-buf.c Daniel Vetter    2016-12-09  712   * indicated with the error code -EBUSY.
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  713   */
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  714  struct dma_buf_attachment *
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  715  dma_buf_dynamic_attach(struct dma_buf *dmabuf, struct device *dev,
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  716  		       const struct dma_buf_attach_ops *importer_ops,
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  717  		       void *importer_priv)
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  718  {
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  719  	struct dma_buf_attachment *attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  720  	int ret;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  721  
d1aa06a1eaf5f7 drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  722  	if (WARN_ON(!dmabuf || !dev))
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  723  		return ERR_PTR(-EINVAL);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  724  
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  725  	if (WARN_ON(importer_ops && !importer_ops->move_notify))
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  726  		return ERR_PTR(-EINVAL);
4981cdb063e3e9 drivers/dma-buf/dma-buf.c Christian König  2020-02-19  727  
db7942b6292306 drivers/dma-buf/dma-buf.c Markus Elfring   2017-05-08  728  	attach = kzalloc(sizeof(*attach), GFP_KERNEL);
34d84ec4881d13 drivers/dma-buf/dma-buf.c Markus Elfring   2017-05-08  729  	if (!attach)
a9fbc3b73127ef drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  730  		return ERR_PTR(-ENOMEM);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  731  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  732  	attach->dev = dev;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  733  	attach->dmabuf = dmabuf;
09606b5446c25b drivers/dma-buf/dma-buf.c Christian König  2018-03-22  734  	if (importer_ops)
09606b5446c25b drivers/dma-buf/dma-buf.c Christian König  2018-03-22  735  		attach->peer2peer = importer_ops->allow_peer2peer;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  736  	attach->importer_ops = importer_ops;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  737  	attach->importer_priv = importer_priv;
2ed9201bdd9a8e drivers/base/dma-buf.c    Laurent Pinchart 2012-01-26  738  
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  739  	dma_resv_lock(dmabuf->resv, NULL);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  740  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  741  	if (dmabuf->ops->attach) {
a19741e5e5a9f1 drivers/dma-buf/dma-buf.c Christian König  2018-05-28  742  		ret = dmabuf->ops->attach(dmabuf, attach);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  743  		if (ret)
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  744  			goto err_attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  745  	}
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  746  	list_add(&attach->node, &dmabuf->attachments);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  747  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  748  	/* When either the importer or the exporter can't handle dynamic
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  749  	 * mappings we cache the mapping here to avoid issues with the
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  750  	 * reservation object lock.
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  751  	 */
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  752  	if (dma_buf_attachment_is_dynamic(attach) !=
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  753  	    dma_buf_is_dynamic(dmabuf)) {
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  754  		struct sg_table *sgt;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  755  
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  756  		if (dma_buf_is_dynamic(attach->dmabuf)) {
7e008b02557cce drivers/dma-buf/dma-buf.c Christian König  2021-05-17  757  			ret = dmabuf->ops->pin(attach);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  758  			if (ret)
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  759  				goto err_unlock;
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  760  		}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  761  
84335675f2223c drivers/dma-buf/dma-buf.c Daniel Vetter    2021-01-15  762  		sgt = __map_dma_buf(attach, DMA_BIDIRECTIONAL);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  763  		if (!sgt)
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  764  			sgt = ERR_PTR(-ENOMEM);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  765  		if (IS_ERR(sgt)) {
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  766  			ret = PTR_ERR(sgt);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  767  			goto err_unpin;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  768  		}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  769  		attach->sgt = sgt;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  770  		attach->dir = DMA_BIDIRECTIONAL;
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  771  	}
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  772  
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  773  	dma_resv_unlock(dmabuf->resv);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  774  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  775  	return attach;
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  776  
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  777  err_attach:
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  778  	dma_resv_unlock(attach->dmabuf->resv);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  779  	kfree(attach);
d15bd7ee445d07 drivers/base/dma-buf.c    Sumit Semwal     2011-12-26  780  	return ERR_PTR(ret);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  781  
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  782  err_unpin:
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  783  	if (dma_buf_is_dynamic(attach->dmabuf))
7e008b02557cce drivers/dma-buf/dma-buf.c Christian König  2021-05-17  784  		dmabuf->ops->unpin(attach);
bb42df4662a447 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  785  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  786  err_unlock:
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  787  	dma_resv_unlock(dmabuf->resv);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  788  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  789  	dma_buf_detach(dmabuf, attach);
97f090c47ec995 drivers/dma-buf/dma-buf.c Dmitry Osipenko  2022-05-27  790  
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03 @791  	return ERR_PTR(ret);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  792  }
16b0314aa746be drivers/dma-buf/dma-buf.c Greg Kroah-Hartman 2021-10-10  793  EXPORT_SYMBOL_NS_GPL(dma_buf_dynamic_attach, DMA_BUF);
15fd552d186cb0 drivers/dma-buf/dma-buf.c Christian König  2018-07-03  794  

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp

^ permalink raw reply	[flat|nested] 29+ messages in thread
end of thread, other threads:[~2022-07-05 13:14 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-27 22:08 [PATCH v6 14/22] dma-buf: Introduce new locking convention kernel test robot
2022-05-30  3:25 ` kernel test robot
  -- strict thread matches above, loose matches on Subject: below --
2022-05-26 23:50 [PATCH v6 00/22] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers Dmitry Osipenko
2022-05-26 23:50 ` [PATCH v6 14/22] dma-buf: Introduce new locking convention Dmitry Osipenko
2022-05-26 23:50   ` Dmitry Osipenko
2022-05-27  2:37   ` kernel test robot
2022-05-27 12:44     ` Dmitry Osipenko
2022-05-27 12:44       ` Dmitry Osipenko
2022-05-30  6:50   ` Christian König via Virtualization
2022-05-30  6:50     ` Christian König
2022-05-30  6:50     ` Christian König
2022-05-30 13:26     ` Dmitry Osipenko
2022-05-30 13:26       ` Dmitry Osipenko
2022-05-30 13:41       ` Christian König via Virtualization
2022-05-30 13:41         ` Christian König
2022-05-30 13:41         ` Christian König
2022-05-30 13:57         ` Dmitry Osipenko
2022-05-30 13:57           ` Dmitry Osipenko
2022-06-28 21:26           ` Thomas Hellström (Intel)
2022-06-28 21:26             ` Thomas Hellström (Intel)
2022-07-01 10:43             ` Dmitry Osipenko
2022-07-01 10:43               ` Dmitry Osipenko
2022-07-04 22:38               ` Dmitry Osipenko
2022-07-04 22:38                 ` Dmitry Osipenko
2022-07-04 22:38                 ` Dmitry Osipenko
2022-07-05 10:52                 ` Dmitry Osipenko
2022-07-05 10:52                   ` Dmitry Osipenko
2022-07-05 10:52                   ` Dmitry Osipenko
2022-05-30  7:05   ` Dan Carpenter
2022-05-27 12:21 ` kernel test robot