[PATCH rdma-core 0/3] Dma-buf related fixes

From: Jianxin Xiong
Date: 2021-02-04 18:50 UTC
To: linux-rdma, dri-devel
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji, Yishai Hadas, John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

This series fixes a few issues related to the dma-buf support. It consists
of three patches. The first patch fixes a compilation warning for 32-bit
builds. Patch 2 renames a function parameter and expands an abbreviation.
Patch 3 adds a check for the presence of the DRM headers.

Pull request at GitHub: https://github.com/linux-rdma/rdma-core/pull/942

Jianxin Xiong (3):
  verbs: Fix gcc warnings when building for 32bit systems
  pyverbs,tests: Cosmetic improvements for dma-buf allocation routines
  configure: Add check for the presence of DRM headers

 CMakeLists.txt         |  7 +++++
 buildlib/Finddrm.cmake | 19 ++++++++++++
 buildlib/config.h.in   |  2 ++
 libibverbs/cmd_mr.c    |  2 +-
 libibverbs/verbs.c     |  2 +-
 pyverbs/dmabuf.pyx     | 12 ++++----
 pyverbs/dmabuf_alloc.c | 59 +++++++++++++++++++++++++++++++-------
 pyverbs/dmabuf_alloc.h |  2 +-
 pyverbs/mr.pyx         |  6 ++--
 tests/test_mr.py       | 78 +++++++++++++++++++++++-------------------------
 10 files changed, 127 insertions(+), 62 deletions(-)
 create mode 100644 buildlib/Finddrm.cmake

--
1.8.3.1
[PATCH rdma-core 1/3] verbs: Fix gcc warnings when building for 32bit systems

From: Jianxin Xiong
Date: 2021-02-04 18:50 UTC
To: linux-rdma, dri-devel
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji, Yishai Hadas, John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

Commit 6b0a3238289f ("verbs: Support dma-buf based memory region") caused
a build failure when building for 32-bit systems with gcc:

$ mkdir build && cd build && CFLAGS="-m32" cmake -GNinja .. \
      -DIOCTL_MODE=both -DNO_PYVERBS=1 -DENABLE_WERROR=1 && ninja
...
../libibverbs/cmd_mr.c: In function 'ibv_cmd_reg_dmabuf_mr':
../libibverbs/cmd_mr.c:152:21: error: cast to pointer from integer of
different size [-Werror=int-to-pointer-cast]
  vmr->ibv_mr.addr = (void *)offset;
...
../libibverbs/verbs.c: In function 'ibv_reg_dmabuf_mr':
../libibverbs/verbs.c:387:13: error: cast to pointer from integer of
different size [-Werror=int-to-pointer-cast]
  mr->addr = (void *)offset;
...

Reported-by: Ali Alnubani <alialnu@nvidia.com>
Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
---
 libibverbs/cmd_mr.c | 2 +-
 libibverbs/verbs.c  | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/libibverbs/cmd_mr.c b/libibverbs/cmd_mr.c
index af0fad7..736fce0 100644
--- a/libibverbs/cmd_mr.c
+++ b/libibverbs/cmd_mr.c
@@ -149,7 +149,7 @@ int ibv_cmd_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset, size_t length,
 	vmr->ibv_mr.lkey = lkey;
 	vmr->ibv_mr.rkey = rkey;
 	vmr->ibv_mr.pd = pd;
-	vmr->ibv_mr.addr = (void *)offset;
+	vmr->ibv_mr.addr = (void *)(uintptr_t)offset;
 	vmr->ibv_mr.length = length;
 	vmr->mr_type = IBV_MR_TYPE_DMABUF_MR;
 	return 0;
diff --git a/libibverbs/verbs.c b/libibverbs/verbs.c
index b93046a..f666695 100644
--- a/libibverbs/verbs.c
+++ b/libibverbs/verbs.c
@@ -384,7 +384,7 @@ struct ibv_mr *ibv_reg_dmabuf_mr(struct ibv_pd *pd, uint64_t offset,
 	mr->context = pd->context;
 	mr->pd = pd;
-	mr->addr = (void *)offset;
+	mr->addr = (void *)(uintptr_t)offset;
 	mr->length = length;
 	return mr;
 }
--
1.8.3.1
[PATCH rdma-core 2/3] pyverbs,tests: Cosmetic improvements for dma-buf allocation routines

From: Jianxin Xiong
Date: 2021-02-04 18:50 UTC
To: linux-rdma, dri-devel
Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji, Yishai Hadas, John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

Rename the parameter 'unit' to 'gpu'. Expand GTT to the full name in the
comments.

Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
---
 pyverbs/dmabuf.pyx     | 12 ++++----
 pyverbs/dmabuf_alloc.c | 12 ++++----
 pyverbs/dmabuf_alloc.h |  2 +-
 pyverbs/mr.pyx         |  6 ++--
 tests/test_mr.py       | 78 +++++++++++++++++++++++-------------------------
 5 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/pyverbs/dmabuf.pyx b/pyverbs/dmabuf.pyx
index b9406bd..9ed7f02 100644
--- a/pyverbs/dmabuf.pyx
+++ b/pyverbs/dmabuf.pyx
@@ -12,7 +12,7 @@ from pyverbs.mr cimport DmaBufMR
 cdef extern from "dmabuf_alloc.h":
     cdef struct dmabuf:
         pass
-    dmabuf *dmabuf_alloc(unsigned long size, int unit, int gtt)
+    dmabuf *dmabuf_alloc(unsigned long size, int gpu, int gtt)
     void dmabuf_free(dmabuf *dmabuf)
     int dmabuf_get_drm_fd(dmabuf *dmabuf)
     int dmabuf_get_fd(dmabuf *dmabuf)
@@ -20,20 +20,20 @@ cdef extern from "dmabuf_alloc.h":

 cdef class DmaBuf:
-    def __init__(self, size, unit=0, gtt=0):
+    def __init__(self, size, gpu=0, gtt=0):
         """
         Allocate DmaBuf object from a GPU device. This is done through the
         DRI device interface. Usually this requires the effective user id
         being a member of the 'render' group.
         :param size: The size (in number of bytes) of the buffer.
-        :param unit: The unit number of the GPU to allocate the buffer from.
-        :param gtt: Allocate from GTT instead of VRAM.
+        :param gpu: The GPU unit to allocate the buffer from.
+        :param gtt: Allocate from GTT(Graphics Translation Table) instead of VRAM.
         :return: The newly created DmaBuf object on success.
         """
         self.dmabuf_mrs = weakref.WeakSet()
-        self.dmabuf = dmabuf_alloc(size, unit, gtt)
+        self.dmabuf = dmabuf_alloc(size, gpu, gtt)
         if self.dmabuf == NULL:
-            raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on unit {unit}')
+            raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on gpu {gpu}')
         self.drm_fd = dmabuf_get_drm_fd(<dmabuf *>self.dmabuf)
         self.fd = dmabuf_get_fd(<dmabuf *>self.dmabuf)
         self.map_offset = dmabuf_get_offset(<dmabuf *>self.dmabuf)
diff --git a/pyverbs/dmabuf_alloc.c b/pyverbs/dmabuf_alloc.c
index 05eae75..93267bf 100644
--- a/pyverbs/dmabuf_alloc.c
+++ b/pyverbs/dmabuf_alloc.c
@@ -95,7 +95,7 @@ static int amdgpu_mmap_offset(struct drm *drm, uint32_t handle,
 	return 0;
 }

-static struct drm *drm_open(int unit)
+static struct drm *drm_open(int gpu)
 {
 	char path[32];
 	struct drm_version version = {};
@@ -107,7 +107,7 @@ static struct drm *drm_open(int unit)
 	if (!drm)
 		return NULL;

-	snprintf(path, sizeof(path), "/dev/dri/renderD%d", unit + 128);
+	snprintf(path, sizeof(path), "/dev/dri/renderD%d", gpu + 128);

 	drm->fd = open(path, O_RDWR);
 	if (drm->fd < 0)
@@ -204,10 +204,10 @@ struct dmabuf {
 /*
  * dmabuf_alloc - allocate a dmabuf from GPU
  * @size - byte size of the buffer to allocate
- * @unit - the GPU unit to use
- * @gtt - if true, allocate from GTT instead of VRAM
+ * @gpu - the GPU unit to use
+ * @gtt - if true, allocate from GTT(Graphics Translation Table) instead of VRAM
  */
-struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt)
+struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt)
 {
 	struct dmabuf *dmabuf;
 	int err;
@@ -216,7 +216,7 @@ struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt)
 	if (!dmabuf)
 		return NULL;

-	dmabuf->drm = drm_open(unit);
+	dmabuf->drm = drm_open(gpu);
 	if (!dmabuf->drm)
 		goto out_free;
diff --git a/pyverbs/dmabuf_alloc.h b/pyverbs/dmabuf_alloc.h
index f1b03c5..4698b11 100644
--- a/pyverbs/dmabuf_alloc.h
+++ b/pyverbs/dmabuf_alloc.h
@@ -10,7 +10,7 @@

 struct dmabuf;

-struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt);
+struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt);
 void dmabuf_free(struct dmabuf *dmabuf);
 int dmabuf_get_drm_fd(struct dmabuf *dmabuf);
 int dmabuf_get_fd(struct dmabuf *dmabuf);
diff --git a/pyverbs/mr.pyx b/pyverbs/mr.pyx
index aad47e2..d05d044 100644
--- a/pyverbs/mr.pyx
+++ b/pyverbs/mr.pyx
@@ -384,7 +384,7 @@ cdef class DMMR(MR):

 cdef class DmaBufMR(MR):
     def __init__(self, PD pd not None, length, access, DmaBuf dmabuf=None,
-                 offset=0, unit=0, gtt=0):
+                 offset=0, gpu=0, gtt=0):
         """
         Initializes a DmaBufMR (DMA-BUF Memory Region) of the given length
         and access flags using the given PD and DmaBuf objects.
@@ -393,14 +393,14 @@ cdef class DmaBufMR(MR):
         :param access: Access flags, see ibv_access_flags enum
         :param dmabuf: A DmaBuf object. One will be allocated if absent
         :param offset: Byte offset from the beginning of the dma-buf
-        :param unit: GPU unit for internal dmabuf allocation
+        :param gpu: GPU unit for internal dmabuf allocation
         :param gtt: If true allocate internal dmabuf from GTT instead of VRAM
         :return: The newly created DMABUFMR
         """
         self.logger = logging.getLogger(self.__class__.__name__)
         if dmabuf is None:
             self.is_dmabuf_internal = True
-            dmabuf = DmaBuf(length + offset, unit, gtt)
+            dmabuf = DmaBuf(length + offset, gpu, gtt)
         self.mr = v.ibv_reg_dmabuf_mr(pd.pd, offset, length, offset, dmabuf.fd, access)
         if self.mr == NULL:
             raise PyverbsRDMAErrno(f'Failed to register a dma-buf MR. length: {length}, access flags: {access}')
diff --git a/tests/test_mr.py b/tests/test_mr.py
index 03a645f..028be71 100644
--- a/tests/test_mr.py
+++ b/tests/test_mr.py
@@ -429,14 +429,14 @@ class DMMRTest(PyverbsAPITestCase):
             dm_mr.close()

-def check_dmabuf_support(unit=0):
+def check_dmabuf_support(gpu=0):
     """
     Check if dma-buf allocation is supported by the system.
     Skip the test on failure.
     """
-    device_num = 128 + unit
+    device_num = 128 + gpu
     try:
-        DmaBuf(1, unit=unit)
+        DmaBuf(1, gpu=gpu)
     except PyverbsRDMAError as ex:
         if ex.error_code == errno.ENOENT:
             raise unittest.SkipTest(f'Device /dev/dri/renderD{device_num} is not present')
@@ -446,13 +446,13 @@ def check_dmabuf_support(unit=0):
         raise unittest.SkipTest(f'Allocating dmabuf is not supported by /dev/dri/renderD{device_num}')

-def check_dmabuf_mr_support(pd, unit=0):
+def check_dmabuf_mr_support(pd, gpu=0):
     """
     Check if dma-buf MR registration is supported by the driver.
     Skip the test on failure
     """
     try:
-        DmaBufMR(pd, 1, 0, unit=unit)
+        DmaBufMR(pd, 1, 0, gpu=gpu)
     except PyverbsRDMAError as ex:
         if ex.error_code == errno.EOPNOTSUPP:
             raise unittest.SkipTest('Reg dma-buf MR is not supported by the RDMA driver')
@@ -464,22 +464,22 @@ class DmaBufMRTest(PyverbsAPITestCase):
     """
     def setUp(self):
         super().setUp()
-        self.unit = self.config['gpu']
+        self.gpu = self.config['gpu']
         self.gtt = self.config['gtt']

     def test_dmabuf_reg_mr(self):
         """
         Test ibv_reg_dmabuf_mr()
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 flags = u.get_dmabuf_access_flags(ctx)
                 for f in flags:
                     len = u.get_mr_length()
                     for off in [0, len//2]:
-                        with DmaBufMR(pd, len, f, offset=off, unit=self.unit,
+                        with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu,
                                       gtt=self.gtt) as mr:
                             pass
@@ -487,15 +487,15 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Test ibv_dereg_mr() with DmaBufMR
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 flags = u.get_dmabuf_access_flags(ctx)
                 for f in flags:
                     len = u.get_mr_length()
                     for off in [0, len//2]:
-                        with DmaBufMR(pd, len, f, offset=off, unit=self.unit,
+                        with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu,
                                       gtt=self.gtt) as mr:
                             mr.close()
@@ -503,15 +503,15 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Verify that explicit call to DmaBufMR's close() doesn't fail
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 flags = u.get_dmabuf_access_flags(ctx)
                 for f in flags:
                     len = u.get_mr_length()
                     for off in [0, len//2]:
-                        with DmaBufMR(pd, len, f, offset=off, unit=self.unit,
+                        with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu,
                                       gtt=self.gtt) as mr:
                             # Pyverbs supports multiple destruction of objects,
                             # we are not expecting an exception here.
@@ -522,10 +522,10 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Verify that illegal flags combination fails as expected
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 for i in range(5):
                     flags = random.sample([e.IBV_ACCESS_REMOTE_WRITE,
                                            e.IBV_ACCESS_REMOTE_ATOMIC],
@@ -535,7 +535,7 @@ class DmaBufMRTest(PyverbsAPITestCase):
                         mr_flags += i.value
                     try:
                         DmaBufMR(pd, u.get_mr_length(), mr_flags,
-                                 unit=self.unit, gtt=self.gtt)
+                                 gpu=self.gpu, gtt=self.gtt)
                     except PyverbsRDMAError as err:
                         assert 'Failed to register a dma-buf MR' in err.args[0]
                     else:
@@ -545,17 +545,17 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Test writing to DmaBufMR's buffer
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 for i in range(10):
                     mr_len = u.get_mr_length()
                     flags = u.get_dmabuf_access_flags(ctx)
                     for f in flags:
                         for mr_off in [0, mr_len//2]:
                             with DmaBufMR(pd, mr_len, f, offset=mr_off,
-                                          unit=self.unit, gtt=self.gtt) as mr:
+                                          gpu=self.gpu, gtt=self.gtt) as mr:
                                 write_len = min(random.randint(1, MAX_IO_LEN),
                                                 mr_len)
                                 mr.write('a' * write_len, write_len)
@@ -564,17 +564,17 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Test reading from DmaBufMR's buffer
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 for i in range(10):
                     mr_len = u.get_mr_length()
                     flags = u.get_dmabuf_access_flags(ctx)
                     for f in flags:
                         for mr_off in [0, mr_len//2]:
                             with DmaBufMR(pd, mr_len, f, offset=mr_off,
-                                          unit=self.unit, gtt=self.gtt) as mr:
+                                          gpu=self.gpu, gtt=self.gtt) as mr:
                                 write_len = min(random.randint(1, MAX_IO_LEN),
                                                 mr_len)
                                 write_str = 'a' * write_len
@@ -588,14 +588,14 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Test reading lkey property
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 length = u.get_mr_length()
                 flags = u.get_dmabuf_access_flags(ctx)
                 for f in flags:
-                    with DmaBufMR(pd, length, f, unit=self.unit,
+                    with DmaBufMR(pd, length, f, gpu=self.gpu,
                                   gtt=self.gtt) as mr:
                         mr.lkey
@@ -603,38 +603,38 @@ class DmaBufMRTest(PyverbsAPITestCase):
         """
         Test reading rkey property
         """
-        check_dmabuf_support(self.unit)
+        check_dmabuf_support(self.gpu)
         for ctx, attr, attr_ex in self.devices:
             with PD(ctx) as pd:
-                check_dmabuf_mr_support(pd, self.unit)
+                check_dmabuf_mr_support(pd, self.gpu)
                 length = u.get_mr_length()
                 flags = u.get_dmabuf_access_flags(ctx)
                 for f in flags:
-                    with DmaBufMR(pd, length, f, unit=self.unit,
+                    with DmaBufMR(pd, length, f, gpu=self.gpu,
                                   gtt=self.gtt) as mr:
                         mr.rkey

 class DmaBufRC(RCResources):
-    def __init__(self, dev_name, ib_port, gid_index, unit, gtt):
+    def __init__(self, dev_name, ib_port, gid_index, gpu, gtt):
         """
         Initialize an DmaBufRC object.
         :param dev_name: Device name to be used
         :param ib_port: IB port of the device to use
         :param gid_index: Which GID index to use
-        :param unit: GPU unit to allocate dmabuf from
+        :param gpu: GPU unit to allocate dmabuf from
         :gtt: Allocate dmabuf from GTT instead og VRAM
         """
-        self.unit = unit
+        self.gpu = gpu
         self.gtt = gtt
         super(DmaBufRC, self).__init__(dev_name=dev_name, ib_port=ib_port,
                                        gid_index=gid_index)

     def create_mr(self):
-        check_dmabuf_support(self.unit)
-        check_dmabuf_mr_support(self.pd, self.unit)
+        check_dmabuf_support(self.gpu)
+        check_dmabuf_mr_support(self.pd, self.gpu)
         access = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE
-        mr = DmaBufMR(self.pd, self.msg_size, access, unit=self.unit,
+        mr = DmaBufMR(self.pd, self.msg_size, access, gpu=self.gpu,
                       gtt=self.gtt)
         self.mr = mr
@@ -649,7 +649,7 @@ class DmaBufTestCase(RDMATestCase):
     def setUp(self):
         super(DmaBufTestCase, self).setUp()
         self.iters = 100
-        self.unit = self.config['gpu']
+        self.gpu = self.config['gpu']
         self.gtt = self.config['gtt']

     def create_players(self, resource, **resource_arg):
@@ -671,7 +671,7 @@ class DmaBufTestCase(RDMATestCase):
         """
         Test send/recv using dma-buf MR over RC
         """
-        client, server = self.create_players(DmaBufRC, unit=self.unit,
+        client, server = self.create_players(DmaBufRC, gpu=self.gpu,
                                              gtt=self.gtt)
         u.traffic(client, server, self.iters, self.gid_index, self.ib_port)
@@ -679,7 +679,7 @@ class DmaBufTestCase(RDMATestCase):
         """
         Test rdma write using dma-buf MR
         """
-        client, server = self.create_players(DmaBufRC, unit=self.unit,
+        client, server = self.create_players(DmaBufRC, gpu=self.gpu,
                                              gtt=self.gtt)
         server.rkey = client.mr.rkey
         server.remote_addr = client.mr.offset
--
1.8.3.1
* [PATCH rdma-core 2/3] pyverbs, tests: Cosmetic improvements for dma-buf allocation routines @ 2021-02-04 18:50 ` Jianxin Xiong 0 siblings, 0 replies; 14+ messages in thread From: Jianxin Xiong @ 2021-02-04 18:50 UTC (permalink / raw) To: linux-rdma, dri-devel Cc: Yishai Hadas, Leon Romanovsky, John Hubbard, Edward Srouji, Emil Velikov, Gal Pressman, Ali Alnubani, Jason Gunthorpe, Doug Ledford, Daniel Vetter, Christian Koenig, Jianxin Xiong Rename the parameter 'unit' to 'gpu'. Expand GTT to the full name in the comments. Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com> --- pyverbs/dmabuf.pyx | 12 ++++---- pyverbs/dmabuf_alloc.c | 12 ++++---- pyverbs/dmabuf_alloc.h | 2 +- pyverbs/mr.pyx | 6 ++-- tests/test_mr.py | 78 +++++++++++++++++++++++++------------------------- 5 files changed, 55 insertions(+), 55 deletions(-) diff --git a/pyverbs/dmabuf.pyx b/pyverbs/dmabuf.pyx index b9406bd..9ed7f02 100644 --- a/pyverbs/dmabuf.pyx +++ b/pyverbs/dmabuf.pyx @@ -12,7 +12,7 @@ from pyverbs.mr cimport DmaBufMR cdef extern from "dmabuf_alloc.h": cdef struct dmabuf: pass - dmabuf *dmabuf_alloc(unsigned long size, int unit, int gtt) + dmabuf *dmabuf_alloc(unsigned long size, int gpu, int gtt) void dmabuf_free(dmabuf *dmabuf) int dmabuf_get_drm_fd(dmabuf *dmabuf) int dmabuf_get_fd(dmabuf *dmabuf) @@ -20,20 +20,20 @@ cdef extern from "dmabuf_alloc.h": cdef class DmaBuf: - def __init__(self, size, unit=0, gtt=0): + def __init__(self, size, gpu=0, gtt=0): """ Allocate DmaBuf object from a GPU device. This is done through the DRI device interface. Usually this requires the effective user id being a member of the 'render' group. :param size: The size (in number of bytes) of the buffer. - :param unit: The unit number of the GPU to allocate the buffer from. - :param gtt: Allocate from GTT instead of VRAM. + :param gpu: The GPU unit to allocate the buffer from. + :param gtt: Allocate from GTT(Graphics Translation Table) instead of VRAM. 
:return: The newly created DmaBuf object on success. """ self.dmabuf_mrs = weakref.WeakSet() - self.dmabuf = dmabuf_alloc(size, unit, gtt) + self.dmabuf = dmabuf_alloc(size, gpu, gtt) if self.dmabuf == NULL: - raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on unit {unit}') + raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on gpu {gpu}') self.drm_fd = dmabuf_get_drm_fd(<dmabuf *>self.dmabuf) self.fd = dmabuf_get_fd(<dmabuf *>self.dmabuf) self.map_offset = dmabuf_get_offset(<dmabuf *>self.dmabuf) diff --git a/pyverbs/dmabuf_alloc.c b/pyverbs/dmabuf_alloc.c index 05eae75..93267bf 100644 --- a/pyverbs/dmabuf_alloc.c +++ b/pyverbs/dmabuf_alloc.c @@ -95,7 +95,7 @@ static int amdgpu_mmap_offset(struct drm *drm, uint32_t handle, return 0; } -static struct drm *drm_open(int unit) +static struct drm *drm_open(int gpu) { char path[32]; struct drm_version version = {}; @@ -107,7 +107,7 @@ static struct drm *drm_open(int unit) if (!drm) return NULL; - snprintf(path, sizeof(path), "/dev/dri/renderD%d", unit + 128); + snprintf(path, sizeof(path), "/dev/dri/renderD%d", gpu + 128); drm->fd = open(path, O_RDWR); if (drm->fd < 0) @@ -204,10 +204,10 @@ struct dmabuf { /* * dmabuf_alloc - allocate a dmabuf from GPU * @size - byte size of the buffer to allocate - * @unit - the GPU unit to use - * @gtt - if true, allocate from GTT instead of VRAM + * @gpu - the GPU unit to use + * @gtt - if true, allocate from GTT(Graphics Translation Table) instead of VRAM */ -struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt) +struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt) { struct dmabuf *dmabuf; int err; @@ -216,7 +216,7 @@ struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt) if (!dmabuf) return NULL; - dmabuf->drm = drm_open(unit); + dmabuf->drm = drm_open(gpu); if (!dmabuf->drm) goto out_free; diff --git a/pyverbs/dmabuf_alloc.h b/pyverbs/dmabuf_alloc.h index f1b03c5..4698b11 100644 --- a/pyverbs/dmabuf_alloc.h +++ 
b/pyverbs/dmabuf_alloc.h @@ -10,7 +10,7 @@ struct dmabuf; -struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt); +struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt); void dmabuf_free(struct dmabuf *dmabuf); int dmabuf_get_drm_fd(struct dmabuf *dmabuf); int dmabuf_get_fd(struct dmabuf *dmabuf); diff --git a/pyverbs/mr.pyx b/pyverbs/mr.pyx index aad47e2..d05d044 100644 --- a/pyverbs/mr.pyx +++ b/pyverbs/mr.pyx @@ -384,7 +384,7 @@ cdef class DMMR(MR): cdef class DmaBufMR(MR): def __init__(self, PD pd not None, length, access, DmaBuf dmabuf=None, - offset=0, unit=0, gtt=0): + offset=0, gpu=0, gtt=0): """ Initializes a DmaBufMR (DMA-BUF Memory Region) of the given length and access flags using the given PD and DmaBuf objects. @@ -393,14 +393,14 @@ cdef class DmaBufMR(MR): :param access: Access flags, see ibv_access_flags enum :param dmabuf: A DmaBuf object. One will be allocated if absent :param offset: Byte offset from the beginning of the dma-buf - :param unit: GPU unit for internal dmabuf allocation + :param gpu: GPU unit for internal dmabuf allocation :param gtt: If true allocate internal dmabuf from GTT instead of VRAM :return: The newly created DMABUFMR """ self.logger = logging.getLogger(self.__class__.__name__) if dmabuf is None: self.is_dmabuf_internal = True - dmabuf = DmaBuf(length + offset, unit, gtt) + dmabuf = DmaBuf(length + offset, gpu, gtt) self.mr = v.ibv_reg_dmabuf_mr(pd.pd, offset, length, offset, dmabuf.fd, access) if self.mr == NULL: raise PyverbsRDMAErrno(f'Failed to register a dma-buf MR. length: {length}, access flags: {access}') diff --git a/tests/test_mr.py b/tests/test_mr.py index 03a645f..028be71 100644 --- a/tests/test_mr.py +++ b/tests/test_mr.py @@ -429,14 +429,14 @@ class DMMRTest(PyverbsAPITestCase): dm_mr.close() -def check_dmabuf_support(unit=0): +def check_dmabuf_support(gpu=0): """ Check if dma-buf allocation is supported by the system. Skip the test on failure. 
""" - device_num = 128 + unit + device_num = 128 + gpu try: - DmaBuf(1, unit=unit) + DmaBuf(1, gpu=gpu) except PyverbsRDMAError as ex: if ex.error_code == errno.ENOENT: raise unittest.SkipTest(f'Device /dev/dri/renderD{device_num} is not present') @@ -446,13 +446,13 @@ def check_dmabuf_support(unit=0): raise unittest.SkipTest(f'Allocating dmabuf is not supported by /dev/dri/renderD{device_num}') -def check_dmabuf_mr_support(pd, unit=0): +def check_dmabuf_mr_support(pd, gpu=0): """ Check if dma-buf MR registration is supported by the driver. Skip the test on failure """ try: - DmaBufMR(pd, 1, 0, unit=unit) + DmaBufMR(pd, 1, 0, gpu=gpu) except PyverbsRDMAError as ex: if ex.error_code == errno.EOPNOTSUPP: raise unittest.SkipTest('Reg dma-buf MR is not supported by the RDMA driver') @@ -464,22 +464,22 @@ class DmaBufMRTest(PyverbsAPITestCase): """ def setUp(self): super().setUp() - self.unit = self.config['gpu'] + self.gpu = self.config['gpu'] self.gtt = self.config['gtt'] def test_dmabuf_reg_mr(self): """ Test ibv_reg_dmabuf_mr() """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) flags = u.get_dmabuf_access_flags(ctx) for f in flags: len = u.get_mr_length() for off in [0, len//2]: - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, + with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu, gtt=self.gtt) as mr: pass @@ -487,15 +487,15 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Test ibv_dereg_mr() with DmaBufMR """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) flags = u.get_dmabuf_access_flags(ctx) for f in flags: len = u.get_mr_length() for off in [0, len//2]: - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, + with DmaBufMR(pd, len, f, 
offset=off, gpu=self.gpu, gtt=self.gtt) as mr: mr.close() @@ -503,15 +503,15 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Verify that explicit call to DmaBufMR's close() doesn't fail """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) flags = u.get_dmabuf_access_flags(ctx) for f in flags: len = u.get_mr_length() for off in [0, len//2]: - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, + with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu, gtt=self.gtt) as mr: # Pyverbs supports multiple destruction of objects, # we are not expecting an exception here. @@ -522,10 +522,10 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Verify that illegal flags combination fails as expected """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) for i in range(5): flags = random.sample([e.IBV_ACCESS_REMOTE_WRITE, e.IBV_ACCESS_REMOTE_ATOMIC], @@ -535,7 +535,7 @@ class DmaBufMRTest(PyverbsAPITestCase): mr_flags += i.value try: DmaBufMR(pd, u.get_mr_length(), mr_flags, - unit=self.unit, gtt=self.gtt) + gpu=self.gpu, gtt=self.gtt) except PyverbsRDMAError as err: assert 'Failed to register a dma-buf MR' in err.args[0] else: @@ -545,17 +545,17 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Test writing to DmaBufMR's buffer """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) for i in range(10): mr_len = u.get_mr_length() flags = u.get_dmabuf_access_flags(ctx) for f in flags: for mr_off in [0, mr_len//2]: with DmaBufMR(pd, mr_len, f, offset=mr_off, - unit=self.unit, gtt=self.gtt) as mr: + gpu=self.gpu, gtt=self.gtt) as 
mr: write_len = min(random.randint(1, MAX_IO_LEN), mr_len) mr.write('a' * write_len, write_len) @@ -564,17 +564,17 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Test reading from DmaBufMR's buffer """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) for i in range(10): mr_len = u.get_mr_length() flags = u.get_dmabuf_access_flags(ctx) for f in flags: for mr_off in [0, mr_len//2]: with DmaBufMR(pd, mr_len, f, offset=mr_off, - unit=self.unit, gtt=self.gtt) as mr: + gpu=self.gpu, gtt=self.gtt) as mr: write_len = min(random.randint(1, MAX_IO_LEN), mr_len) write_str = 'a' * write_len @@ -588,14 +588,14 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Test reading lkey property """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) length = u.get_mr_length() flags = u.get_dmabuf_access_flags(ctx) for f in flags: - with DmaBufMR(pd, length, f, unit=self.unit, + with DmaBufMR(pd, length, f, gpu=self.gpu, gtt=self.gtt) as mr: mr.lkey @@ -603,38 +603,38 @@ class DmaBufMRTest(PyverbsAPITestCase): """ Test reading rkey property """ - check_dmabuf_support(self.unit) + check_dmabuf_support(self.gpu) for ctx, attr, attr_ex in self.devices: with PD(ctx) as pd: - check_dmabuf_mr_support(pd, self.unit) + check_dmabuf_mr_support(pd, self.gpu) length = u.get_mr_length() flags = u.get_dmabuf_access_flags(ctx) for f in flags: - with DmaBufMR(pd, length, f, unit=self.unit, + with DmaBufMR(pd, length, f, gpu=self.gpu, gtt=self.gtt) as mr: mr.rkey class DmaBufRC(RCResources): - def __init__(self, dev_name, ib_port, gid_index, unit, gtt): + def __init__(self, dev_name, ib_port, gid_index, gpu, gtt): """ Initialize an DmaBufRC object. 
:param dev_name: Device name to be used :param ib_port: IB port of the device to use :param gid_index: Which GID index to use - :param unit: GPU unit to allocate dmabuf from + :param gpu: GPU unit to allocate dmabuf from :gtt: Allocate dmabuf from GTT instead og VRAM """ - self.unit = unit + self.gpu = gpu self.gtt = gtt super(DmaBufRC, self).__init__(dev_name=dev_name, ib_port=ib_port, gid_index=gid_index) def create_mr(self): - check_dmabuf_support(self.unit) - check_dmabuf_mr_support(self.pd, self.unit) + check_dmabuf_support(self.gpu) + check_dmabuf_mr_support(self.pd, self.gpu) access = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE - mr = DmaBufMR(self.pd, self.msg_size, access, unit=self.unit, + mr = DmaBufMR(self.pd, self.msg_size, access, gpu=self.gpu, gtt=self.gtt) self.mr = mr @@ -649,7 +649,7 @@ class DmaBufTestCase(RDMATestCase): def setUp(self): super(DmaBufTestCase, self).setUp() self.iters = 100 - self.unit = self.config['gpu'] + self.gpu = self.config['gpu'] self.gtt = self.config['gtt'] def create_players(self, resource, **resource_arg): @@ -671,7 +671,7 @@ class DmaBufTestCase(RDMATestCase): """ Test send/recv using dma-buf MR over RC """ - client, server = self.create_players(DmaBufRC, unit=self.unit, + client, server = self.create_players(DmaBufRC, gpu=self.gpu, gtt=self.gtt) u.traffic(client, server, self.iters, self.gid_index, self.ib_port) @@ -679,7 +679,7 @@ class DmaBufTestCase(RDMATestCase): """ Test rdma write using dma-buf MR """ - client, server = self.create_players(DmaBufRC, unit=self.unit, + client, server = self.create_players(DmaBufRC, gpu=self.gpu, gtt=self.gtt) server.rkey = client.mr.rkey server.remote_addr = client.mr.offset -- 1.8.3.1 _______________________________________________ dri-devel mailing list dri-devel@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/dri-devel ^ permalink raw reply related [flat|nested] 14+ messages in thread
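[Editor's note] The `drm_open()` change quoted in this patch relies on the DRM convention that render nodes are numbered starting at minor 128, so GPU unit N maps to `/dev/dri/renderD{128+N}`. That mapping can be sketched in plain Python (the function name here is illustrative, not part of the patch):

```python
def render_node_path(gpu=0):
    # DRM render nodes start at minor 128, so GPU unit N maps to
    # /dev/dri/renderD{128 + N} -- the same computation drm_open() does with
    # snprintf(path, sizeof(path), "/dev/dri/renderD%d", gpu + 128).
    return f"/dev/dri/renderD{128 + gpu}"

render_node_path(0)  # -> '/dev/dri/renderD128', the first render node
```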
* Re: [PATCH rdma-core 2/3] pyverbs,tests: Cosmetic improvements for dma-buf allocation routines 2021-02-04 18:50 ` [PATCH rdma-core 2/3] pyverbs, tests: " Jianxin Xiong @ 2021-02-04 23:04 ` John Hubbard -1 siblings, 0 replies; 14+ messages in thread From: John Hubbard @ 2021-02-04 23:04 UTC (permalink / raw) To: Jianxin Xiong, linux-rdma, dri-devel Cc: Doug Ledford, Jason Gunthorpe, Leon Romanovsky, Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji, Yishai Hadas, Ali Alnubani, Gal Pressman, Emil Velikov On 2/4/21 10:50 AM, Jianxin Xiong wrote: > Rename the parameter 'unit' to 'gpu'. Expand GTT to the full name in the > comments. > > Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com> > --- > pyverbs/dmabuf.pyx | 12 ++++---- > pyverbs/dmabuf_alloc.c | 12 ++++---- > pyverbs/dmabuf_alloc.h | 2 +- > pyverbs/mr.pyx | 6 ++-- > tests/test_mr.py | 78 +++++++++++++++++++++++++------------------------- > 5 files changed, 55 insertions(+), 55 deletions(-) > Looks good! If you care, you might want to add a space, like this, to the few GTT cases: GTT (Graphics Translation Table) Obviously not worth spinning another version for that, as it is still readable as-is. Just mentioning it for the sake of pointless perfectionism, and in case someone ever wonders why it was missed during a review. 
:) Either way, feel free to add: Reviewed-by: John Hubbard <jhubbard@nvidia.com> thanks, -- John Hubbard NVIDIA > diff --git a/pyverbs/dmabuf.pyx b/pyverbs/dmabuf.pyx > index b9406bd..9ed7f02 100644 > --- a/pyverbs/dmabuf.pyx > +++ b/pyverbs/dmabuf.pyx > @@ -12,7 +12,7 @@ from pyverbs.mr cimport DmaBufMR > cdef extern from "dmabuf_alloc.h": > cdef struct dmabuf: > pass > - dmabuf *dmabuf_alloc(unsigned long size, int unit, int gtt) > + dmabuf *dmabuf_alloc(unsigned long size, int gpu, int gtt) > void dmabuf_free(dmabuf *dmabuf) > int dmabuf_get_drm_fd(dmabuf *dmabuf) > int dmabuf_get_fd(dmabuf *dmabuf) > @@ -20,20 +20,20 @@ cdef extern from "dmabuf_alloc.h": > > > cdef class DmaBuf: > - def __init__(self, size, unit=0, gtt=0): > + def __init__(self, size, gpu=0, gtt=0): > """ > Allocate DmaBuf object from a GPU device. This is done through the > DRI device interface. Usually this requires the effective user id > being a member of the 'render' group. > :param size: The size (in number of bytes) of the buffer. > - :param unit: The unit number of the GPU to allocate the buffer from. > - :param gtt: Allocate from GTT instead of VRAM. > + :param gpu: The GPU unit to allocate the buffer from. > + :param gtt: Allocate from GTT(Graphics Translation Table) instead of VRAM. > :return: The newly created DmaBuf object on success. 
> """ > self.dmabuf_mrs = weakref.WeakSet() > - self.dmabuf = dmabuf_alloc(size, unit, gtt) > + self.dmabuf = dmabuf_alloc(size, gpu, gtt) > if self.dmabuf == NULL: > - raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on unit {unit}') > + raise PyverbsRDMAErrno(f'Failed to allocate dmabuf of size {size} on gpu {gpu}') > self.drm_fd = dmabuf_get_drm_fd(<dmabuf *>self.dmabuf) > self.fd = dmabuf_get_fd(<dmabuf *>self.dmabuf) > self.map_offset = dmabuf_get_offset(<dmabuf *>self.dmabuf) > diff --git a/pyverbs/dmabuf_alloc.c b/pyverbs/dmabuf_alloc.c > index 05eae75..93267bf 100644 > --- a/pyverbs/dmabuf_alloc.c > +++ b/pyverbs/dmabuf_alloc.c > @@ -95,7 +95,7 @@ static int amdgpu_mmap_offset(struct drm *drm, uint32_t handle, > return 0; > } > > -static struct drm *drm_open(int unit) > +static struct drm *drm_open(int gpu) > { > char path[32]; > struct drm_version version = {}; > @@ -107,7 +107,7 @@ static struct drm *drm_open(int unit) > if (!drm) > return NULL; > > - snprintf(path, sizeof(path), "/dev/dri/renderD%d", unit + 128); > + snprintf(path, sizeof(path), "/dev/dri/renderD%d", gpu + 128); > > drm->fd = open(path, O_RDWR); > if (drm->fd < 0) > @@ -204,10 +204,10 @@ struct dmabuf { > /* > * dmabuf_alloc - allocate a dmabuf from GPU > * @size - byte size of the buffer to allocate > - * @unit - the GPU unit to use > - * @gtt - if true, allocate from GTT instead of VRAM > + * @gpu - the GPU unit to use > + * @gtt - if true, allocate from GTT(Graphics Translation Table) instead of VRAM > */ > -struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt) > +struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt) > { > struct dmabuf *dmabuf; > int err; > @@ -216,7 +216,7 @@ struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt) > if (!dmabuf) > return NULL; > > - dmabuf->drm = drm_open(unit); > + dmabuf->drm = drm_open(gpu); > if (!dmabuf->drm) > goto out_free; > > diff --git a/pyverbs/dmabuf_alloc.h b/pyverbs/dmabuf_alloc.h > index 
f1b03c5..4698b11 100644 > --- a/pyverbs/dmabuf_alloc.h > +++ b/pyverbs/dmabuf_alloc.h > @@ -10,7 +10,7 @@ > > struct dmabuf; > > -struct dmabuf *dmabuf_alloc(uint64_t size, int unit, int gtt); > +struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt); > void dmabuf_free(struct dmabuf *dmabuf); > int dmabuf_get_drm_fd(struct dmabuf *dmabuf); > int dmabuf_get_fd(struct dmabuf *dmabuf); > diff --git a/pyverbs/mr.pyx b/pyverbs/mr.pyx > index aad47e2..d05d044 100644 > --- a/pyverbs/mr.pyx > +++ b/pyverbs/mr.pyx > @@ -384,7 +384,7 @@ cdef class DMMR(MR): > > cdef class DmaBufMR(MR): > def __init__(self, PD pd not None, length, access, DmaBuf dmabuf=None, > - offset=0, unit=0, gtt=0): > + offset=0, gpu=0, gtt=0): > """ > Initializes a DmaBufMR (DMA-BUF Memory Region) of the given length > and access flags using the given PD and DmaBuf objects. > @@ -393,14 +393,14 @@ cdef class DmaBufMR(MR): > :param access: Access flags, see ibv_access_flags enum > :param dmabuf: A DmaBuf object. One will be allocated if absent > :param offset: Byte offset from the beginning of the dma-buf > - :param unit: GPU unit for internal dmabuf allocation > + :param gpu: GPU unit for internal dmabuf allocation > :param gtt: If true allocate internal dmabuf from GTT instead of VRAM > :return: The newly created DMABUFMR > """ > self.logger = logging.getLogger(self.__class__.__name__) > if dmabuf is None: > self.is_dmabuf_internal = True > - dmabuf = DmaBuf(length + offset, unit, gtt) > + dmabuf = DmaBuf(length + offset, gpu, gtt) > self.mr = v.ibv_reg_dmabuf_mr(pd.pd, offset, length, offset, dmabuf.fd, access) > if self.mr == NULL: > raise PyverbsRDMAErrno(f'Failed to register a dma-buf MR. 
length: {length}, access flags: {access}') > diff --git a/tests/test_mr.py b/tests/test_mr.py > index 03a645f..028be71 100644 > --- a/tests/test_mr.py > +++ b/tests/test_mr.py > @@ -429,14 +429,14 @@ class DMMRTest(PyverbsAPITestCase): > dm_mr.close() > > > -def check_dmabuf_support(unit=0): > +def check_dmabuf_support(gpu=0): > """ > Check if dma-buf allocation is supported by the system. > Skip the test on failure. > """ > - device_num = 128 + unit > + device_num = 128 + gpu > try: > - DmaBuf(1, unit=unit) > + DmaBuf(1, gpu=gpu) > except PyverbsRDMAError as ex: > if ex.error_code == errno.ENOENT: > raise unittest.SkipTest(f'Device /dev/dri/renderD{device_num} is not present') > @@ -446,13 +446,13 @@ def check_dmabuf_support(unit=0): > raise unittest.SkipTest(f'Allocating dmabuf is not supported by /dev/dri/renderD{device_num}') > > > -def check_dmabuf_mr_support(pd, unit=0): > +def check_dmabuf_mr_support(pd, gpu=0): > """ > Check if dma-buf MR registration is supported by the driver. 
> Skip the test on failure > """ > try: > - DmaBufMR(pd, 1, 0, unit=unit) > + DmaBufMR(pd, 1, 0, gpu=gpu) > except PyverbsRDMAError as ex: > if ex.error_code == errno.EOPNOTSUPP: > raise unittest.SkipTest('Reg dma-buf MR is not supported by the RDMA driver') > @@ -464,22 +464,22 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > def setUp(self): > super().setUp() > - self.unit = self.config['gpu'] > + self.gpu = self.config['gpu'] > self.gtt = self.config['gtt'] > > def test_dmabuf_reg_mr(self): > """ > Test ibv_reg_dmabuf_mr() > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > len = u.get_mr_length() > for off in [0, len//2]: > - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, > + with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu, > gtt=self.gtt) as mr: > pass > > @@ -487,15 +487,15 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Test ibv_dereg_mr() with DmaBufMR > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > len = u.get_mr_length() > for off in [0, len//2]: > - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, > + with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu, > gtt=self.gtt) as mr: > mr.close() > > @@ -503,15 +503,15 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Verify that explicit call to DmaBufMR's close() doesn't fail > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > flags = 
u.get_dmabuf_access_flags(ctx) > for f in flags: > len = u.get_mr_length() > for off in [0, len//2]: > - with DmaBufMR(pd, len, f, offset=off, unit=self.unit, > + with DmaBufMR(pd, len, f, offset=off, gpu=self.gpu, > gtt=self.gtt) as mr: > # Pyverbs supports multiple destruction of objects, > # we are not expecting an exception here. > @@ -522,10 +522,10 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Verify that illegal flags combination fails as expected > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > for i in range(5): > flags = random.sample([e.IBV_ACCESS_REMOTE_WRITE, > e.IBV_ACCESS_REMOTE_ATOMIC], > @@ -535,7 +535,7 @@ class DmaBufMRTest(PyverbsAPITestCase): > mr_flags += i.value > try: > DmaBufMR(pd, u.get_mr_length(), mr_flags, > - unit=self.unit, gtt=self.gtt) > + gpu=self.gpu, gtt=self.gtt) > except PyverbsRDMAError as err: > assert 'Failed to register a dma-buf MR' in err.args[0] > else: > @@ -545,17 +545,17 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Test writing to DmaBufMR's buffer > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > for i in range(10): > mr_len = u.get_mr_length() > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > for mr_off in [0, mr_len//2]: > with DmaBufMR(pd, mr_len, f, offset=mr_off, > - unit=self.unit, gtt=self.gtt) as mr: > + gpu=self.gpu, gtt=self.gtt) as mr: > write_len = min(random.randint(1, MAX_IO_LEN), > mr_len) > mr.write('a' * write_len, write_len) > @@ -564,17 +564,17 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Test reading from DmaBufMR's buffer > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, 
attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > for i in range(10): > mr_len = u.get_mr_length() > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > for mr_off in [0, mr_len//2]: > with DmaBufMR(pd, mr_len, f, offset=mr_off, > - unit=self.unit, gtt=self.gtt) as mr: > + gpu=self.gpu, gtt=self.gtt) as mr: > write_len = min(random.randint(1, MAX_IO_LEN), > mr_len) > write_str = 'a' * write_len > @@ -588,14 +588,14 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Test reading lkey property > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > length = u.get_mr_length() > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > - with DmaBufMR(pd, length, f, unit=self.unit, > + with DmaBufMR(pd, length, f, gpu=self.gpu, > gtt=self.gtt) as mr: > mr.lkey > > @@ -603,38 +603,38 @@ class DmaBufMRTest(PyverbsAPITestCase): > """ > Test reading rkey property > """ > - check_dmabuf_support(self.unit) > + check_dmabuf_support(self.gpu) > for ctx, attr, attr_ex in self.devices: > with PD(ctx) as pd: > - check_dmabuf_mr_support(pd, self.unit) > + check_dmabuf_mr_support(pd, self.gpu) > length = u.get_mr_length() > flags = u.get_dmabuf_access_flags(ctx) > for f in flags: > - with DmaBufMR(pd, length, f, unit=self.unit, > + with DmaBufMR(pd, length, f, gpu=self.gpu, > gtt=self.gtt) as mr: > mr.rkey > > > class DmaBufRC(RCResources): > - def __init__(self, dev_name, ib_port, gid_index, unit, gtt): > + def __init__(self, dev_name, ib_port, gid_index, gpu, gtt): > """ > Initialize an DmaBufRC object. 
> :param dev_name: Device name to be used > :param ib_port: IB port of the device to use > :param gid_index: Which GID index to use > - :param unit: GPU unit to allocate dmabuf from > + :param gpu: GPU unit to allocate dmabuf from > :gtt: Allocate dmabuf from GTT instead og VRAM > """ > - self.unit = unit > + self.gpu = gpu > self.gtt = gtt > super(DmaBufRC, self).__init__(dev_name=dev_name, ib_port=ib_port, > gid_index=gid_index) > > def create_mr(self): > - check_dmabuf_support(self.unit) > - check_dmabuf_mr_support(self.pd, self.unit) > + check_dmabuf_support(self.gpu) > + check_dmabuf_mr_support(self.pd, self.gpu) > access = e.IBV_ACCESS_LOCAL_WRITE | e.IBV_ACCESS_REMOTE_WRITE > - mr = DmaBufMR(self.pd, self.msg_size, access, unit=self.unit, > + mr = DmaBufMR(self.pd, self.msg_size, access, gpu=self.gpu, > gtt=self.gtt) > self.mr = mr > > @@ -649,7 +649,7 @@ class DmaBufTestCase(RDMATestCase): > def setUp(self): > super(DmaBufTestCase, self).setUp() > self.iters = 100 > - self.unit = self.config['gpu'] > + self.gpu = self.config['gpu'] > self.gtt = self.config['gtt'] > > def create_players(self, resource, **resource_arg): > @@ -671,7 +671,7 @@ class DmaBufTestCase(RDMATestCase): > """ > Test send/recv using dma-buf MR over RC > """ > - client, server = self.create_players(DmaBufRC, unit=self.unit, > + client, server = self.create_players(DmaBufRC, gpu=self.gpu, > gtt=self.gtt) > u.traffic(client, server, self.iters, self.gid_index, self.ib_port) > > @@ -679,7 +679,7 @@ class DmaBufTestCase(RDMATestCase): > """ > Test rdma write using dma-buf MR > """ > - client, server = self.create_players(DmaBufRC, unit=self.unit, > + client, server = self.create_players(DmaBufRC, gpu=self.gpu, > gtt=self.gtt) > server.rkey = client.mr.rkey > server.remote_addr = client.mr.offset > ^ permalink raw reply [flat|nested] 14+ messages in thread
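[Editor's note] The `check_dmabuf_support()` helper quoted above decides whether to skip a test based on the errno returned by a trial 1-byte `DmaBuf` allocation. A simplified sketch of that decision, under the assumption that only the two cases visible in the diff matter (the real helper raises `unittest.SkipTest` rather than returning a string, and the function name here is illustrative):

```python
import errno

def dmabuf_skip_reason(error_code, gpu=0):
    # ENOENT from the trial allocation means the render node itself is
    # absent; any other error means the node cannot allocate dmabufs.
    device_num = 128 + gpu
    if error_code == errno.ENOENT:
        return f"Device /dev/dri/renderD{device_num} is not present"
    return f"Allocating dmabuf is not supported by /dev/dri/renderD{device_num}"
```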
* [PATCH rdma-core 3/3] configure: Add check for the presence of DRM headers
  2021-02-04 18:50 ` Jianxin Xiong
@ 2021-02-04 18:50   ` Jianxin Xiong
  1 sibling, 0 replies; 14+ messages in thread
From: Jianxin Xiong @ 2021-02-04 18:50 UTC (permalink / raw)
  To: linux-rdma, dri-devel
  Cc: Jianxin Xiong, Doug Ledford, Jason Gunthorpe, Leon Romanovsky,
	Sumit Semwal, Christian Koenig, Daniel Vetter, Edward Srouji,
	Yishai Hadas, John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

Compilation of pyverbs/dmabuf_alloc.c depends on a few DRM headers
that are installed by either the kernel-header or the libdrm package.
The installation is optional and the location is not unique.

The standard locations (such as /usr/include/drm and /usr/include/libdrm)
are checked first. If that fails, pkg-config is used to find the include
path of a custom libdrm installation. The dmabuf allocation routines now
return a suitable error when the headers are not available. The related
tests recognize this error code and skip.

Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
---
 CMakeLists.txt         |  7 +++++++
 buildlib/Finddrm.cmake | 19 +++++++++++++++++++
 buildlib/config.h.in   |  2 ++
 pyverbs/dmabuf_alloc.c | 47 ++++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 70 insertions(+), 5 deletions(-)
 create mode 100644 buildlib/Finddrm.cmake

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 4113423..feaba3a 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -515,6 +515,13 @@ find_package(Systemd)
 include_directories(${SYSTEMD_INCLUDE_DIRS})
 RDMA_DoFixup("${SYSTEMD_FOUND}" "systemd/sd-daemon.h")
 
+# drm headers
+find_package(drm)
+if (DRM_INCLUDE_DIRS)
+  include_directories(${DRM_INCLUDE_DIRS})
+  set(HAVE_DRM_H 1)
+endif()
+
 #-------------------------
 # Apply fixups
 
diff --git a/buildlib/Finddrm.cmake b/buildlib/Finddrm.cmake
new file mode 100644
index 0000000..6f8e5f2
--- /dev/null
+++ b/buildlib/Finddrm.cmake
@@ -0,0 +1,19 @@
+# COPYRIGHT (c) 2021 Intel Corporation.
+# Licensed under BSD (MIT variant) or GPLv2. See COPYING.
+
+# Check standard locations first
+find_path(DRM_INCLUDE_DIRS "drm.h" PATH_SUFFIXES "drm" "libdrm")
+
+# Check custom libdrm installation, if any
+if (NOT DRM_INCLUDE_DIRS)
+  execute_process(COMMAND pkg-config --cflags-only-I libdrm
+                  OUTPUT_VARIABLE _LIBDRM
+                  RESULT_VARIABLE _LIBDRM_RESULT
+                  ERROR_QUIET)
+
+  if (NOT _LIBDRM_RESULT)
+    string(REGEX REPLACE "^-I" "" DRM_INCLUDE_DIRS "${_LIBDRM}")
+  endif()
+  unset(_LIBDRM)
+  unset(_LIBDRM_RESULT)
+endif()
diff --git a/buildlib/config.h.in b/buildlib/config.h.in
index c5b0bf5..e8dff54 100644
--- a/buildlib/config.h.in
+++ b/buildlib/config.h.in
@@ -67,6 +67,8 @@
 # define VERBS_WRITE_ONLY 0
 #endif
 
+#define HAVE_DRM_H @HAVE_DRM_H@
+
 // Configuration defaults
 #define IBACM_SERVER_MODE_UNIX 0
diff --git a/pyverbs/dmabuf_alloc.c b/pyverbs/dmabuf_alloc.c
index 93267bf..22a8ab8 100644
--- a/pyverbs/dmabuf_alloc.c
+++ b/pyverbs/dmabuf_alloc.c
@@ -9,13 +9,18 @@
 #include <unistd.h>
 #include <string.h>
 #include <errno.h>
-#include <drm/drm.h>
-#include <drm/i915_drm.h>
-#include <drm/amdgpu_drm.h>
-#include <drm/radeon_drm.h>
+
+#include "config.h"
+#include "dmabuf_alloc.h"
+
+#if HAVE_DRM_H
+
+#include <drm.h>
+#include <i915_drm.h>
+#include <amdgpu_drm.h>
+#include <radeon_drm.h>
 #include <fcntl.h>
 #include <sys/ioctl.h>
-#include "dmabuf_alloc.h"
 
 /*
  * Abstraction of the buffer allocation mechanism using the DRM interface.
@@ -276,3 +281,35 @@ uint64_t dmabuf_get_offset(struct dmabuf *dmabuf)
 
 	return dmabuf->map_offset;
 }
+
+#else
+
+struct dmabuf *dmabuf_alloc(uint64_t size, int gpu, int gtt)
+{
+	errno = EOPNOTSUPP;
+	return NULL;
+}
+
+void dmabuf_free(struct dmabuf *dmabuf)
+{
+	errno = EOPNOTSUPP;
+}
+
+int dmabuf_get_drm_fd(struct dmabuf *dmabuf)
+{
+	errno = EOPNOTSUPP;
+	return -1;
+}
+
+int dmabuf_get_fd(struct dmabuf *dmabuf)
+{
+	errno = EOPNOTSUPP;
+	return -1;
+}
+
+uint64_t dmabuf_get_offset(struct dmabuf *dmabuf)
+{
+	errno = EOPNOTSUPP;
+	return -1;
+}
+
+#endif
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 14+ messages in thread
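The commit message says the stub routines fail with a suitable error (EOPNOTSUPP) when the DRM headers were missing at build time, and that the related tests skip on that error code. A rough Python sketch of that skip pattern — `allocate_dmabuf` here is a stand-in for the real pyverbs wrapper, not the actual API:

```python
import errno
import unittest


def allocate_dmabuf(size, gpu=0, gtt=0):
    # Stand-in for the real allocator: behaves as if rdma-core was
    # built without the DRM headers, i.e. fails with EOPNOTSUPP.
    raise OSError(errno.EOPNOTSUPP, "dmabuf allocation not supported")


class DmaBufSkipExample(unittest.TestCase):
    def test_alloc(self):
        try:
            allocate_dmabuf(4096)
        except OSError as err:
            if err.errno == errno.EOPNOTSUPP:
                # Built without DRM headers: report a skip, not a failure
                raise unittest.SkipTest("dmabuf allocation not supported")
            raise
```

Any other errno still propagates as a genuine test failure; only the "built without headers" case becomes a skip.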
* Re: [PATCH rdma-core 3/3] configure: Add check for the presence of DRM headers
  2021-02-04 18:50 ` Jianxin Xiong
@ 2021-02-04 21:12   ` Jason Gunthorpe
  1 sibling, 0 replies; 14+ messages in thread
From: Jason Gunthorpe @ 2021-02-04 21:12 UTC (permalink / raw)
  To: Jianxin Xiong
  Cc: linux-rdma, dri-devel, Doug Ledford, Leon Romanovsky, Sumit Semwal,
	Christian Koenig, Daniel Vetter, Edward Srouji, Yishai Hadas,
	John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

On Thu, Feb 04, 2021 at 10:50:51AM -0800, Jianxin Xiong wrote:
> Compilation of pyverbs/dmabuf_alloc.c depends on a few DRM headers
> that are installed by either the kernel-header or the libdrm package.
> The installation is optional and the location is not unique.
>
> The standard locations (such as /usr/include/drm and /usr/include/libdrm)
> are checked first. If that fails, pkg-config is used to find the include
> path of a custom libdrm installation. The dmabuf allocation routines now
> return a suitable error when the headers are not available. The related
> tests recognize this error code and skip.
>
> Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
>  CMakeLists.txt         |  7 +++++++
>  buildlib/Finddrm.cmake | 19 +++++++++++++++++++
>  buildlib/config.h.in   |  2 ++
>  pyverbs/dmabuf_alloc.c | 47 ++++++++++++++++++++++++++++++++++++++++++-----
>  4 files changed, 70 insertions(+), 5 deletions(-)
>  create mode 100644 buildlib/Finddrm.cmake
>
> diff --git a/CMakeLists.txt b/CMakeLists.txt
> index 4113423..feaba3a 100644
> +++ b/CMakeLists.txt
> @@ -515,6 +515,13 @@ find_package(Systemd)
>  include_directories(${SYSTEMD_INCLUDE_DIRS})
>  RDMA_DoFixup("${SYSTEMD_FOUND}" "systemd/sd-daemon.h")
>
> +# drm headers
> +find_package(drm)
> +if (DRM_INCLUDE_DIRS)
> +  include_directories(${DRM_INCLUDE_DIRS})
> +  set(HAVE_DRM_H 1)
> +endif()
> +
>  #-------------------------
>  # Apply fixups
>
> diff --git a/buildlib/Finddrm.cmake b/buildlib/Finddrm.cmake
> new file mode 100644
> index 0000000..6f8e5f2
> +++ b/buildlib/Finddrm.cmake
> @@ -0,0 +1,19 @@
> +# COPYRIGHT (c) 2021 Intel Corporation.
> +# Licensed under BSD (MIT variant) or GPLv2. See COPYING.
> +
> +# Check standard locations first
> +find_path(DRM_INCLUDE_DIRS "drm.h" PATH_SUFFIXES "drm" "libdrm")
> +
> +# Check custom libdrm installation, if any
> +if (NOT DRM_INCLUDE_DIRS)
> +  execute_process(COMMAND pkg-config --cflags-only-I libdrm
> +                  OUTPUT_VARIABLE _LIBDRM
> +                  RESULT_VARIABLE _LIBDRM_RESULT
> +                  ERROR_QUIET)
> +
> +  if (NOT _LIBDRM_RESULT)
> +    string(REGEX REPLACE "^-I" "" DRM_INCLUDE_DIRS "${_LIBDRM}")
> +  endif()
> +  unset(_LIBDRM)
> +  unset(_LIBDRM_RESULT)
> +endif()

I think this should be using pkg_check_modules() ??

Look at the NL stuff:

 pkg_check_modules(NL libnl-3.0 libnl-route-3.0 REQUIRED)
 include_directories(${NL_INCLUDE_DIRS})
 link_directories(${NL_LIBRARY_DIRS})

> +#if HAVE_DRM_H
> +

Would rather you use cmake to conditionally compile a dmabuf_alloc.c or
a dmabuf_alloc_stub.c than ifdef the entire file

Jason

^ permalink raw reply	[flat|nested] 14+ messages in thread
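A Finddrm.cmake rewritten along the lines suggested here might look roughly like the following — a sketch only, with libdrm treated as optional rather than REQUIRED, and the result-variable names assumed:

```cmake
# Sketch of the suggested approach: let CMake's bundled FindPkgConfig
# module probe for libdrm instead of invoking pkg-config by hand.
# "libdrm" is the pkg-config package name shipped by libdrm releases.
include(FindPkgConfig)
pkg_check_modules(DRM libdrm)

if (DRM_FOUND)
  # DRM_INCLUDE_DIRS is populated by pkg_check_modules() on success
  include_directories(${DRM_INCLUDE_DIRS})
  set(HAVE_DRM_H 1)
endif()
```

Omitting REQUIRED keeps the build working on systems without libdrm, which matches the optional nature of the dma-buf tests.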
* RE: [PATCH rdma-core 3/3] configure: Add check for the presence of DRM headers
  2021-02-04 21:12 ` Jason Gunthorpe
@ 2021-02-04 22:11   ` Xiong, Jianxin
  1 sibling, 0 replies; 14+ messages in thread
From: Xiong, Jianxin @ 2021-02-04 22:11 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-rdma, dri-devel, Doug Ledford, Leon Romanovsky, Sumit Semwal,
	Christian Koenig, Vetter, Daniel, Edward Srouji, Yishai Hadas,
	John Hubbard, Ali Alnubani, Gal Pressman, Emil Velikov

> -----Original Message-----
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, February 04, 2021 1:12 PM
> To: Xiong, Jianxin <jianxin.xiong@intel.com>
> Cc: linux-rdma@vger.kernel.org; dri-devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Leon Romanovsky
> <leon@kernel.org>; Sumit Semwal <sumit.semwal@linaro.org>; Christian Koenig <christian.koenig@amd.com>; Vetter, Daniel
> <daniel.vetter@intel.com>; Edward Srouji <edwards@nvidia.com>; Yishai Hadas <yishaih@nvidia.com>; John Hubbard
> <jhubbard@nvidia.com>; Ali Alnubani <alialnu@nvidia.com>; Gal Pressman <galpress@amazon.com>; Emil Velikov
> <emil.l.velikov@gmail.com>
> Subject: Re: [PATCH rdma-core 3/3] configure: Add check for the presence of DRM headers
>
> On Thu, Feb 04, 2021 at 10:50:51AM -0800, Jianxin Xiong wrote:
> > Compilation of pyverbs/dmabuf_alloc.c depends on a few DRM headers
> > that are installed by either the kernel-header or the libdrm package.
> > The installation is optional and the location is not unique.
> >
> > The standard locations (such as /usr/include/drm and /usr/include/libdrm)
> > are checked first. If that fails, pkg-config is used to find the include
> > path of a custom libdrm installation. The dmabuf allocation routines now
> > return a suitable error when the headers are not available. The related
> > tests recognize this error code and skip.
> >
> > Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
> >  CMakeLists.txt         |  7 +++++++
> >  buildlib/Finddrm.cmake | 19 +++++++++++++++++++
> >  buildlib/config.h.in   |  2 ++
> >  pyverbs/dmabuf_alloc.c | 47 ++++++++++++++++++++++++++++++++++++++++++-----
> >  4 files changed, 70 insertions(+), 5 deletions(-)
> >  create mode 100644 buildlib/Finddrm.cmake
> >
> > diff --git a/CMakeLists.txt b/CMakeLists.txt
> > index 4113423..feaba3a 100644
> > +++ b/CMakeLists.txt
> > @@ -515,6 +515,13 @@ find_package(Systemd)
> >  include_directories(${SYSTEMD_INCLUDE_DIRS})
> >  RDMA_DoFixup("${SYSTEMD_FOUND}" "systemd/sd-daemon.h")
> >
> > +# drm headers
> > +find_package(drm)
> > +if (DRM_INCLUDE_DIRS)
> > +  include_directories(${DRM_INCLUDE_DIRS})
> > +  set(HAVE_DRM_H 1)
> > +endif()
> > +
> >  #-------------------------
> >  # Apply fixups
> >
> > diff --git a/buildlib/Finddrm.cmake b/buildlib/Finddrm.cmake
> > new file mode 100644
> > index 0000000..6f8e5f2
> > +++ b/buildlib/Finddrm.cmake
> > @@ -0,0 +1,19 @@
> > +# COPYRIGHT (c) 2021 Intel Corporation.
> > +# Licensed under BSD (MIT variant) or GPLv2. See COPYING.
> > +
> > +# Check standard locations first
> > +find_path(DRM_INCLUDE_DIRS "drm.h" PATH_SUFFIXES "drm" "libdrm")
> > +
> > +# Check custom libdrm installation, if any
> > +if (NOT DRM_INCLUDE_DIRS)
> > +  execute_process(COMMAND pkg-config --cflags-only-I libdrm
> > +                  OUTPUT_VARIABLE _LIBDRM
> > +                  RESULT_VARIABLE _LIBDRM_RESULT
> > +                  ERROR_QUIET)
> > +
> > +  if (NOT _LIBDRM_RESULT)
> > +    string(REGEX REPLACE "^-I" "" DRM_INCLUDE_DIRS "${_LIBDRM}")
> > +  endif()
> > +  unset(_LIBDRM)
> > +  unset(_LIBDRM_RESULT)
> > +endif()
>
> I think this should be using pkg_check_modules() ??
>
> Look at the NL stuff:
>
> pkg_check_modules(NL libnl-3.0 libnl-route-3.0 REQUIRED)
> include_directories(${NL_INCLUDE_DIRS})
> link_directories(${NL_LIBRARY_DIRS})
>

Yes, this is much simpler than the pkg-config method.

> > +#if HAVE_DRM_H
> > +
>
> Would rather you use cmake to conditionally compile a dmabuf_alloc.c or
> a dmabuf_alloc_stub.c than ifdef the entire file

Sure, will try that.

> Jason

^ permalink raw reply	[flat|nested] 14+ messages in thread
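The conditional-compilation approach agreed on above could be sketched in CMake along these lines — file and variable names are illustrative, with dmabuf_alloc_stub.c being the hypothetical stub file from the review, and the actual pyverbs build macros omitted:

```cmake
# Choose the real allocator when the DRM headers were found, otherwise
# a stub translation unit whose functions only set errno = EOPNOTSUPP.
# The selected file would then be listed among the pyverbs sources,
# and the #if HAVE_DRM_H / #else / #endif wrapper disappears entirely.
if (HAVE_DRM_H)
  set(DMABUF_ALLOC_SOURCE pyverbs/dmabuf_alloc.c)
else()
  set(DMABUF_ALLOC_SOURCE pyverbs/dmabuf_alloc_stub.c)
endif()
```

Both files would share the dmabuf_alloc.h interface, so callers need no changes either way.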
end of thread, other threads:[~2021-02-04 23:05 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-04 18:50 [PATCH rdma-core 0/3] Dma-buf related fixes Jianxin Xiong
2021-02-04 18:50 ` Jianxin Xiong
2021-02-04 18:50 ` [PATCH rdma-core 1/3] verbs: Fix gcc warnings when building for 32bit systems Jianxin Xiong
2021-02-04 18:50   ` Jianxin Xiong
2021-02-04 18:50 ` [PATCH rdma-core 2/3] pyverbs,tests: Cosmetic improvements for dma-buf allocation routines Jianxin Xiong
2021-02-04 18:50   ` [PATCH rdma-core 2/3] pyverbs, tests: " Jianxin Xiong
2021-02-04 23:04   ` [PATCH rdma-core 2/3] pyverbs,tests: " John Hubbard
2021-02-04 23:04     ` John Hubbard
2021-02-04 18:50 ` [PATCH rdma-core 3/3] configure: Add check for the presence of DRM headers Jianxin Xiong
2021-02-04 18:50   ` Jianxin Xiong
2021-02-04 21:12   ` Jason Gunthorpe
2021-02-04 21:12     ` Jason Gunthorpe
2021-02-04 22:11     ` Xiong, Jianxin
2021-02-04 22:11       ` Xiong, Jianxin