* [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload
@ 2019-12-12 11:09 Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 1/5] net/mlx5: Add Virtio Emulation related device capabilities Leon Romanovsky
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Leon Romanovsky <leonro@mellanox.com>
Hi,
In this series, we introduce the VIRTIO_NET_Q HW offload capability, so SW will
be able to create a special general object with the relevant virtqueue properties.
This series is based on -rc patches:
https://lore.kernel.org/linux-rdma/20191212100237.330654-1-leon@kernel.org
Thanks
Yishai Hadas (5):
net/mlx5: Add Virtio Emulation related device capabilities
net/mlx5: Expose vDPA emulation device capabilities
IB/mlx5: Extend caps stage to handle VAR capabilities
IB/mlx5: Introduce VAR object and its alloc/destroy methods
IB/mlx5: Add mmap support for VAR
drivers/infiniband/hw/mlx5/main.c | 202 ++++++++++++++++++-
drivers/infiniband/hw/mlx5/mlx5_ib.h | 17 ++
drivers/net/ethernet/mellanox/mlx5/core/fw.c | 7 +
include/linux/mlx5/device.h | 9 +
include/linux/mlx5/mlx5_ifc.h | 15 ++
include/uapi/rdma/mlx5_user_ioctl_cmds.h | 17 ++
6 files changed, 264 insertions(+), 3 deletions(-)
--
2.20.1
* [PATCH mlx5-next 1/5] net/mlx5: Add Virtio Emulation related device capabilities
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
@ 2019-12-12 11:09 ` Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 2/5] net/mlx5: Expose vDPA emulation " Leon Romanovsky
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Yishai Hadas <yishaih@mellanox.com>
Add Virtio Emulation related fields to the device capabilities.
This includes a bit in the general object types capability to indicate
whether Virtio Emulation is supported, and the capability structure itself.
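As a rough illustration (not part of this patch), a consumer is expected to test
the general object type bit before touching any of the new fields. The sketch
below assumes the usual 'mdev' pointer to a struct mlx5_core_dev and uses the
existing MLX5_CAP_GEN_64() accessor together with the bit added here:

	/* Sketch only: gate access on the VIRTIO_NET_Q general object bit */
	if (MLX5_CAP_GEN_64(mdev, general_obj_types) &
	    MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
		/* device_virtio_emulation_cap fields (doorbell stride,
		 * BAR size and offset) are valid and may be queried
		 */
	}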
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
include/linux/mlx5/mlx5_ifc.h | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 5d54fccf87fc..c6abaf4f1c55 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -87,6 +87,7 @@ enum {
enum {
MLX5_GENERAL_OBJ_TYPES_CAP_SW_ICM = (1ULL << MLX5_OBJ_TYPE_SW_ICM),
MLX5_GENERAL_OBJ_TYPES_CAP_GENEVE_TLV_OPT = (1ULL << 11),
+ MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q = (1ULL << 13),
};
enum {
@@ -953,6 +954,19 @@ struct mlx5_ifc_device_event_cap_bits {
u8 user_unaffiliated_events[4][0x40];
};
+struct mlx5_ifc_device_virtio_emulation_cap_bits {
+ u8 reserved_at_0[0x20];
+
+ u8 reserved_at_20[0x13];
+ u8 log_doorbell_stride[0x5];
+ u8 reserved_at_38[0x3];
+ u8 log_doorbell_bar_size[0x5];
+
+ u8 doorbell_bar_offset[0x40];
+
+ u8 reserved_at_80[0x780];
+};
+
enum {
MLX5_ATOMIC_CAPS_ATOMIC_SIZE_QP_1_BYTE = 0x0,
MLX5_ATOMIC_CAPS_ATOMIC_SIZE_QP_2_BYTES = 0x2,
@@ -2751,6 +2765,7 @@ union mlx5_ifc_hca_cap_union_bits {
struct mlx5_ifc_fpga_cap_bits fpga_cap;
struct mlx5_ifc_tls_cap_bits tls_cap;
struct mlx5_ifc_device_mem_cap_bits device_mem_cap;
+ struct mlx5_ifc_device_virtio_emulation_cap_bits virtio_emulation_cap;
u8 reserved_at_0[0x8000];
};
--
2.20.1
* [PATCH mlx5-next 2/5] net/mlx5: Expose vDPA emulation device capabilities
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 1/5] net/mlx5: Add Virtio Emulation related device capabilities Leon Romanovsky
@ 2019-12-12 11:09 ` Leon Romanovsky
2019-12-12 11:09 ` [PATCH rdma-next 3/5] IB/mlx5: Extend caps stage to handle VAR capabilities Leon Romanovsky
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Yishai Hadas <yishaih@mellanox.com>
Expose the vDPA emulation device capabilities from the core layer.
This includes reading the capabilities from firmware and exposing helper
macros to access the data.
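For illustration, once this patch is applied the capability fields can be read
through the new accessor macros; a minimal sketch, again assuming the usual
'mdev' device pointer:

	/* Sketch only: reading the vDPA emulation caps via the new macros */
	u8 log_stride = MLX5_CAP_DEV_VDPA_EMULATION(mdev, log_doorbell_stride);
	u8 log_bar_size = MLX5_CAP_DEV_VDPA_EMULATION(mdev, log_doorbell_bar_size);
	u64 db_bar_offset = MLX5_CAP64_DEV_VDPA_EMULATION(mdev, doorbell_bar_offset);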
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Reviewed-by: Shahaf Shuler <shahafs@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/net/ethernet/mellanox/mlx5/core/fw.c | 7 +++++++
include/linux/mlx5/device.h | 9 +++++++++
2 files changed, 16 insertions(+)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fw.c b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
index a19790dee7b2..c375edfe528c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fw.c
@@ -245,6 +245,13 @@ int mlx5_query_hca_caps(struct mlx5_core_dev *dev)
return err;
}
+ if (MLX5_CAP_GEN_64(dev, general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
+ err = mlx5_core_get_caps(dev, MLX5_CAP_VDPA_EMULATION);
+ if (err)
+ return err;
+ }
+
return 0;
}
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index cc1c230f10ee..1a1c53f0262d 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -1105,6 +1105,7 @@ enum mlx5_cap_type {
MLX5_CAP_DEV_MEM,
MLX5_CAP_RESERVED_16,
MLX5_CAP_TLS,
+ MLX5_CAP_VDPA_EMULATION = 0x13,
MLX5_CAP_DEV_EVENT = 0x14,
/* NUM OF CAP Types */
MLX5_CAP_NUM
@@ -1297,6 +1298,14 @@ enum mlx5_qcam_feature_groups {
#define MLX5_CAP_DEV_EVENT(mdev, cap)\
MLX5_ADDR_OF(device_event_cap, (mdev)->caps.hca_cur[MLX5_CAP_DEV_EVENT], cap)
+#define MLX5_CAP_DEV_VDPA_EMULATION(mdev, cap)\
+ MLX5_GET(device_virtio_emulation_cap, \
+ (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap)
+
+#define MLX5_CAP64_DEV_VDPA_EMULATION(mdev, cap)\
+ MLX5_GET64(device_virtio_emulation_cap, \
+ (mdev)->caps.hca_cur[MLX5_CAP_VDPA_EMULATION], cap)
+
enum {
MLX5_CMD_STAT_OK = 0x0,
MLX5_CMD_STAT_INT_ERR = 0x1,
--
2.20.1
* [PATCH rdma-next 3/5] IB/mlx5: Extend caps stage to handle VAR capabilities
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 1/5] net/mlx5: Add Virtio Emulation related device capabilities Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 2/5] net/mlx5: Expose vDPA emulation " Leon Romanovsky
@ 2019-12-12 11:09 ` Leon Romanovsky
2019-12-12 11:09 ` [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods Leon Romanovsky
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Yishai Hadas <yishaih@mellanox.com>
Extend the caps init stage to handle VAR capabilities: when the device
reports VIRTIO_NET_Q support, read the doorbell BAR offset, stride and size
from the vDPA emulation capabilities and allocate a bitmap that tracks the
available VAR entries.
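As a worked example with purely illustrative values: if firmware reports
log_doorbell_bar_size = 4 and log_doorbell_stride = 12, then
bar_size = (1 << 4) * 4096 = 64KiB, stride_size = 1 << 12 = 4KiB, and
num_var_hw_entries = 64KiB / 4KiB = 16, so a 16-entry bitmap tracks the
available VAR doorbell pages.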
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/hw/mlx5/main.c | 40 ++++++++++++++++++++++++++--
drivers/infiniband/hw/mlx5/mlx5_ib.h | 10 +++++++
2 files changed, 48 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 4d89d85226c2..79a5b8824b9d 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -6335,6 +6335,35 @@ static const struct ib_device_ops mlx5_ib_dev_dm_ops = {
.reg_dm_mr = mlx5_ib_reg_dm_mr,
};
+static int mlx5_ib_init_var_table(struct mlx5_ib_dev *dev)
+{
+ struct mlx5_core_dev *mdev = dev->mdev;
+ struct mlx5_var_table *var_table = &dev->var_table;
+ u8 log_doorbell_bar_size;
+ u8 log_doorbell_stride;
+ u64 bar_size;
+
+ log_doorbell_bar_size = MLX5_CAP_DEV_VDPA_EMULATION(mdev,
+ log_doorbell_bar_size);
+ log_doorbell_stride = MLX5_CAP_DEV_VDPA_EMULATION(mdev,
+ log_doorbell_stride);
+ var_table->hw_start_addr = dev->mdev->bar_addr +
+ MLX5_CAP64_DEV_VDPA_EMULATION(mdev,
+ doorbell_bar_offset);
+ bar_size = (1ULL << log_doorbell_bar_size) * 4096;
+ var_table->stride_size = 1ULL << log_doorbell_stride;
+ var_table->num_var_hw_entries = bar_size / var_table->stride_size;
+ mutex_init(&var_table->bitmap_lock);
+ var_table->bitmap = bitmap_zalloc(var_table->num_var_hw_entries,
+ GFP_KERNEL);
+ return (var_table->bitmap) ? 0 : -ENOMEM;
+}
+
+static void mlx5_ib_stage_caps_cleanup(struct mlx5_ib_dev *dev)
+{
+ bitmap_free(dev->var_table.bitmap);
+}
+
static int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
{
struct mlx5_core_dev *mdev = dev->mdev;
@@ -6422,6 +6451,13 @@ static int mlx5_ib_stage_caps_init(struct mlx5_ib_dev *dev)
MLX5_CAP_GEN(dev->mdev, disable_local_lb_mc)))
mutex_init(&dev->lb.mutex);
+ if (MLX5_CAP_GEN_64(dev->mdev, general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q) {
+ err = mlx5_ib_init_var_table(dev);
+ if (err)
+ return err;
+ }
+
dev->ib_dev.use_cq_dim = true;
return 0;
@@ -6770,7 +6806,7 @@ static const struct mlx5_ib_profile pf_profile = {
mlx5_ib_stage_flow_db_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_CAPS,
mlx5_ib_stage_caps_init,
- NULL),
+ mlx5_ib_stage_caps_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_NON_DEFAULT_CB,
mlx5_ib_stage_non_default_cb,
NULL),
@@ -6827,7 +6863,7 @@ const struct mlx5_ib_profile raw_eth_profile = {
mlx5_ib_stage_flow_db_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_CAPS,
mlx5_ib_stage_caps_init,
- NULL),
+ mlx5_ib_stage_caps_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_NON_DEFAULT_CB,
mlx5_ib_stage_raw_eth_non_default_cb,
NULL),
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b06f32ff5748..23ad949e247f 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -959,6 +959,15 @@ struct mlx5_devx_event_table {
struct xarray event_xa;
};
+struct mlx5_var_table {
+ /* serialize updating the bitmap */
+ struct mutex bitmap_lock;
+ unsigned long *bitmap;
+ u64 hw_start_addr;
+ u32 stride_size;
+ u64 num_var_hw_entries;
+};
+
struct mlx5_ib_dev {
struct ib_device ib_dev;
struct mlx5_core_dev *mdev;
@@ -1013,6 +1022,7 @@ struct mlx5_ib_dev {
struct mlx5_srq_table srq_table;
struct mlx5_async_ctx async_ctx;
struct mlx5_devx_event_table devx_event_table;
+ struct mlx5_var_table var_table;
struct xarray sig_mrs;
};
--
2.20.1
* [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
` (2 preceding siblings ...)
2019-12-12 11:09 ` [PATCH rdma-next 3/5] IB/mlx5: Extend caps stage to handle VAR capabilities Leon Romanovsky
@ 2019-12-12 11:09 ` Leon Romanovsky
2020-01-07 19:36 ` Jason Gunthorpe
2019-12-12 11:09 ` [PATCH rdma-next 5/5] IB/mlx5: Add mmap support for VAR Leon Romanovsky
2020-01-07 19:37 ` [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Jason Gunthorpe
5 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Yishai Hadas <yishaih@mellanox.com>
Introduce the VAR object and its alloc/destroy KABI methods. The internal
implementation uses the IB core API to manage mmap/munmap calls.
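To show how the new KABI is meant to be driven (the userspace side is not part
of this series), below is a minimal sketch built on rdma-core's provider ioctl
helpers (DECLARE_COMMAND_BUFFER, fill_attr_*, execute_ioctl, read_attr_obj);
treat it as an assumption about the eventual libmlx5 plumbing, not as existing
library code:

	/* Sketch: allocate a VAR and read back its mmap parameters */
	DECLARE_COMMAND_BUFFER(cmd, MLX5_IB_OBJECT_VAR,
			       MLX5_IB_METHOD_VAR_OBJ_ALLOC, 4);
	struct ib_uverbs_attr *handle_attr;
	uint64_t mmap_offset;
	uint32_t mmap_length, page_id, obj_handle;
	int ret;

	handle_attr = fill_attr_out_obj(cmd, MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE);
	fill_attr_out_ptr(cmd, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET, &mmap_offset);
	fill_attr_out_ptr(cmd, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH, &mmap_length);
	fill_attr_out_ptr(cmd, MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID, &page_id);

	ret = execute_ioctl(ibctx, cmd);	/* ibctx: struct ibv_context * */
	if (ret)
		return ret;

	obj_handle = read_attr_obj(MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE, handle_attr);
	/* mmap_offset/mmap_length feed mmap() (next patch); page_id is later
	 * written into the VIRTIO_NET_Q object created through DEVX
	 */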
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/hw/mlx5/main.c | 157 +++++++++++++++++++++++
drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +
include/uapi/rdma/mlx5_user_ioctl_cmds.h | 17 +++
3 files changed, 181 insertions(+)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 79a5b8824b9d..873480b07686 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2078,6 +2078,7 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
{
struct mlx5_user_mmap_entry *mentry = to_mmmap(entry);
struct mlx5_ib_dev *dev = to_mdev(entry->ucontext->device);
+ struct mlx5_var_table *var_table = &dev->var_table;
struct mlx5_ib_dm *mdm;
switch (mentry->mmap_flag) {
@@ -2087,6 +2088,12 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
mdm->size);
kfree(mdm);
break;
+ case MLX5_IB_MMAP_TYPE_VAR:
+ mutex_lock(&var_table->bitmap_lock);
+ clear_bit(mentry->page_idx, var_table->bitmap);
+ mutex_unlock(&var_table->bitmap_lock);
+ kfree(mentry);
+ break;
default:
WARN_ON(true);
}
@@ -2255,6 +2262,15 @@ static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
return ret;
}
+static u64 mlx5_entry_to_mmap_offset(struct mlx5_user_mmap_entry *entry)
+{
+ u16 cmd = entry->rdma_entry.start_pgoff >> 16;
+ u16 index = entry->rdma_entry.start_pgoff & 0xFFFF;
+
+ return (((index >> 8) << 16) | (cmd << MLX5_IB_MMAP_CMD_SHIFT) |
+ (index & 0xFF)) << PAGE_SHIFT;
+}
+
static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
{
struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
@@ -6034,6 +6050,145 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
mlx5_nic_vport_disable_roce(dev->mdev);
}
+static int var_obj_cleanup(struct ib_uobject *uobject,
+ enum rdma_remove_reason why,
+ struct uverbs_attr_bundle *attrs)
+{
+ struct mlx5_user_mmap_entry *obj = uobject->object;
+
+ rdma_user_mmap_entry_remove(&obj->rdma_entry);
+ return 0;
+}
+
+static struct mlx5_user_mmap_entry *
+alloc_var_entry(struct mlx5_ib_ucontext *c)
+{
+ struct mlx5_user_mmap_entry *entry;
+ struct mlx5_var_table *var_table;
+ u32 page_idx;
+ int err;
+
+ var_table = &to_mdev(c->ibucontext.device)->var_table;
+ entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return ERR_PTR(-ENOMEM);
+
+ mutex_lock(&var_table->bitmap_lock);
+ page_idx = find_first_zero_bit(var_table->bitmap,
+ var_table->num_var_hw_entries);
+ if (page_idx >= var_table->num_var_hw_entries) {
+ err = -ENOSPC;
+ mutex_unlock(&var_table->bitmap_lock);
+ goto end;
+ }
+
+ set_bit(page_idx, var_table->bitmap);
+ mutex_unlock(&var_table->bitmap_lock);
+
+ entry->address = var_table->hw_start_addr +
+ (page_idx * var_table->stride_size);
+ entry->page_idx = page_idx;
+ entry->mmap_flag = MLX5_IB_MMAP_TYPE_VAR;
+
+ err = rdma_user_mmap_entry_insert_range(
+ &c->ibucontext, &entry->rdma_entry, var_table->stride_size,
+ MLX5_IB_MMAP_OFFSET_START << 16,
+ (MLX5_IB_MMAP_OFFSET_END << 16) + (1UL << 16) - 1);
+ if (err)
+ goto err_insert;
+
+ return entry;
+
+err_insert:
+ mutex_lock(&var_table->bitmap_lock);
+ clear_bit(page_idx, var_table->bitmap);
+ mutex_unlock(&var_table->bitmap_lock);
+end:
+ kfree(entry);
+ return ERR_PTR(err);
+}
+
+static int UVERBS_HANDLER(MLX5_IB_METHOD_VAR_OBJ_ALLOC)(
+ struct uverbs_attr_bundle *attrs)
+{
+ struct ib_uobject *uobj = uverbs_attr_get_uobject(
+ attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE);
+ struct mlx5_ib_ucontext *c;
+ struct mlx5_user_mmap_entry *entry;
+ u64 mmap_offset;
+ u32 length;
+ int err;
+
+ c = to_mucontext(ib_uverbs_get_ucontext(attrs));
+ if (IS_ERR(c))
+ return PTR_ERR(c);
+
+ entry = alloc_var_entry(c);
+ if (IS_ERR(entry))
+ return PTR_ERR(entry);
+
+ mmap_offset = mlx5_entry_to_mmap_offset(entry);
+ length = entry->rdma_entry.npages * PAGE_SIZE;
+ uobj->object = entry;
+
+ err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
+ &mmap_offset, sizeof(mmap_offset));
+ if (err)
+ goto err;
+
+ err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
+ &entry->page_idx, sizeof(entry->page_idx));
+ if (err)
+ goto err;
+
+ err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
+ &length, sizeof(length));
+ if (err)
+ goto err;
+
+ return 0;
+
+err:
+ rdma_user_mmap_entry_remove(&entry->rdma_entry);
+ return err;
+}
+
+DECLARE_UVERBS_NAMED_METHOD(
+ MLX5_IB_METHOD_VAR_OBJ_ALLOC,
+ UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE,
+ MLX5_IB_OBJECT_VAR,
+ UVERBS_ACCESS_NEW,
+ UA_MANDATORY),
+ UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
+ UVERBS_ATTR_TYPE(u32),
+ UA_MANDATORY),
+ UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
+ UVERBS_ATTR_TYPE(u32),
+ UA_MANDATORY),
+ UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
+ UVERBS_ATTR_TYPE(u64),
+ UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_METHOD_DESTROY(
+ MLX5_IB_METHOD_VAR_OBJ_DESTROY,
+ UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_DESTROY_HANDLE,
+ MLX5_IB_OBJECT_VAR,
+ UVERBS_ACCESS_DESTROY,
+ UA_MANDATORY));
+
+DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_VAR,
+ UVERBS_TYPE_ALLOC_IDR(var_obj_cleanup),
+ &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_ALLOC),
+ &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_DESTROY));
+
+static bool var_is_supported(struct ib_device *device)
+{
+ struct mlx5_ib_dev *dev = to_mdev(device);
+
+ return (MLX5_CAP_GEN_64(dev->mdev, general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q);
+}
+
ADD_UVERBS_ATTRIBUTES_SIMPLE(
mlx5_ib_dm,
UVERBS_OBJECT_DM,
@@ -6064,6 +6219,8 @@ static const struct uapi_definition mlx5_ib_defs[] = {
UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION,
&mlx5_ib_flow_action),
UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_DM, &mlx5_ib_dm),
+ UAPI_DEF_CHAIN_OBJ_TREE_NAMED(MLX5_IB_OBJECT_VAR,
+ UAPI_DEF_IS_OBJ_SUPPORTED(var_is_supported)),
{}
};
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 23ad949e247f..489128fe8603 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -71,6 +71,11 @@
#define MLX5_MKEY_PAGE_SHIFT_MASK __mlx5_mask(mkc, log_page_size)
+enum {
+ MLX5_IB_MMAP_OFFSET_START = 9,
+ MLX5_IB_MMAP_OFFSET_END = 255,
+};
+
enum {
MLX5_IB_MMAP_CMD_SHIFT = 8,
MLX5_IB_MMAP_CMD_MASK = 0xff,
@@ -120,6 +125,7 @@ enum {
enum mlx5_ib_mmap_type {
MLX5_IB_MMAP_TYPE_MEMIC = 1,
+ MLX5_IB_MMAP_TYPE_VAR = 2,
};
#define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) \
@@ -563,6 +569,7 @@ struct mlx5_user_mmap_entry {
struct rdma_user_mmap_entry rdma_entry;
u8 mmap_flag;
u64 address;
+ u32 page_idx;
};
struct mlx5_ib_dm {
diff --git a/include/uapi/rdma/mlx5_user_ioctl_cmds.h b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
index 20d88307f75f..afe7da6f2b8e 100644
--- a/include/uapi/rdma/mlx5_user_ioctl_cmds.h
+++ b/include/uapi/rdma/mlx5_user_ioctl_cmds.h
@@ -115,6 +115,22 @@ enum mlx5_ib_devx_obj_methods {
MLX5_IB_METHOD_DEVX_OBJ_ASYNC_QUERY,
};
+enum mlx5_ib_var_alloc_attrs {
+ MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+ MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
+ MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
+ MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
+};
+
+enum mlx5_ib_var_obj_destroy_attrs {
+ MLX5_IB_ATTR_VAR_OBJ_DESTROY_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
+};
+
+enum mlx5_ib_var_obj_methods {
+ MLX5_IB_METHOD_VAR_OBJ_ALLOC = (1U << UVERBS_ID_NS_SHIFT),
+ MLX5_IB_METHOD_VAR_OBJ_DESTROY,
+};
+
enum mlx5_ib_devx_umem_reg_attrs {
MLX5_IB_ATTR_DEVX_UMEM_REG_HANDLE = (1U << UVERBS_ID_NS_SHIFT),
MLX5_IB_ATTR_DEVX_UMEM_REG_ADDR,
@@ -156,6 +172,7 @@ enum mlx5_ib_objects {
MLX5_IB_OBJECT_FLOW_MATCHER,
MLX5_IB_OBJECT_DEVX_ASYNC_CMD_FD,
MLX5_IB_OBJECT_DEVX_ASYNC_EVENT_FD,
+ MLX5_IB_OBJECT_VAR,
};
enum mlx5_ib_flow_matcher_create_attrs {
--
2.20.1
* [PATCH rdma-next 5/5] IB/mlx5: Add mmap support for VAR
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
` (3 preceding siblings ...)
2019-12-12 11:09 ` [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods Leon Romanovsky
@ 2019-12-12 11:09 ` Leon Romanovsky
2020-01-07 19:37 ` [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Jason Gunthorpe
5 siblings, 0 replies; 11+ messages in thread
From: Leon Romanovsky @ 2019-12-12 11:09 UTC (permalink / raw)
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Shahaf Shuler, Yishai Hadas,
Saeed Mahameed, linux-netdev
From: Yishai Hadas <yishaih@mellanox.com>
Add mmap support for VAR. It uses the 'offset' command mode and the IB core
APIs to find the previously allocated mmap entry.
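For completeness, a hedged userspace sketch of the resulting flow, where
'ibctx' is an open struct ibv_context and mmap_offset/mmap_length are the
values returned by the VAR alloc method from the previous patch:

	/* Sketch only: map the VAR doorbell page; the kernel resolves the
	 * offset back to the mmap entry and maps it non-cached
	 */
	void *var_db = mmap(NULL, mmap_length, PROT_READ | PROT_WRITE,
			    MAP_SHARED, ibctx->cmd_fd, mmap_offset);
	if (var_db == MAP_FAILED)
		return -errno;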
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
---
drivers/infiniband/hw/mlx5/main.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 873480b07686..52bc86ab9490 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2253,7 +2253,10 @@ static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
mentry = to_mmmap(entry);
pfn = (mentry->address >> PAGE_SHIFT);
- prot = pgprot_writecombine(vma->vm_page_prot);
+ if (mentry->mmap_flag == MLX5_IB_MMAP_TYPE_VAR)
+ prot = pgprot_noncached(vma->vm_page_prot);
+ else
+ prot = pgprot_writecombine(vma->vm_page_prot);
ret = rdma_user_mmap_io(ucontext, vma, pfn,
entry->npages * PAGE_SIZE,
prot,
--
2.20.1
* Re: [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods
2019-12-12 11:09 ` [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods Leon Romanovsky
@ 2020-01-07 19:36 ` Jason Gunthorpe
2020-01-08 8:12 ` Yishai Hadas
0 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2020-01-07 19:36 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list, Shahaf Shuler,
Yishai Hadas, Saeed Mahameed, linux-netdev
On Thu, Dec 12, 2019 at 01:09:27PM +0200, Leon Romanovsky wrote:
> From: Yishai Hadas <yishaih@mellanox.com>
>
> Introduce VAR object and its alloc/destroy KABI methods. The internal
> implementation uses the IB core API to manage mmap/munamp calls.
>
> Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
> drivers/infiniband/hw/mlx5/main.c | 157 +++++++++++++++++++++++
> drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +
> include/uapi/rdma/mlx5_user_ioctl_cmds.h | 17 +++
> 3 files changed, 181 insertions(+)
>
> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 79a5b8824b9d..873480b07686 100644
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -2078,6 +2078,7 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
> {
> struct mlx5_user_mmap_entry *mentry = to_mmmap(entry);
> struct mlx5_ib_dev *dev = to_mdev(entry->ucontext->device);
> + struct mlx5_var_table *var_table = &dev->var_table;
> struct mlx5_ib_dm *mdm;
>
> switch (mentry->mmap_flag) {
> @@ -2087,6 +2088,12 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
> mdm->size);
> kfree(mdm);
> break;
> + case MLX5_IB_MMAP_TYPE_VAR:
> + mutex_lock(&var_table->bitmap_lock);
> + clear_bit(mentry->page_idx, var_table->bitmap);
> + mutex_unlock(&var_table->bitmap_lock);
> + kfree(mentry);
> + break;
> default:
> WARN_ON(true);
> }
> @@ -2255,6 +2262,15 @@ static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
> return ret;
> }
>
> +static u64 mlx5_entry_to_mmap_offset(struct mlx5_user_mmap_entry *entry)
> +{
> + u16 cmd = entry->rdma_entry.start_pgoff >> 16;
> + u16 index = entry->rdma_entry.start_pgoff & 0xFFFF;
> +
> + return (((index >> 8) << 16) | (cmd << MLX5_IB_MMAP_CMD_SHIFT) |
> + (index & 0xFF)) << PAGE_SHIFT;
> +}
> +
> static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
> {
> struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
> @@ -6034,6 +6050,145 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
> mlx5_nic_vport_disable_roce(dev->mdev);
> }
>
> +static int var_obj_cleanup(struct ib_uobject *uobject,
> + enum rdma_remove_reason why,
> + struct uverbs_attr_bundle *attrs)
> +{
> + struct mlx5_user_mmap_entry *obj = uobject->object;
> +
> + rdma_user_mmap_entry_remove(&obj->rdma_entry);
> + return 0;
> +}
> +
> +static struct mlx5_user_mmap_entry *
> +alloc_var_entry(struct mlx5_ib_ucontext *c)
> +{
> + struct mlx5_user_mmap_entry *entry;
> + struct mlx5_var_table *var_table;
> + u32 page_idx;
> + int err;
> +
> + var_table = &to_mdev(c->ibucontext.device)->var_table;
> + entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> + if (!entry)
> + return ERR_PTR(-ENOMEM);
> +
> + mutex_lock(&var_table->bitmap_lock);
> + page_idx = find_first_zero_bit(var_table->bitmap,
> + var_table->num_var_hw_entries);
> + if (page_idx >= var_table->num_var_hw_entries) {
> + err = -ENOSPC;
> + mutex_unlock(&var_table->bitmap_lock);
> + goto end;
> + }
> +
> + set_bit(page_idx, var_table->bitmap);
> + mutex_unlock(&var_table->bitmap_lock);
> +
> + entry->address = var_table->hw_start_addr +
> + (page_idx * var_table->stride_size);
> + entry->page_idx = page_idx;
> + entry->mmap_flag = MLX5_IB_MMAP_TYPE_VAR;
> +
> + err = rdma_user_mmap_entry_insert_range(
> + &c->ibucontext, &entry->rdma_entry, var_table->stride_size,
> + MLX5_IB_MMAP_OFFSET_START << 16,
> + (MLX5_IB_MMAP_OFFSET_END << 16) + (1UL << 16) - 1);
> + if (err)
> + goto err_insert;
> +
> + return entry;
> +
> +err_insert:
> + mutex_lock(&var_table->bitmap_lock);
> + clear_bit(page_idx, var_table->bitmap);
> + mutex_unlock(&var_table->bitmap_lock);
> +end:
> + kfree(entry);
> + return ERR_PTR(err);
> +}
> +
> +static int UVERBS_HANDLER(MLX5_IB_METHOD_VAR_OBJ_ALLOC)(
> + struct uverbs_attr_bundle *attrs)
> +{
> + struct ib_uobject *uobj = uverbs_attr_get_uobject(
> + attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE);
> + struct mlx5_ib_ucontext *c;
> + struct mlx5_user_mmap_entry *entry;
> + u64 mmap_offset;
> + u32 length;
> + int err;
> +
> + c = to_mucontext(ib_uverbs_get_ucontext(attrs));
> + if (IS_ERR(c))
> + return PTR_ERR(c);
> +
> + entry = alloc_var_entry(c);
> + if (IS_ERR(entry))
> + return PTR_ERR(entry);
> +
> + mmap_offset = mlx5_entry_to_mmap_offset(entry);
> + length = entry->rdma_entry.npages * PAGE_SIZE;
> + uobj->object = entry;
> +
> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
> + &mmap_offset, sizeof(mmap_offset));
> + if (err)
> + goto err;
> +
> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
> + &entry->page_idx, sizeof(entry->page_idx));
> + if (err)
> + goto err;
> +
> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
> + &length, sizeof(length));
> + if (err)
> + goto err;
> +
> + return 0;
> +
> +err:
> + rdma_user_mmap_entry_remove(&entry->rdma_entry);
> + return err;
> +}
> +
> +DECLARE_UVERBS_NAMED_METHOD(
> + MLX5_IB_METHOD_VAR_OBJ_ALLOC,
> + UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE,
> + MLX5_IB_OBJECT_VAR,
> + UVERBS_ACCESS_NEW,
> + UA_MANDATORY),
> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
> + UVERBS_ATTR_TYPE(u32),
> + UA_MANDATORY),
> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
> + UVERBS_ATTR_TYPE(u32),
> + UA_MANDATORY),
> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
> + UVERBS_ATTR_TYPE(u64),
> + UA_MANDATORY));
> +
> +DECLARE_UVERBS_NAMED_METHOD_DESTROY(
> + MLX5_IB_METHOD_VAR_OBJ_DESTROY,
> + UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_DESTROY_HANDLE,
> + MLX5_IB_OBJECT_VAR,
> + UVERBS_ACCESS_DESTROY,
> + UA_MANDATORY));
> +
> +DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_VAR,
> + UVERBS_TYPE_ALLOC_IDR(var_obj_cleanup),
> + &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_ALLOC),
> + &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_DESTROY));
> +
> +static bool var_is_supported(struct ib_device *device)
> +{
> + struct mlx5_ib_dev *dev = to_mdev(device);
> +
> + return (MLX5_CAP_GEN_64(dev->mdev, general_obj_types) &
> + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q);
> +}
> +
> ADD_UVERBS_ATTRIBUTES_SIMPLE(
> mlx5_ib_dm,
> UVERBS_OBJECT_DM,
> @@ -6064,6 +6219,8 @@ static const struct uapi_definition mlx5_ib_defs[] = {
> UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION,
> &mlx5_ib_flow_action),
> UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_DM, &mlx5_ib_dm),
> + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(MLX5_IB_OBJECT_VAR,
> + UAPI_DEF_IS_OBJ_SUPPORTED(var_is_supported)),
> {}
> };
>
> diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> index 23ad949e247f..489128fe8603 100644
> +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
> @@ -71,6 +71,11 @@
>
> #define MLX5_MKEY_PAGE_SHIFT_MASK __mlx5_mask(mkc, log_page_size)
>
> +enum {
> + MLX5_IB_MMAP_OFFSET_START = 9,
> + MLX5_IB_MMAP_OFFSET_END = 255,
> +};
> +
> enum {
> MLX5_IB_MMAP_CMD_SHIFT = 8,
> MLX5_IB_MMAP_CMD_MASK = 0xff,
> @@ -120,6 +125,7 @@ enum {
>
> enum mlx5_ib_mmap_type {
> MLX5_IB_MMAP_TYPE_MEMIC = 1,
> + MLX5_IB_MMAP_TYPE_VAR = 2,
> };
>
> #define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) \
> @@ -563,6 +569,7 @@ struct mlx5_user_mmap_entry {
> struct rdma_user_mmap_entry rdma_entry;
> u8 mmap_flag;
> u64 address;
> + u32 page_idx;
Why are we storing this in the global struct when it is never read
except by the caller of alloc_var_entry()? Return it from
alloc_var_entry?
Also, the final patch in the series should be here, as at this point
mmap will succeed but return the wrong cacheability flags.
Since Leon is away I can fix these two things if you agree.
Jason
* Re: [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
` (4 preceding siblings ...)
2019-12-12 11:09 ` [PATCH rdma-next 5/5] IB/mlx5: Add mmap support for VAR Leon Romanovsky
@ 2020-01-07 19:37 ` Jason Gunthorpe
2020-01-10 18:30 ` Leon Romanovsky
5 siblings, 1 reply; 11+ messages in thread
From: Jason Gunthorpe @ 2020-01-07 19:37 UTC (permalink / raw)
To: Leon Romanovsky, Saeed Mahameed
Cc: Doug Ledford, Leon Romanovsky, RDMA mailing list, Shahaf Shuler,
Yishai Hadas, linux-netdev
On Thu, Dec 12, 2019 at 01:09:23PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@mellanox.com>
>
> Hi,
>
> In this series, we introduce VIRTIO_NET_Q HW offload capability, so SW will
> be able to create special general object with relevant virtqueue properties.
>
> This series is based on -rc patches:
> https://lore.kernel.org/linux-rdma/20191212100237.330654-1-leon@kernel.org
>
> Thanks
>
> Yishai Hadas (5):
> net/mlx5: Add Virtio Emulation related device capabilities
> net/mlx5: Expose vDPA emulation device capabilities
This series looks OK enough to me. Saeed, can you update the shared
branch with the two patches?
https://patchwork.kernel.org/patch/11287947/
https://patchwork.kernel.org/patch/11287955/
Thanks,
Jason
* Re: [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods
2020-01-07 19:36 ` Jason Gunthorpe
@ 2020-01-08 8:12 ` Yishai Hadas
0 siblings, 0 replies; 11+ messages in thread
From: Yishai Hadas @ 2020-01-08 8:12 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Leon Romanovsky, Doug Ledford, Leon Romanovsky,
RDMA mailing list, Shahaf Shuler, Yishai Hadas, Saeed Mahameed,
linux-netdev
On 1/7/2020 9:36 PM, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 01:09:27PM +0200, Leon Romanovsky wrote:
>> From: Yishai Hadas <yishaih@mellanox.com>
>>
>> Introduce VAR object and its alloc/destroy KABI methods. The internal
>> implementation uses the IB core API to manage mmap/munamp calls.
>>
>> Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
>> Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
>> drivers/infiniband/hw/mlx5/main.c | 157 +++++++++++++++++++++++
>> drivers/infiniband/hw/mlx5/mlx5_ib.h | 7 +
>> include/uapi/rdma/mlx5_user_ioctl_cmds.h | 17 +++
>> 3 files changed, 181 insertions(+)
>>
>> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
>> index 79a5b8824b9d..873480b07686 100644
>> +++ b/drivers/infiniband/hw/mlx5/main.c
>> @@ -2078,6 +2078,7 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
>> {
>> struct mlx5_user_mmap_entry *mentry = to_mmmap(entry);
>> struct mlx5_ib_dev *dev = to_mdev(entry->ucontext->device);
>> + struct mlx5_var_table *var_table = &dev->var_table;
>> struct mlx5_ib_dm *mdm;
>>
>> switch (mentry->mmap_flag) {
>> @@ -2087,6 +2088,12 @@ static void mlx5_ib_mmap_free(struct rdma_user_mmap_entry *entry)
>> mdm->size);
>> kfree(mdm);
>> break;
>> + case MLX5_IB_MMAP_TYPE_VAR:
>> + mutex_lock(&var_table->bitmap_lock);
>> + clear_bit(mentry->page_idx, var_table->bitmap);
>> + mutex_unlock(&var_table->bitmap_lock);
>> + kfree(mentry);
>> + break;
>> default:
>> WARN_ON(true);
>> }
>> @@ -2255,6 +2262,15 @@ static int mlx5_ib_mmap_offset(struct mlx5_ib_dev *dev,
>> return ret;
>> }
>>
>> +static u64 mlx5_entry_to_mmap_offset(struct mlx5_user_mmap_entry *entry)
>> +{
>> + u16 cmd = entry->rdma_entry.start_pgoff >> 16;
>> + u16 index = entry->rdma_entry.start_pgoff & 0xFFFF;
>> +
>> + return (((index >> 8) << 16) | (cmd << MLX5_IB_MMAP_CMD_SHIFT) |
>> + (index & 0xFF)) << PAGE_SHIFT;
>> +}
>> +
>> static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
>> {
>> struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
>> @@ -6034,6 +6050,145 @@ static void mlx5_ib_cleanup_multiport_master(struct mlx5_ib_dev *dev)
>> mlx5_nic_vport_disable_roce(dev->mdev);
>> }
>>
>> +static int var_obj_cleanup(struct ib_uobject *uobject,
>> + enum rdma_remove_reason why,
>> + struct uverbs_attr_bundle *attrs)
>> +{
>> + struct mlx5_user_mmap_entry *obj = uobject->object;
>> +
>> + rdma_user_mmap_entry_remove(&obj->rdma_entry);
>> + return 0;
>> +}
>> +
>> +static struct mlx5_user_mmap_entry *
>> +alloc_var_entry(struct mlx5_ib_ucontext *c)
>> +{
>> + struct mlx5_user_mmap_entry *entry;
>> + struct mlx5_var_table *var_table;
>> + u32 page_idx;
>> + int err;
>> +
>> + var_table = &to_mdev(c->ibucontext.device)->var_table;
>> + entry = kzalloc(sizeof(*entry), GFP_KERNEL);
>> + if (!entry)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + mutex_lock(&var_table->bitmap_lock);
>> + page_idx = find_first_zero_bit(var_table->bitmap,
>> + var_table->num_var_hw_entries);
>> + if (page_idx >= var_table->num_var_hw_entries) {
>> + err = -ENOSPC;
>> + mutex_unlock(&var_table->bitmap_lock);
>> + goto end;
>> + }
>> +
>> + set_bit(page_idx, var_table->bitmap);
>> + mutex_unlock(&var_table->bitmap_lock);
>> +
>> + entry->address = var_table->hw_start_addr +
>> + (page_idx * var_table->stride_size);
>> + entry->page_idx = page_idx;
>> + entry->mmap_flag = MLX5_IB_MMAP_TYPE_VAR;
>> +
>> + err = rdma_user_mmap_entry_insert_range(
>> + &c->ibucontext, &entry->rdma_entry, var_table->stride_size,
>> + MLX5_IB_MMAP_OFFSET_START << 16,
>> + (MLX5_IB_MMAP_OFFSET_END << 16) + (1UL << 16) - 1);
>> + if (err)
>> + goto err_insert;
>> +
>> + return entry;
>> +
>> +err_insert:
>> + mutex_lock(&var_table->bitmap_lock);
>> + clear_bit(page_idx, var_table->bitmap);
>> + mutex_unlock(&var_table->bitmap_lock);
>> +end:
>> + kfree(entry);
>> + return ERR_PTR(err);
>> +}
>> +
>> +static int UVERBS_HANDLER(MLX5_IB_METHOD_VAR_OBJ_ALLOC)(
>> + struct uverbs_attr_bundle *attrs)
>> +{
>> + struct ib_uobject *uobj = uverbs_attr_get_uobject(
>> + attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE);
>> + struct mlx5_ib_ucontext *c;
>> + struct mlx5_user_mmap_entry *entry;
>> + u64 mmap_offset;
>> + u32 length;
>> + int err;
>> +
>> + c = to_mucontext(ib_uverbs_get_ucontext(attrs));
>> + if (IS_ERR(c))
>> + return PTR_ERR(c);
>> +
>> + entry = alloc_var_entry(c);
>> + if (IS_ERR(entry))
>> + return PTR_ERR(entry);
>> +
>> + mmap_offset = mlx5_entry_to_mmap_offset(entry);
>> + length = entry->rdma_entry.npages * PAGE_SIZE;
>> + uobj->object = entry;
>> +
>> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
>> + &mmap_offset, sizeof(mmap_offset));
>> + if (err)
>> + goto err;
>> +
>> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
>> + &entry->page_idx, sizeof(entry->page_idx));
>> + if (err)
>> + goto err;
>> +
>> + err = uverbs_copy_to(attrs, MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
>> + &length, sizeof(length));
>> + if (err)
>> + goto err;
>> +
>> + return 0;
>> +
>> +err:
>> + rdma_user_mmap_entry_remove(&entry->rdma_entry);
>> + return err;
>> +}
>> +
>> +DECLARE_UVERBS_NAMED_METHOD(
>> + MLX5_IB_METHOD_VAR_OBJ_ALLOC,
>> + UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_ALLOC_HANDLE,
>> + MLX5_IB_OBJECT_VAR,
>> + UVERBS_ACCESS_NEW,
>> + UA_MANDATORY),
>> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_PAGE_ID,
>> + UVERBS_ATTR_TYPE(u32),
>> + UA_MANDATORY),
>> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_LENGTH,
>> + UVERBS_ATTR_TYPE(u32),
>> + UA_MANDATORY),
>> + UVERBS_ATTR_PTR_OUT(MLX5_IB_ATTR_VAR_OBJ_ALLOC_MMAP_OFFSET,
>> + UVERBS_ATTR_TYPE(u64),
>> + UA_MANDATORY));
>> +
>> +DECLARE_UVERBS_NAMED_METHOD_DESTROY(
>> + MLX5_IB_METHOD_VAR_OBJ_DESTROY,
>> + UVERBS_ATTR_IDR(MLX5_IB_ATTR_VAR_OBJ_DESTROY_HANDLE,
>> + MLX5_IB_OBJECT_VAR,
>> + UVERBS_ACCESS_DESTROY,
>> + UA_MANDATORY));
>> +
>> +DECLARE_UVERBS_NAMED_OBJECT(MLX5_IB_OBJECT_VAR,
>> + UVERBS_TYPE_ALLOC_IDR(var_obj_cleanup),
>> + &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_ALLOC),
>> + &UVERBS_METHOD(MLX5_IB_METHOD_VAR_OBJ_DESTROY));
>> +
>> +static bool var_is_supported(struct ib_device *device)
>> +{
>> + struct mlx5_ib_dev *dev = to_mdev(device);
>> +
>> + return (MLX5_CAP_GEN_64(dev->mdev, general_obj_types) &
>> + MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_NET_Q);
>> +}
>> +
>> ADD_UVERBS_ATTRIBUTES_SIMPLE(
>> mlx5_ib_dm,
>> UVERBS_OBJECT_DM,
>> @@ -6064,6 +6219,8 @@ static const struct uapi_definition mlx5_ib_defs[] = {
>> UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_FLOW_ACTION,
>> &mlx5_ib_flow_action),
>> UAPI_DEF_CHAIN_OBJ_TREE(UVERBS_OBJECT_DM, &mlx5_ib_dm),
>> + UAPI_DEF_CHAIN_OBJ_TREE_NAMED(MLX5_IB_OBJECT_VAR,
>> + UAPI_DEF_IS_OBJ_SUPPORTED(var_is_supported)),
>> {}
>> };
>>
>> diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
>> index 23ad949e247f..489128fe8603 100644
>> +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
>> @@ -71,6 +71,11 @@
>>
>> #define MLX5_MKEY_PAGE_SHIFT_MASK __mlx5_mask(mkc, log_page_size)
>>
>> +enum {
>> + MLX5_IB_MMAP_OFFSET_START = 9,
>> + MLX5_IB_MMAP_OFFSET_END = 255,
>> +};
>> +
>> enum {
>> MLX5_IB_MMAP_CMD_SHIFT = 8,
>> MLX5_IB_MMAP_CMD_MASK = 0xff,
>> @@ -120,6 +125,7 @@ enum {
>>
>> enum mlx5_ib_mmap_type {
>> MLX5_IB_MMAP_TYPE_MEMIC = 1,
>> + MLX5_IB_MMAP_TYPE_VAR = 2,
>> };
>>
>> #define MLX5_LOG_SW_ICM_BLOCK_SIZE(dev) \
>> @@ -563,6 +569,7 @@ struct mlx5_user_mmap_entry {
>> struct rdma_user_mmap_entry rdma_entry;
>> u8 mmap_flag;
>> u64 address;
>> + u32 page_idx;
>
> Why are we storing this in the global struct when it is never read
> except by the caller of alloc_var_entry()? Return it from
> alloc_var_entry?
>
It's required as part of mlx5_ib_mmap_free() to clear the matching bitmap
entry of the device VAR table; see above in this patch.
> Also the final patch in the series should be here as at this point
> mmap will succeed but return the wrong cachability flags.
>
Right, let's squash it into this patch.
> Since Leon is away I can fix this two things if you agree.
Yes, thanks.
Yishai
* Re: [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload
2020-01-07 19:37 ` [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Jason Gunthorpe
@ 2020-01-10 18:30 ` Leon Romanovsky
2020-01-12 23:54 ` Jason Gunthorpe
0 siblings, 1 reply; 11+ messages in thread
From: Leon Romanovsky @ 2020-01-10 18:30 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Saeed Mahameed, Doug Ledford, RDMA mailing list, Shahaf Shuler,
Yishai Hadas, linux-netdev
On Tue, Jan 07, 2020 at 03:37:44PM -0400, Jason Gunthorpe wrote:
> On Thu, Dec 12, 2019 at 01:09:23PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky <leonro@mellanox.com>
> >
> > Hi,
> >
> > In this series, we introduce VIRTIO_NET_Q HW offload capability, so SW will
> > be able to create special general object with relevant virtqueue properties.
> >
> > This series is based on -rc patches:
> > https://lore.kernel.org/linux-rdma/20191212100237.330654-1-leon@kernel.org
> >
> > Thanks
> >
> > Yishai Hadas (5):
> > net/mlx5: Add Virtio Emulation related device capabilities
> > net/mlx5: Expose vDPA emulation device capabilities
>
> This series looks OK enough to me. Saeed can you update the share
> branch with the two patches?
Merged, thanks,
ca1992c62cad net/mlx5: Expose vDPA emulation device capabilities
90fbca595243 net/mlx5: Add Virtio Emulation related device capabilities
* Re: [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload
2020-01-10 18:30 ` Leon Romanovsky
@ 2020-01-12 23:54 ` Jason Gunthorpe
0 siblings, 0 replies; 11+ messages in thread
From: Jason Gunthorpe @ 2020-01-12 23:54 UTC (permalink / raw)
To: Leon Romanovsky
Cc: Saeed Mahameed, Doug Ledford, RDMA mailing list, Shahaf Shuler,
Yishai Hadas, linux-netdev
On Fri, Jan 10, 2020 at 08:30:41PM +0200, Leon Romanovsky wrote:
> On Tue, Jan 07, 2020 at 03:37:44PM -0400, Jason Gunthorpe wrote:
> > On Thu, Dec 12, 2019 at 01:09:23PM +0200, Leon Romanovsky wrote:
> > > From: Leon Romanovsky <leonro@mellanox.com>
> > >
> > > Hi,
> > >
> > > In this series, we introduce VIRTIO_NET_Q HW offload capability, so SW will
> > > be able to create special general object with relevant virtqueue properties.
> > >
> > > This series is based on -rc patches:
> > > https://lore.kernel.org/linux-rdma/20191212100237.330654-1-leon@kernel.org
> > >
> > > Thanks
> > >
> > > Yishai Hadas (5):
> > > net/mlx5: Add Virtio Emulation related device capabilities
> > > net/mlx5: Expose vDPA emulation device capabilities
> >
> > This series looks OK enough to me. Saeed can you update the share
> > branch with the two patches?
>
> Merged, thanks,
>
> ca1992c62cad net/mlx5: Expose vDPA emulation device capabilities
> 90fbca595243 net/mlx5: Add Virtio Emulation related device capabilities
Done, thanks
Jason
Thread overview: 11+ messages
2019-12-12 11:09 [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 1/5] net/mlx5: Add Virtio Emulation related device capabilities Leon Romanovsky
2019-12-12 11:09 ` [PATCH mlx5-next 2/5] net/mlx5: Expose vDPA emulation " Leon Romanovsky
2019-12-12 11:09 ` [PATCH rdma-next 3/5] IB/mlx5: Extend caps stage to handle VAR capabilities Leon Romanovsky
2019-12-12 11:09 ` [PATCH rdma-next 4/5] IB/mlx5: Introduce VAR object and its alloc/destroy methods Leon Romanovsky
2020-01-07 19:36 ` Jason Gunthorpe
2020-01-08 8:12 ` Yishai Hadas
2019-12-12 11:09 ` [PATCH rdma-next 5/5] IB/mlx5: Add mmap support for VAR Leon Romanovsky
2020-01-07 19:37 ` [PATCH rdma-next 0/5] VIRTIO_NET Emulation Offload Jason Gunthorpe
2020-01-10 18:30 ` Leon Romanovsky
2020-01-12 23:54 ` Jason Gunthorpe