* [PATCH for-next 0/6] IB/core: Small fixes and refactors for IDR locking series
@ 2017-04-18  9:03 Matan Barak
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

Hi Doug,

This series comes after the comments Sean and Jason sent on the linux-rdma
mailing list for the "Change IDR usage and locking in uverbs" series.
It is focused on small refactors, beautification, and fixes
for that series. It doesn't aim to change anything logically.

Thanks,
Matan

Matan Barak (6):
  IB/core: Rename write flag to exclusive in rdma_core
  IB/core: Don't pass the lock state to _rdma_remove_commit_uobject
  IB/core: Nullify ib_uobject during allocation
  IB/core: A small refactor in destroy WQ handler
  IB/core: Don't use is_async in event files to infer events size
  IB/core: Rename uverbs event file structure

 drivers/infiniband/core/rdma_core.c        |  86 ++++++++---------
 drivers/infiniband/core/uverbs.h           |  21 +++--
 drivers/infiniband/core/uverbs_cmd.c       |  16 +---
 drivers/infiniband/core/uverbs_main.c      | 146 ++++++++++++++---------------
 drivers/infiniband/core/uverbs_std_types.c |  20 ++--
 include/rdma/uverbs_types.h                |  33 +++----
 6 files changed, 158 insertions(+), 164 deletions(-)

-- 
1.8.3.1


* [PATCH for-next 1/6] IB/core: Rename write flag to exclusive in rdma_core
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18  9:03   ` Matan Barak
       [not found]     ` <1492506222-28999-2-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 2/6] IB/core: Don't pass the lock state to _rdma_remove_commit_uobject Matan Barak
                     ` (5 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

We rename the "write" flag to "exclusive", as it is used for both
WRITE and DESTROY actions.
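
To make the shared vs. exclusive semantics concrete, here is a rough
userspace sketch of the counter scheme (plain C11 atomics and made-up
names, for illustration only; the kernel code in the diff below uses
atomic_t and __atomic_add_unless() instead):

#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>

struct uobj_sketch {
	atomic_int usecnt;	/* > 0: shared users, -1: exclusive holder, 0: free */
};

static int try_lock(struct uobj_sketch *u, bool exclusive)
{
	if (!exclusive) {
		/* Shared access: increment the counter unless it is -1. */
		int cur = atomic_load(&u->usecnt);

		while (cur != -1) {
			if (atomic_compare_exchange_weak(&u->usecnt, &cur, cur + 1))
				return 0;
		}
		return -EBUSY;
	}

	/* Exclusive access: only succeeds when nobody holds the object. */
	int expected = 0;

	return atomic_compare_exchange_strong(&u->usecnt, &expected, -1) ? 0 : -EBUSY;
}

static void unlock_obj(struct uobj_sketch *u, bool exclusive)
{
	if (!exclusive)
		atomic_fetch_sub(&u->usecnt, 1);	/* drop one shared reference */
	else
		atomic_store(&u->usecnt, 0);		/* the single exclusive holder */
}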

Fixes: 3832125624b7 ('IB/core: Add support for idr types')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/rdma_core.c | 60 +++++++++++++++++++------------------
 include/rdma/uverbs_types.h         | 33 ++++++++++----------
 2 files changed, 48 insertions(+), 45 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index e5bdf7f..88d1e59 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -44,7 +44,7 @@ void uverbs_uobject_get(struct ib_uobject *uobject)
 	kref_get(&uobject->ref);
 }
 
-static void uverbs_uobject_put_ref(struct kref *ref)
+static void uverbs_uobject_free(struct kref *ref)
 {
 	struct ib_uobject *uobj =
 		container_of(ref, struct ib_uobject, ref);
@@ -57,21 +57,23 @@ static void uverbs_uobject_put_ref(struct kref *ref)
 
 void uverbs_uobject_put(struct ib_uobject *uobject)
 {
-	kref_put(&uobject->ref, uverbs_uobject_put_ref);
+	kref_put(&uobject->ref, uverbs_uobject_free);
 }
 
-static int uverbs_try_lock_object(struct ib_uobject *uobj, bool write)
+static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive)
 {
 	/*
-	 * When a read is required, we use a positive counter. Each read
-	 * request checks that the value != -1 and increment it. Write
-	 * requires an exclusive access, thus we check that the counter is
-	 * zero (nobody claimed this object) and we set it to -1.
-	 * Releasing a read lock is done by simply decreasing the counter.
-	 * As for writes, since only a single write is permitted, setting
-	 * it to zero is enough for releasing it.
+	 * When a shared access is required, we use a positive counter. Each
+	 * shared access request checks that the value != -1 and increment it.
+	 * Exclusive access is required for operations like write or destroy.
+	 * In exclusive access mode, we check that the counter is zero (nobody
+	 * claimed this object) and we set it to -1. Releasing a shared access
+	 * lock is done simply by decreasing the counter. As for exclusive
+	 * access locks, since only a single one of them is allowed
+	 * concurrently, setting the counter to zero is enough for releasing
+	 * this lock.
 	 */
-	if (!write)
+	if (!exclusive)
 		return __atomic_add_unless(&uobj->usecnt, 1, -1) == -1 ?
 			-EBUSY : 0;
 
@@ -135,7 +137,7 @@ static void uverbs_idr_remove_uobj(struct ib_uobject *uobj)
 /* Returns the ib_uobject or an error. The caller should check for IS_ERR. */
 static struct ib_uobject *lookup_get_idr_uobject(const struct uverbs_obj_type *type,
 						 struct ib_ucontext *ucontext,
-						 int id, bool write)
+						 int id, bool exclusive)
 {
 	struct ib_uobject *uobj;
 
@@ -155,14 +157,14 @@ static struct ib_uobject *lookup_get_idr_uobject(const struct uverbs_obj_type *t
 
 static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *type,
 						struct ib_ucontext *ucontext,
-						int id, bool write)
+						int id, bool exclusive)
 {
 	struct file *f;
 	struct ib_uobject *uobject;
 	const struct uverbs_obj_fd_type *fd_type =
 		container_of(type, struct uverbs_obj_fd_type, type);
 
-	if (write)
+	if (exclusive)
 		return ERR_PTR(-EOPNOTSUPP);
 
 	f = fget(id);
@@ -186,12 +188,12 @@ static struct ib_uobject *lookup_get_fd_uobject(const struct uverbs_obj_type *ty
 
 struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type,
 					   struct ib_ucontext *ucontext,
-					   int id, bool write)
+					   int id, bool exclusive)
 {
 	struct ib_uobject *uobj;
 	int ret;
 
-	uobj = type->type_class->lookup_get(type, ucontext, id, write);
+	uobj = type->type_class->lookup_get(type, ucontext, id, exclusive);
 	if (IS_ERR(uobj))
 		return uobj;
 
@@ -200,7 +202,7 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type,
 		goto free;
 	}
 
-	ret = uverbs_try_lock_object(uobj, write);
+	ret = uverbs_try_lock_object(uobj, exclusive);
 	if (ret) {
 		WARN(ucontext->cleanup_reason,
 		     "ib_uverbs: Trying to lookup_get while cleanup context\n");
@@ -209,7 +211,7 @@ struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type,
 
 	return uobj;
 free:
-	uobj->type->type_class->lookup_put(uobj, write);
+	uobj->type->type_class->lookup_put(uobj, exclusive);
 	uverbs_uobject_put(uobj);
 	return ERR_PTR(ret);
 }
@@ -350,10 +352,10 @@ static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
 	return ret;
 }
 
-static void lockdep_check(struct ib_uobject *uobj, bool write)
+static void lockdep_check(struct ib_uobject *uobj, bool exclusive)
 {
 #ifdef CONFIG_LOCKDEP
-	if (write)
+	if (exclusive)
 		WARN_ON(atomic_read(&uobj->usecnt) > 0);
 	else
 		WARN_ON(atomic_read(&uobj->usecnt) == -1);
@@ -465,29 +467,29 @@ void rdma_alloc_abort_uobject(struct ib_uobject *uobj)
 	uobj->type->type_class->alloc_abort(uobj);
 }
 
-static void lookup_put_idr_uobject(struct ib_uobject *uobj, bool write)
+static void lookup_put_idr_uobject(struct ib_uobject *uobj, bool exclusive)
 {
 }
 
-static void lookup_put_fd_uobject(struct ib_uobject *uobj, bool write)
+static void lookup_put_fd_uobject(struct ib_uobject *uobj, bool exclusive)
 {
 	struct file *filp = uobj->object;
 
-	WARN_ON(write);
+	WARN_ON(exclusive);
 	/* This indirectly calls uverbs_close_fd and free the object */
 	fput(filp);
 }
 
-void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool write)
+void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive)
 {
-	lockdep_check(uobj, write);
-	uobj->type->type_class->lookup_put(uobj, write);
+	lockdep_check(uobj, exclusive);
+	uobj->type->type_class->lookup_put(uobj, exclusive);
 	/*
 	 * In order to unlock an object, either decrease its usecnt for
-	 * read access or zero it in case of write access. See
+	 * read access or zero it in case of exclusive access. See
 	 * uverbs_try_lock_object for locking schema information.
 	 */
-	if (!write)
+	if (!exclusive)
 		atomic_dec(&uobj->usecnt);
 	else
 		atomic_set(&uobj->usecnt, 0);
@@ -512,7 +514,7 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool write)
 	 * When the other thread continue - without the RCU, it would
 	 * access freed memory. However, the rcu_read_lock delays the free
 	 * until the rcu_read_lock of the READ operation quits. Since the
-	 * write lock of the object is still taken by the DESTROY flow, the
+	 * exclusive lock of the object is still taken by the DESTROY flow, the
 	 * READ operation will get -EBUSY and it'll just bail out.
 	 */
 	.needs_kfree_rcu = true,
diff --git a/include/rdma/uverbs_types.h b/include/rdma/uverbs_types.h
index a376921..351ea18 100644
--- a/include/rdma/uverbs_types.h
+++ b/include/rdma/uverbs_types.h
@@ -54,17 +54,18 @@ struct uverbs_obj_type_class {
 	 *		 destroyed.
 	 * [lookup]:	 Starts with lookup_get which fetches and locks the
 	 *		 object. After the handler finished using the object, it
-	 *		 needs to call lookup_put to unlock it. The write flag
-	 *		 indicates if the object is locked for exclusive access.
-	 * [remove]:	 Starts with lookup_get with write flag set. This locks
-	 *		 the object for exclusive access. If the handler code
-	 *		 completed successfully, remove_commit is called and
-	 *		 the ib_uobject is removed from the context's uobjects
-	 *		 repository and put. The object itself is destroyed as
-	 *		 well. Once remove succeeds new krefs to the object
-	 *		 cannot be acquired by other threads or userspace and
-	 *		 the hardware driver is removed from the object.
-	 *		 Other krefs on the object may still exist.
+	 *		 needs to call lookup_put to unlock it. The exclusive
+	 *		 flag indicates if the object is locked for exclusive
+	 *		 access.
+	 * [remove]:	 Starts with lookup_get with exclusive flag set. This
+	 *		 locks the object for exclusive access. If the handler
+	 *		 code completed successfully, remove_commit is called
+	 *		 and the ib_uobject is removed from the context's
+	 *		 uobjects repository and put. The object itself is
+	 *		 destroyed as well. Once remove succeeds new krefs to
+	 *		 the object cannot be acquired by other threads or
+	 *		 userspace and the hardware driver is removed from the
+	 *		 object. Other krefs on the object may still exist.
 	 *		 If the handler code failed, lookup_put should be
 	 *		 called. This callback is used when the context
 	 *		 is destroyed as well (process termination,
@@ -77,10 +78,10 @@ struct uverbs_obj_type_class {
 
 	struct ib_uobject *(*lookup_get)(const struct uverbs_obj_type *type,
 					 struct ib_ucontext *ucontext, int id,
-					 bool write);
-	void (*lookup_put)(struct ib_uobject *uobj, bool write);
+					 bool exclusive);
+	void (*lookup_put)(struct ib_uobject *uobj, bool exclusive);
 	/*
-	 * Must be called with the write lock held. If successful uobj is
+	 * Must be called with the exclusive lock held. If successful uobj is
 	 * invalid on return. On failure uobject is left completely
 	 * unchanged
 	 */
@@ -121,8 +122,8 @@ struct uverbs_obj_idr_type {
 
 struct ib_uobject *rdma_lookup_get_uobject(const struct uverbs_obj_type *type,
 					   struct ib_ucontext *ucontext,
-					   int id, bool write);
-void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool write);
+					   int id, bool exclusive);
+void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive);
 struct ib_uobject *rdma_alloc_begin_uobject(const struct uverbs_obj_type *type,
 					    struct ib_ucontext *ucontext);
 void rdma_alloc_abort_uobject(struct ib_uobject *uobj);
-- 
1.8.3.1


* [PATCH for-next 2/6] IB/core: Don't pass the lock state to _rdma_remove_commit_uobject
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 1/6] IB/core: Rename write flag to exclusive in rdma_core Matan Barak
@ 2017-04-18  9:03   ` Matan Barak
       [not found]     ` <1492506222-28999-3-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 3/6] IB/core: Nullify ib_uobject during allocation Matan Barak
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

The only scenario where this function was called with the lock already
held was context cleanup. Thus, instead of passing the lock state to
this function, we call the remove logic directly from the context
cleanup function.
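
As a simplified illustration of the resulting pattern (pthreads and
purely hypothetical names; this is not the uverbs code itself), the
caller that already holds the lock calls the locked helper directly
instead of passing a lock-state flag down:

#include <pthread.h>

static pthread_mutex_t uobjects_lock = PTHREAD_MUTEX_INITIALIZER;

/* Caller must already hold uobjects_lock. */
static void remove_entry_locked(int id)
{
	(void)id;	/* ... unlink the entry from the list ... */
}

/* Handler path: not holding the lock, so take it here. */
static void remove_entry(int id)
{
	pthread_mutex_lock(&uobjects_lock);
	remove_entry_locked(id);
	pthread_mutex_unlock(&uobjects_lock);
}

/* Cleanup path: already iterating under uobjects_lock, so it calls the
 * locked helper directly instead of passing a "bool lock" argument down. */
static void cleanup_all(void)
{
	pthread_mutex_lock(&uobjects_lock);
	for (int id = 0; id < 4; id++)
		remove_entry_locked(id);
	pthread_mutex_unlock(&uobjects_lock);
}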

Fixes: 3832125624b7 ('IB/core: Add support for idr types')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/rdma_core.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index 88d1e59..699a659 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -363,8 +363,7 @@ static void lockdep_check(struct ib_uobject *uobj, bool exclusive)
 }
 
 static int __must_check _rdma_remove_commit_uobject(struct ib_uobject *uobj,
-						    enum rdma_remove_reason why,
-						    bool lock)
+						    enum rdma_remove_reason why)
 {
 	int ret;
 	struct ib_ucontext *ucontext = uobj->context;
@@ -375,11 +374,9 @@ static int __must_check _rdma_remove_commit_uobject(struct ib_uobject *uobj,
 		atomic_set(&uobj->usecnt, 0);
 		uobj->type->type_class->lookup_put(uobj, true);
 	} else {
-		if (lock)
-			mutex_lock(&ucontext->uobjects_lock);
+		mutex_lock(&ucontext->uobjects_lock);
 		list_del(&uobj->list);
-		if (lock)
-			mutex_unlock(&ucontext->uobjects_lock);
+		mutex_unlock(&ucontext->uobjects_lock);
 		/* put the ref we took when we created the object */
 		uverbs_uobject_put(uobj);
 	}
@@ -401,7 +398,7 @@ int __must_check rdma_remove_commit_uobject(struct ib_uobject *uobj)
 		return 0;
 	}
 	lockdep_check(uobj, true);
-	ret = _rdma_remove_commit_uobject(uobj, RDMA_REMOVE_DESTROY, true);
+	ret = _rdma_remove_commit_uobject(uobj, RDMA_REMOVE_DESTROY);
 
 	up_read(&ucontext->cleanup_rwsem);
 	return ret;
@@ -534,8 +531,7 @@ static void _uverbs_close_fd(struct ib_uobject_file *uobj_file)
 		goto unlock;
 
 	ucontext = uobj_file->uobj.context;
-	ret = _rdma_remove_commit_uobject(&uobj_file->uobj, RDMA_REMOVE_CLOSE,
-					  true);
+	ret = _rdma_remove_commit_uobject(&uobj_file->uobj, RDMA_REMOVE_CLOSE);
 	up_read(&ucontext->cleanup_rwsem);
 	if (ret)
 		pr_warn("uverbs: unable to clean up uobject file in uverbs_close_fd.\n");
@@ -583,7 +579,7 @@ void uverbs_cleanup_ucontext(struct ib_ucontext *ucontext, bool device_removed)
 		 */
 		mutex_lock(&ucontext->uobjects_lock);
 		list_for_each_entry_safe(obj, next_obj, &ucontext->uobjects,
-					 list)
+					 list) {
 			if (obj->type->destroy_order == cur_order) {
 				int ret;
 
@@ -592,15 +588,19 @@ void uverbs_cleanup_ucontext(struct ib_ucontext *ucontext, bool device_removed)
 				 * racing with a lookup_get.
 				 */
 				WARN_ON(uverbs_try_lock_object(obj, true));
-				ret = _rdma_remove_commit_uobject(obj, reason,
-								  false);
+				ret = obj->type->type_class->remove_commit(obj,
+									   reason);
+				list_del(&obj->list);
 				if (ret)
 					pr_warn("ib_uverbs: failed to remove uobject id %d order %u\n",
 						obj->id, cur_order);
+				/* put the ref we took when we created the object */
+				uverbs_uobject_put(obj);
 			} else {
 				next_order = min(next_order,
 						 obj->type->destroy_order);
 			}
+		}
 		mutex_unlock(&ucontext->uobjects_lock);
 		cur_order = next_order;
 	}
-- 
1.8.3.1


* [PATCH for-next 3/6] IB/core: Nullify ib_uobject during allocation
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 1/6] IB/core: Rename write flag to exclusive in rdma_core Matan Barak
  2017-04-18  9:03   ` [PATCH for-next 2/6] IB/core: Don't pass the lock state to _rdma_remove_commit_uobject Matan Barak
@ 2017-04-18  9:03   ` Matan Barak
  2017-04-18  9:03   ` [PATCH for-next 4/6] IB/core: A small refactor in destroy WQ handler Matan Barak
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

Previously, we initialized all fields of ib_uobject straight after
allocation, so a kmalloc was sufficient. Since ib_uobject could be
embedded in a type-specific structure, we now zero it at allocation
to guard against programmer errors.
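
For illustration only, a small userspace sketch of why zero-initialization
helps once the base object is embedded in a larger structure (calloc()
stands in for kzalloc(); all names here are hypothetical):

#include <stdlib.h>

struct uobject {
	int id;
	void *object;
};

struct cq_uobject {			/* hypothetical type-specific wrapper */
	struct uobject uobj;		/* embedded base object */
	void *comp_channel;		/* may legitimately stay unset */
};

/* calloc() ~ kzalloc(): the whole allocation, including any wrapper
 * fields the handler never touches, starts out zeroed. */
static struct uobject *alloc_uobj(size_t obj_size)
{
	return calloc(1, obj_size);
}

int main(void)
{
	struct cq_uobject *cq = (struct cq_uobject *)alloc_uobj(sizeof(*cq));

	if (!cq)
		return 1;
	/* cq->comp_channel is reliably NULL even though nothing set it. */
	free(cq);
	return 0;
}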

Fixes: 3832125624b7 ('IB/core: Add support for idr types')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/rdma_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
index 699a659..41c31a2 100644
--- a/drivers/infiniband/core/rdma_core.c
+++ b/drivers/infiniband/core/rdma_core.c
@@ -84,7 +84,7 @@ static int uverbs_try_lock_object(struct ib_uobject *uobj, bool exclusive)
 static struct ib_uobject *alloc_uobj(struct ib_ucontext *context,
 				     const struct uverbs_obj_type *type)
 {
-	struct ib_uobject *uobj = kmalloc(type->obj_size, GFP_KERNEL);
+	struct ib_uobject *uobj = kzalloc(type->obj_size, GFP_KERNEL);
 
 	if (!uobj)
 		return ERR_PTR(-ENOMEM);
-- 
1.8.3.1


* [PATCH for-next 4/6] IB/core: A small refactor in destroy WQ handler
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-04-18  9:03   ` [PATCH for-next 3/6] IB/core: Nullify ib_uobject during allocation Matan Barak
@ 2017-04-18  9:03   ` Matan Barak
       [not found]     ` <1492506222-28999-5-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 5/6] IB/core: Don't use is_async in event files to infer events size Matan Barak
                     ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

Instead of calling uverbs_uobject_put separately in the error flow and
in the success flow, we unify the two paths.
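
A minimal sketch of the resulting handler shape, using hypothetical
names and stub helpers rather than the real uverbs code; both outcomes
now fall through a single reference drop before the return code is
checked:

struct wq_obj {
	int refcount;
	int events_reported;
};

static int do_remove(struct wq_obj *o)		{ (void)o; return 0; }
static void put_ref(struct wq_obj *o)		{ o->refcount--; }
static int copy_resp_to_user(int events)	{ (void)events; return 0; }

static int destroy_wq_handler(struct wq_obj *o)
{
	int ret = do_remove(o);
	int events = o->events_reported;	/* read while the reference is still held */

	put_ref(o);				/* single put shared by both outcomes */
	if (ret)
		return ret;

	return copy_resp_to_user(events);
}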

Fixes: fd3c7904db6e ('IB/core: Change idr objects to use the new schema')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/uverbs_cmd.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index b9024fa..66cb22e 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -2989,18 +2989,12 @@ int ib_uverbs_ex_destroy_wq(struct ib_uverbs_file *file,
 	uverbs_uobject_get(uobj);
 
 	ret = uobj_remove_commit(uobj);
-	if (ret) {
-		uverbs_uobject_put(uobj);
-		return ret;
-	}
-
 	resp.events_reported = obj->uevent.events_reported;
 	uverbs_uobject_put(uobj);
-	ret = ib_copy_to_udata(ucore, &resp, resp.response_length);
 	if (ret)
 		return ret;
 
-	return 0;
+	return ib_copy_to_udata(ucore, &resp, resp.response_length);
 }
 
 int ib_uverbs_ex_modify_wq(struct ib_uverbs_file *file,
-- 
1.8.3.1


* [PATCH for-next 5/6] IB/core: Don't use is_async in event files to infer events size
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
                     ` (3 preceding siblings ...)
  2017-04-18  9:03   ` [PATCH for-next 4/6] IB/core: A small refactor in destroy WQ handler Matan Barak
@ 2017-04-18  9:03   ` Matan Barak
       [not found]     ` <1492506222-28999-6-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-18  9:03   ` [PATCH for-next 6/6] IB/core: Rename uverbs event file structure Matan Barak
  2017-04-20 15:45   ` [PATCH for-next 0/6] IB/core: Small fixes and refactors for IDR locking series Doug Ledford
  6 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

Previously, we inferred the event size in ib_uverbs_event_read from
the is_async flag. Instead, we now pass the event size directly.
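
Sketched in simplified userspace C (hypothetical descriptor types and
function names), the call shape after this change looks roughly like
the following; each thin wrapper passes sizeof() of its own descriptor:

#include <errno.h>
#include <stddef.h>
#include <sys/types.h>

struct async_event_desc {
	unsigned long long element;
	unsigned int event_type;
	unsigned int reserved;
};

struct comp_event_desc {
	unsigned long long cq_handle;
};

/* Generic reader: works purely from the byte count the caller passes in. */
static ssize_t event_read(char *buf, size_t count, size_t eventsz)
{
	if (eventsz > count)
		return -EINVAL;		/* caller's buffer can't hold one event */
	(void)buf;			/* ... copy one eventsz-byte record into buf ... */
	return (ssize_t)eventsz;
}

static ssize_t async_event_read(char *buf, size_t count)
{
	return event_read(buf, count, sizeof(struct async_event_desc));
}

static ssize_t comp_event_read(char *buf, size_t count)
{
	return event_read(buf, count, sizeof(struct comp_event_desc));
}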

Fixes: 1e7710f3f656 ('IB/core: Change completion channel to use the reworked objects schema')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/uverbs_main.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 0b0dab8..4ab0e5d 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -257,10 +257,9 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_file *file,
 				    struct ib_uverbs_file *uverbs_file,
 				    struct file *filp, char __user *buf,
 				    size_t count, loff_t *pos,
-				    bool is_async)
+				    size_t eventsz)
 {
 	struct ib_uverbs_event *event;
-	int eventsz;
 	int ret = 0;
 
 	spin_lock_irq(&file->lock);
@@ -290,11 +289,6 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_file *file,
 
 	event = list_entry(file->event_list.next, struct ib_uverbs_event, list);
 
-	if (is_async)
-		eventsz = sizeof (struct ib_uverbs_async_event_desc);
-	else
-		eventsz = sizeof (struct ib_uverbs_comp_event_desc);
-
 	if (eventsz > count) {
 		ret   = -EINVAL;
 		event = NULL;
@@ -326,7 +320,8 @@ static ssize_t ib_uverbs_async_event_read(struct file *filp, char __user *buf,
 	struct ib_uverbs_async_event_file *file = filp->private_data;
 
 	return ib_uverbs_event_read(&file->ev_file, file->uverbs_file, filp,
-				    buf, count, pos, true);
+				    buf, count, pos,
+				    sizeof(struct ib_uverbs_async_event_desc));
 }
 
 static ssize_t ib_uverbs_comp_event_read(struct file *filp, char __user *buf,
@@ -337,7 +332,8 @@ static ssize_t ib_uverbs_comp_event_read(struct file *filp, char __user *buf,
 
 	return ib_uverbs_event_read(&comp_ev_file->ev_file,
 				    comp_ev_file->uobj_file.ufile, filp,
-				    buf, count, pos, false);
+				    buf, count, pos,
+				    sizeof(struct ib_uverbs_comp_event_desc));
 }
 
 static unsigned int ib_uverbs_event_poll(struct ib_uverbs_event_file *file,
-- 
1.8.3.1


* [PATCH for-next 6/6] IB/core: Rename uverbs event file structure
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
                     ` (4 preceding siblings ...)
  2017-04-18  9:03   ` [PATCH for-next 5/6] IB/core: Don't use is_async in event files to infer events size Matan Barak
@ 2017-04-18  9:03   ` Matan Barak
       [not found]     ` <1492506222-28999-7-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
  2017-04-20 15:45   ` [PATCH for-next 0/6] IB/core: Small fixes and refactors for IDR locking series Doug Ledford
  6 siblings, 1 reply; 14+ messages in thread
From: Matan Barak @ 2017-04-18  9:03 UTC (permalink / raw)
  To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Sean Hefty, Liran Liss,
	Majd Dibbiny, Yishai Hadas, Ira Weiny, Christoph Lameter,
	Matan Barak

Previously, ib_uverbs_event_file was suffixed with _file as it contained
the actual file information. Since it is now only used as a base struct
for ib_uverbs_async_event_file and ib_uverbs_completion_event_file,
we rename it to ib_uverbs_event_queue, which better represents its
logical role.
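
A rough sketch of the resulting layout (simplified fields, hypothetical
names, and container_of() written out in its usual userspace form): the
event queue is embedded as a base member in both real file structures,
and the enclosing file is recovered from the queue pointer when needed:

#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct event_queue {			/* roughly: the renamed ib_uverbs_event_queue */
	int is_closed;
};

struct async_event_file {
	struct event_queue ev_queue;	/* embedded base */
	void *uverbs_file;
};

struct completion_event_file {
	struct event_queue ev_queue;	/* embedded base */
	void *uobj_file;
};

/* Given a queue pointer, recover the enclosing completion-event file. */
static struct completion_event_file *
comp_file_from_queue(struct event_queue *q)
{
	return container_of(q, struct completion_event_file, ev_queue);
}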

Fixes: 1e7710f3f656 ('IB/core: Change completion channel to use the reworked objects schema')
Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
---
 drivers/infiniband/core/uverbs.h           |  21 ++---
 drivers/infiniband/core/uverbs_cmd.c       |   8 +-
 drivers/infiniband/core/uverbs_main.c      | 132 ++++++++++++++---------------
 drivers/infiniband/core/uverbs_std_types.c |  20 ++---
 4 files changed, 91 insertions(+), 90 deletions(-)

diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
index 826f827..a3230b6 100644
--- a/drivers/infiniband/core/uverbs.h
+++ b/drivers/infiniband/core/uverbs.h
@@ -76,12 +76,13 @@
  * an asynchronous event queue file is created and released when the
  * event file is closed.
  *
- * struct ib_uverbs_event_file: One reference is held by the VFS and
- * released when the file is closed.  For asynchronous event files,
- * another reference is held by the corresponding main context file
- * and released when that file is closed.  For completion event files,
- * a reference is taken when a CQ is created that uses the file, and
- * released when the CQ is destroyed.
+ * struct ib_uverbs_event_queue: Base structure for
+ * struct ib_uverbs_async_event_file and struct ib_uverbs_completion_event_file.
+ * One reference is held by the VFS and released when the file is closed.
+ * For asynchronous event files, another reference is held by the corresponding
+ * main context file and released when that file is closed.  For completion
+ * event files, a reference is taken when a CQ is created that uses the file,
+ * and released when the CQ is destroyed.
  */
 
 struct ib_uverbs_device {
@@ -101,7 +102,7 @@ struct ib_uverbs_device {
 	struct list_head			uverbs_events_file_list;
 };
 
-struct ib_uverbs_event_file {
+struct ib_uverbs_event_queue {
 	spinlock_t				lock;
 	int					is_closed;
 	wait_queue_head_t			poll_wait;
@@ -110,7 +111,7 @@ struct ib_uverbs_event_file {
 };
 
 struct ib_uverbs_async_event_file {
-	struct ib_uverbs_event_file		ev_file;
+	struct ib_uverbs_event_queue		ev_queue;
 	struct ib_uverbs_file		       *uverbs_file;
 	struct kref				ref;
 	struct list_head			list;
@@ -118,7 +119,7 @@ struct ib_uverbs_async_event_file {
 
 struct ib_uverbs_completion_event_file {
 	struct ib_uobject_file			uobj_file;
-	struct ib_uverbs_event_file		ev_file;
+	struct ib_uverbs_event_queue		ev_queue;
 };
 
 struct ib_uverbs_file {
@@ -191,7 +192,7 @@ struct ib_ucq_object {
 };
 
 extern const struct file_operations uverbs_event_fops;
-void ib_uverbs_init_event_file(struct ib_uverbs_event_file *ev_file);
+void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue);
 struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file,
 					      struct ib_device *ib_dev);
 void ib_uverbs_free_async_event_file(struct ib_uverbs_file *uverbs_file);
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 66cb22e..e2fee04 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -943,7 +943,7 @@ ssize_t ib_uverbs_create_comp_channel(struct ib_uverbs_file *file,
 
 	ev_file = container_of(uobj, struct ib_uverbs_completion_event_file,
 			       uobj_file.uobj);
-	ib_uverbs_init_event_file(&ev_file->ev_file);
+	ib_uverbs_init_event_queue(&ev_file->ev_queue);
 
 	if (copy_to_user((void __user *) (unsigned long) cmd.response,
 			 &resp, sizeof resp)) {
@@ -1015,7 +1015,7 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file,
 	cq->uobject       = &obj->uobject;
 	cq->comp_handler  = ib_uverbs_comp_handler;
 	cq->event_handler = ib_uverbs_cq_event_handler;
-	cq->cq_context    = &ev_file->ev_file;
+	cq->cq_context    = &ev_file->ev_queue;
 	atomic_set(&cq->usecnt, 0);
 
 	obj->uobject.object = cq;
@@ -1296,7 +1296,7 @@ ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file,
 	struct ib_uobject		*uobj;
 	struct ib_cq               	*cq;
 	struct ib_ucq_object        	*obj;
-	struct ib_uverbs_event_file	*ev_file;
+	struct ib_uverbs_event_queue	*ev_queue;
 	int                        	 ret = -EINVAL;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
@@ -1313,7 +1313,7 @@ ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file,
 	 */
 	uverbs_uobject_get(uobj);
 	cq      = uobj->object;
-	ev_file = cq->cq_context;
+	ev_queue = cq->cq_context;
 	obj     = container_of(cq->uobject, struct ib_ucq_object, uobject);
 
 	memset(&resp, 0, sizeof(resp));
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index 4ab0e5d..3a9883d 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -171,22 +171,22 @@ void ib_uverbs_release_ucq(struct ib_uverbs_file *file,
 	struct ib_uverbs_event *evt, *tmp;
 
 	if (ev_file) {
-		spin_lock_irq(&ev_file->ev_file.lock);
+		spin_lock_irq(&ev_file->ev_queue.lock);
 		list_for_each_entry_safe(evt, tmp, &uobj->comp_list, obj_list) {
 			list_del(&evt->list);
 			kfree(evt);
 		}
-		spin_unlock_irq(&ev_file->ev_file.lock);
+		spin_unlock_irq(&ev_file->ev_queue.lock);
 
 		uverbs_uobject_put(&ev_file->uobj_file.uobj);
 	}
 
-	spin_lock_irq(&file->async_file->ev_file.lock);
+	spin_lock_irq(&file->async_file->ev_queue.lock);
 	list_for_each_entry_safe(evt, tmp, &uobj->async_list, obj_list) {
 		list_del(&evt->list);
 		kfree(evt);
 	}
-	spin_unlock_irq(&file->async_file->ev_file.lock);
+	spin_unlock_irq(&file->async_file->ev_queue.lock);
 }
 
 void ib_uverbs_release_uevent(struct ib_uverbs_file *file,
@@ -194,12 +194,12 @@ void ib_uverbs_release_uevent(struct ib_uverbs_file *file,
 {
 	struct ib_uverbs_event *evt, *tmp;
 
-	spin_lock_irq(&file->async_file->ev_file.lock);
+	spin_lock_irq(&file->async_file->ev_queue.lock);
 	list_for_each_entry_safe(evt, tmp, &uobj->event_list, obj_list) {
 		list_del(&evt->list);
 		kfree(evt);
 	}
-	spin_unlock_irq(&file->async_file->ev_file.lock);
+	spin_unlock_irq(&file->async_file->ev_queue.lock);
 }
 
 void ib_uverbs_detach_umcast(struct ib_qp *qp,
@@ -253,7 +253,7 @@ void ib_uverbs_release_file(struct kref *ref)
 	kfree(file);
 }
 
-static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_file *file,
+static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_queue *ev_queue,
 				    struct ib_uverbs_file *uverbs_file,
 				    struct file *filp, char __user *buf,
 				    size_t count, loff_t *pos,
@@ -262,16 +262,16 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_file *file,
 	struct ib_uverbs_event *event;
 	int ret = 0;
 
-	spin_lock_irq(&file->lock);
+	spin_lock_irq(&ev_queue->lock);
 
-	while (list_empty(&file->event_list)) {
-		spin_unlock_irq(&file->lock);
+	while (list_empty(&ev_queue->event_list)) {
+		spin_unlock_irq(&ev_queue->lock);
 
 		if (filp->f_flags & O_NONBLOCK)
 			return -EAGAIN;
 
-		if (wait_event_interruptible(file->poll_wait,
-					     (!list_empty(&file->event_list) ||
+		if (wait_event_interruptible(ev_queue->poll_wait,
+					     (!list_empty(&ev_queue->event_list) ||
 			/* The barriers built into wait_event_interruptible()
 			 * and wake_up() guarentee this will see the null set
 			 * without using RCU
@@ -280,27 +280,27 @@ static ssize_t ib_uverbs_event_read(struct ib_uverbs_event_file *file,
 			return -ERESTARTSYS;
 
 		/* If device was disassociated and no event exists set an error */
-		if (list_empty(&file->event_list) &&
+		if (list_empty(&ev_queue->event_list) &&
 		    !uverbs_file->device->ib_dev)
 			return -EIO;
 
-		spin_lock_irq(&file->lock);
+		spin_lock_irq(&ev_queue->lock);
 	}
 
-	event = list_entry(file->event_list.next, struct ib_uverbs_event, list);
+	event = list_entry(ev_queue->event_list.next, struct ib_uverbs_event, list);
 
 	if (eventsz > count) {
 		ret   = -EINVAL;
 		event = NULL;
 	} else {
-		list_del(file->event_list.next);
+		list_del(ev_queue->event_list.next);
 		if (event->counter) {
 			++(*event->counter);
 			list_del(&event->obj_list);
 		}
 	}
 
-	spin_unlock_irq(&file->lock);
+	spin_unlock_irq(&ev_queue->lock);
 
 	if (event) {
 		if (copy_to_user(buf, event, eventsz))
@@ -319,7 +319,7 @@ static ssize_t ib_uverbs_async_event_read(struct file *filp, char __user *buf,
 {
 	struct ib_uverbs_async_event_file *file = filp->private_data;
 
-	return ib_uverbs_event_read(&file->ev_file, file->uverbs_file, filp,
+	return ib_uverbs_event_read(&file->ev_queue, file->uverbs_file, filp,
 				    buf, count, pos,
 				    sizeof(struct ib_uverbs_async_event_desc));
 }
@@ -330,24 +330,24 @@ static ssize_t ib_uverbs_comp_event_read(struct file *filp, char __user *buf,
 	struct ib_uverbs_completion_event_file *comp_ev_file =
 		filp->private_data;
 
-	return ib_uverbs_event_read(&comp_ev_file->ev_file,
+	return ib_uverbs_event_read(&comp_ev_file->ev_queue,
 				    comp_ev_file->uobj_file.ufile, filp,
 				    buf, count, pos,
 				    sizeof(struct ib_uverbs_comp_event_desc));
 }
 
-static unsigned int ib_uverbs_event_poll(struct ib_uverbs_event_file *file,
+static unsigned int ib_uverbs_event_poll(struct ib_uverbs_event_queue *ev_queue,
 					 struct file *filp,
 					 struct poll_table_struct *wait)
 {
 	unsigned int pollflags = 0;
 
-	poll_wait(filp, &file->poll_wait, wait);
+	poll_wait(filp, &ev_queue->poll_wait, wait);
 
-	spin_lock_irq(&file->lock);
-	if (!list_empty(&file->event_list))
+	spin_lock_irq(&ev_queue->lock);
+	if (!list_empty(&ev_queue->event_list))
 		pollflags = POLLIN | POLLRDNORM;
-	spin_unlock_irq(&file->lock);
+	spin_unlock_irq(&ev_queue->lock);
 
 	return pollflags;
 }
@@ -364,14 +364,14 @@ static unsigned int ib_uverbs_comp_event_poll(struct file *filp,
 	struct ib_uverbs_completion_event_file *comp_ev_file =
 		filp->private_data;
 
-	return ib_uverbs_event_poll(&comp_ev_file->ev_file, filp, wait);
+	return ib_uverbs_event_poll(&comp_ev_file->ev_queue, filp, wait);
 }
 
 static int ib_uverbs_async_event_fasync(int fd, struct file *filp, int on)
 {
-	struct ib_uverbs_event_file *file = filp->private_data;
+	struct ib_uverbs_event_queue *ev_queue = filp->private_data;
 
-	return fasync_helper(fd, filp, on, &file->async_queue);
+	return fasync_helper(fd, filp, on, &ev_queue->async_queue);
 }
 
 static int ib_uverbs_comp_event_fasync(int fd, struct file *filp, int on)
@@ -379,7 +379,7 @@ static int ib_uverbs_comp_event_fasync(int fd, struct file *filp, int on)
 	struct ib_uverbs_completion_event_file *comp_ev_file =
 		filp->private_data;
 
-	return fasync_helper(fd, filp, on, &comp_ev_file->ev_file.async_queue);
+	return fasync_helper(fd, filp, on, &comp_ev_file->ev_queue.async_queue);
 }
 
 static int ib_uverbs_async_event_close(struct inode *inode, struct file *filp)
@@ -390,15 +390,15 @@ static int ib_uverbs_async_event_close(struct inode *inode, struct file *filp)
 	int closed_already = 0;
 
 	mutex_lock(&uverbs_file->device->lists_mutex);
-	spin_lock_irq(&file->ev_file.lock);
-	closed_already = file->ev_file.is_closed;
-	file->ev_file.is_closed = 1;
-	list_for_each_entry_safe(entry, tmp, &file->ev_file.event_list, list) {
+	spin_lock_irq(&file->ev_queue.lock);
+	closed_already = file->ev_queue.is_closed;
+	file->ev_queue.is_closed = 1;
+	list_for_each_entry_safe(entry, tmp, &file->ev_queue.event_list, list) {
 		if (entry->counter)
 			list_del(&entry->obj_list);
 		kfree(entry);
 	}
-	spin_unlock_irq(&file->ev_file.lock);
+	spin_unlock_irq(&file->ev_queue.lock);
 	if (!closed_already) {
 		list_del(&file->list);
 		ib_unregister_event_handler(&uverbs_file->event_handler);
@@ -416,13 +416,13 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
 	struct ib_uverbs_completion_event_file *file = filp->private_data;
 	struct ib_uverbs_event *entry, *tmp;
 
-	spin_lock_irq(&file->ev_file.lock);
-	list_for_each_entry_safe(entry, tmp, &file->ev_file.event_list, list) {
+	spin_lock_irq(&file->ev_queue.lock);
+	list_for_each_entry_safe(entry, tmp, &file->ev_queue.event_list, list) {
 		if (entry->counter)
 			list_del(&entry->obj_list);
 		kfree(entry);
 	}
-	spin_unlock_irq(&file->ev_file.lock);
+	spin_unlock_irq(&file->ev_queue.lock);
 
 	uverbs_close_fd(filp);
 
@@ -449,23 +449,23 @@ static int ib_uverbs_comp_event_close(struct inode *inode, struct file *filp)
 
 void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
 {
-	struct ib_uverbs_event_file    *file = cq_context;
+	struct ib_uverbs_event_queue   *ev_queue = cq_context;
 	struct ib_ucq_object	       *uobj;
 	struct ib_uverbs_event	       *entry;
 	unsigned long			flags;
 
-	if (!file)
+	if (!ev_queue)
 		return;
 
-	spin_lock_irqsave(&file->lock, flags);
-	if (file->is_closed) {
-		spin_unlock_irqrestore(&file->lock, flags);
+	spin_lock_irqsave(&ev_queue->lock, flags);
+	if (ev_queue->is_closed) {
+		spin_unlock_irqrestore(&ev_queue->lock, flags);
 		return;
 	}
 
 	entry = kmalloc(sizeof *entry, GFP_ATOMIC);
 	if (!entry) {
-		spin_unlock_irqrestore(&file->lock, flags);
+		spin_unlock_irqrestore(&ev_queue->lock, flags);
 		return;
 	}
 
@@ -474,12 +474,12 @@ void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
 	entry->desc.comp.cq_handle = cq->uobject->user_handle;
 	entry->counter		   = &uobj->comp_events_reported;
 
-	list_add_tail(&entry->list, &file->event_list);
+	list_add_tail(&entry->list, &ev_queue->event_list);
 	list_add_tail(&entry->obj_list, &uobj->comp_list);
-	spin_unlock_irqrestore(&file->lock, flags);
+	spin_unlock_irqrestore(&ev_queue->lock, flags);
 
-	wake_up_interruptible(&file->poll_wait);
-	kill_fasync(&file->async_queue, SIGIO, POLL_IN);
+	wake_up_interruptible(&ev_queue->poll_wait);
+	kill_fasync(&ev_queue->async_queue, SIGIO, POLL_IN);
 }
 
 static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
@@ -490,15 +490,15 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
 	struct ib_uverbs_event *entry;
 	unsigned long flags;
 
-	spin_lock_irqsave(&file->async_file->ev_file.lock, flags);
-	if (file->async_file->ev_file.is_closed) {
-		spin_unlock_irqrestore(&file->async_file->ev_file.lock, flags);
+	spin_lock_irqsave(&file->async_file->ev_queue.lock, flags);
+	if (file->async_file->ev_queue.is_closed) {
+		spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
 		return;
 	}
 
 	entry = kmalloc(sizeof *entry, GFP_ATOMIC);
 	if (!entry) {
-		spin_unlock_irqrestore(&file->async_file->ev_file.lock, flags);
+		spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
 		return;
 	}
 
@@ -507,13 +507,13 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file,
 	entry->desc.async.reserved   = 0;
 	entry->counter               = counter;
 
-	list_add_tail(&entry->list, &file->async_file->ev_file.event_list);
+	list_add_tail(&entry->list, &file->async_file->ev_queue.event_list);
 	if (obj_list)
 		list_add_tail(&entry->obj_list, obj_list);
-	spin_unlock_irqrestore(&file->async_file->ev_file.lock, flags);
+	spin_unlock_irqrestore(&file->async_file->ev_queue.lock, flags);
 
-	wake_up_interruptible(&file->async_file->ev_file.poll_wait);
-	kill_fasync(&file->async_file->ev_file.async_queue, SIGIO, POLL_IN);
+	wake_up_interruptible(&file->async_file->ev_queue.poll_wait);
+	kill_fasync(&file->async_file->ev_queue.async_queue, SIGIO, POLL_IN);
 }
 
 void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr)
@@ -580,13 +580,13 @@ void ib_uverbs_free_async_event_file(struct ib_uverbs_file *file)
 	file->async_file = NULL;
 }
 
-void ib_uverbs_init_event_file(struct ib_uverbs_event_file *ev_file)
+void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue)
 {
-	spin_lock_init(&ev_file->lock);
-	INIT_LIST_HEAD(&ev_file->event_list);
-	init_waitqueue_head(&ev_file->poll_wait);
-	ev_file->is_closed   = 0;
-	ev_file->async_queue = NULL;
+	spin_lock_init(&ev_queue->lock);
+	INIT_LIST_HEAD(&ev_queue->event_list);
+	init_waitqueue_head(&ev_queue->poll_wait);
+	ev_queue->is_closed   = 0;
+	ev_queue->async_queue = NULL;
 }
 
 struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file,
@@ -600,7 +600,7 @@ struct file *ib_uverbs_alloc_async_event_file(struct ib_uverbs_file *uverbs_file
 	if (!ev_file)
 		return ERR_PTR(-ENOMEM);
 
-	ib_uverbs_init_event_file(&ev_file->ev_file);
+	ib_uverbs_init_event_queue(&ev_file->ev_queue);
 	ev_file->uverbs_file = uverbs_file;
 	kref_get(&ev_file->uverbs_file->ref);
 	kref_init(&ev_file->ref);
@@ -1186,9 +1186,9 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
 					      uverbs_events_file_list,
 					      struct ib_uverbs_async_event_file,
 					      list);
-		spin_lock_irq(&event_file->ev_file.lock);
-		event_file->ev_file.is_closed = 1;
-		spin_unlock_irq(&event_file->ev_file.lock);
+		spin_lock_irq(&event_file->ev_queue.lock);
+		event_file->ev_queue.is_closed = 1;
+		spin_unlock_irq(&event_file->ev_queue.lock);
 
 		list_del(&event_file->list);
 		ib_unregister_event_handler(
@@ -1196,8 +1196,8 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
 		event_file->uverbs_file->event_handler.device =
 			NULL;
 
-		wake_up_interruptible(&event_file->ev_file.poll_wait);
-		kill_fasync(&event_file->ev_file.async_queue, SIGIO, POLL_IN);
+		wake_up_interruptible(&event_file->ev_queue.poll_wait);
+		kill_fasync(&event_file->ev_queue.async_queue, SIGIO, POLL_IN);
 	}
 	mutex_unlock(&uverbs_dev->lists_mutex);
 }
diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c
index 7f26af5..e3338b1 100644
--- a/drivers/infiniband/core/uverbs_std_types.c
+++ b/drivers/infiniband/core/uverbs_std_types.c
@@ -138,17 +138,17 @@ int uverbs_free_cq(struct ib_uobject *uobject,
 		   enum rdma_remove_reason why)
 {
 	struct ib_cq *cq = uobject->object;
-	struct ib_uverbs_event_file *ev_file = cq->cq_context;
+	struct ib_uverbs_event_queue *ev_queue = cq->cq_context;
 	struct ib_ucq_object *ucq =
 		container_of(uobject, struct ib_ucq_object, uobject);
 	int ret;
 
 	ret = ib_destroy_cq(cq);
 	if (!ret || why != RDMA_REMOVE_DESTROY)
-		ib_uverbs_release_ucq(uobject->context->ufile, ev_file ?
-				      container_of(ev_file,
+		ib_uverbs_release_ucq(uobject->context->ufile, ev_queue ?
+				      container_of(ev_queue,
 						   struct ib_uverbs_completion_event_file,
-						   ev_file) : NULL,
+						   ev_queue) : NULL,
 				      ucq);
 	return ret;
 }
@@ -196,15 +196,15 @@ int uverbs_hot_unplug_completion_event_file(struct ib_uobject_file *uobj_file,
 	struct ib_uverbs_completion_event_file *comp_event_file =
 		container_of(uobj_file, struct ib_uverbs_completion_event_file,
 			     uobj_file);
-	struct ib_uverbs_event_file *event_file = &comp_event_file->ev_file;
+	struct ib_uverbs_event_queue *event_queue = &comp_event_file->ev_queue;
 
-	spin_lock_irq(&event_file->lock);
-	event_file->is_closed = 1;
-	spin_unlock_irq(&event_file->lock);
+	spin_lock_irq(&event_queue->lock);
+	event_queue->is_closed = 1;
+	spin_unlock_irq(&event_queue->lock);
 
 	if (why == RDMA_REMOVE_DRIVER_REMOVE) {
-		wake_up_interruptible(&event_file->poll_wait);
-		kill_fasync(&event_file->async_queue, SIGIO, POLL_IN);
+		wake_up_interruptible(&event_queue->poll_wait);
+		kill_fasync(&event_queue->async_queue, SIGIO, POLL_IN);
 	}
 	return 0;
 };
-- 
1.8.3.1


* RE: [PATCH for-next 1/6] IB/core: Rename write flag to exclusive in rdma_core
       [not found]     ` <1492506222-28999-2-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18 15:12       ` Hefty, Sean
  0 siblings, 0 replies; 14+ messages in thread
From: Hefty, Sean @ 2017-04-18 15:12 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Weiny, Ira, Christoph Lameter

> We rename the "write" flags to "exclusive", as it's used for both
> WRITE and DESTROY actions.
> 
> Fixes: 3832125624b7 ('IB/core: Add support for idr types')
> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

* RE: [PATCH for-next 2/6] IB/core: Don't pass the lock state to _rdma_remove_commit_uobject
       [not found]     ` <1492506222-28999-3-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18 15:13       ` Hefty, Sean
  0 siblings, 0 replies; 14+ messages in thread
From: Hefty, Sean @ 2017-04-18 15:13 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Weiny, Ira, Christoph Lameter

> The only scenario where this function was called while the lock is
> already taken is in the context cleanup scenario. Thus, in order not
> to pass the lock state to this function, we just call the remove logic
> straight from the cleanup context function.
> 
> Fixes: 3832125624b7 ('IB/core: Add support for idr types')
> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

* RE: [PATCH for-next 4/6] IB/core: A small refactor in destroy WQ handler
       [not found]     ` <1492506222-28999-5-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18 15:14       ` Hefty, Sean
  0 siblings, 0 replies; 14+ messages in thread
From: Hefty, Sean @ 2017-04-18 15:14 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Weiny, Ira, Christoph Lameter

> Instead of having uverbs_uobject_put both in the error flow and the
> good flow, we unite them.
> 
> Fixes: fd3c7904db6e ('IB/core: Change idr objects to use the new
> schema')
> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>

Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

* RE: [PATCH for-next 5/6] IB/core: Don't use is_async in event files to infer events size
       [not found]     ` <1492506222-28999-6-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18 15:14       ` Hefty, Sean
  0 siblings, 0 replies; 14+ messages in thread
From: Hefty, Sean @ 2017-04-18 15:14 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Weiny, Ira, Christoph Lameter

> Previously, we inferred the events size in ib_uverbs_event_read by
> using the is_async flag. Instead of that, we pass the event size
> directly.
> 
> Fixes: 1e7710f3f656 ('IB/core: Change completion channel to use the
> reworked objects schema')
> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---

Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

* RE: [PATCH for-next 6/6] IB/core: Rename uverbs event file structure
       [not found]     ` <1492506222-28999-7-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
@ 2017-04-18 15:22       ` Hefty, Sean
       [not found]         ` <1828884A29C6694DAF28B7E6B8A82373AB11476E-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Hefty, Sean @ 2017-04-18 15:22 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Doug Ledford, Jason Gunthorpe, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Weiny, Ira, Christoph Lameter

> Previously, ib_uverbs_event_file was suffixed by _file as it contained
> the actual file information. Since it's now only used as base struct
> for ib_uverbs_async_event_file and ib_uverbs_completion_event_file,
> we change its name to ib_uverbs_event_queue. This represents its
> logical role better.
> 
> Fixes: 1e7710f3f656 ('IB/core: Change completion channel to use the
> reworked objects schema')
> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---

Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>

Thanks - all changes look good.

* Re: [PATCH for-next 6/6] IB/core: Rename uverbs event file structure
       [not found]         ` <1828884A29C6694DAF28B7E6B8A82373AB11476E-P5GAC/sN6hkd3b2yrw5b5LfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2017-04-19  7:28           ` Matan Barak
  0 siblings, 0 replies; 14+ messages in thread
From: Matan Barak @ 2017-04-19  7:28 UTC (permalink / raw)
  To: Hefty, Sean
  Cc: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA, Doug Ledford,
	Jason Gunthorpe, Liran Liss, Majd Dibbiny, Yishai Hadas, Weiny,
	Ira, Christoph Lameter

On Tue, Apr 18, 2017 at 6:22 PM, Hefty, Sean <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:
>> Previously, ib_uverbs_event_file was suffixed by _file as it contained
>> the actual file information. Since it's now only used as base struct
>> for ib_uverbs_async_event_file and ib_uverbs_completion_event_file,
>> we change its name to ib_uverbs_event_queue. This represents its
>> logical role better.
>>
>> Fixes: 1e7710f3f656 ('IB/core: Change completion channel to use the
>> reworked objects schema')
>> Signed-off-by: Matan Barak <matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
>> ---
>
> Reviewed-by: Sean Hefty <sean.hefty-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org>
>
> Thanks - all changes look good.

Thanks for the review Sean.


* Re: [PATCH for-next 0/6] IB/core: Small fixes and refactors for IDR locking series
       [not found] ` <1492506222-28999-1-git-send-email-matanb-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
                     ` (5 preceding siblings ...)
  2017-04-18  9:03   ` [PATCH for-next 6/6] IB/core: Rename uverbs event file structure Matan Barak
@ 2017-04-20 15:45   ` Doug Ledford
  6 siblings, 0 replies; 14+ messages in thread
From: Doug Ledford @ 2017-04-20 15:45 UTC (permalink / raw)
  To: Matan Barak, linux-rdma-u79uwXL29TY76Z2rM5mHXA
  Cc: Jason Gunthorpe, Sean Hefty, Liran Liss, Majd Dibbiny,
	Yishai Hadas, Ira Weiny, Christoph Lameter

On Tue, 2017-04-18 at 12:03 +0300, Matan Barak wrote:
> Hi Doug,
> 
> This series comes after the comments Sean and Jason sent in the
> linux-rdma
> mailing list for the "Change IDR usage and locking in uverbs" series.
> It's focused on small refactors, beautification and fixes
> for that series. It doesn't aim to change anything logically.
> 
> Thanks,
> Matan
> 
> Matan Barak (6):
>   IB/core: Rename write flag to exclusive in rdma_core
>   IB/core: Don't pass the lock state to _rdma_remove_commit_uobject
>   IB/core: Nullify ib_uobject during allocation
>   IB/core: A small refactor in destroy WQ handler
>   IB/core: Don't use is_async in event files to infer events size
>   IB/core: Rename uverbs event file structure
> 
>  drivers/infiniband/core/rdma_core.c        |  86 ++++++++---------
>  drivers/infiniband/core/uverbs.h           |  21 +++--
>  drivers/infiniband/core/uverbs_cmd.c       |  16 +---
>  drivers/infiniband/core/uverbs_main.c      | 146 ++++++++++++++---------------
>  drivers/infiniband/core/uverbs_std_types.c |  20 ++--
>  include/rdma/uverbs_types.h                |  33 +++----
>  6 files changed, 158 insertions(+), 164 deletions(-)

Hi Matan,

Thanks, series applied.

-- 
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
    GPG KeyID: B826A3330E572FDD
   
Key fingerprint = AE6B 1BDA 122B 23B4 265B  1274 B826 A333 0E57 2FDD

