linux-kernel.vger.kernel.org archive mirror
* [PATCHv6 0/3] rdmacg: IB/core: rdma controller support
@ 2016-02-20 11:00 Parav Pandit
  2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
                   ` (3 more replies)
  0 siblings, 4 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-20 11:00 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-rdma, tj, lizefan,
	hannes, dledford, liranl, sean.hefty, jgunthorpe, haggaie
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module, pandit.parav

Overview:
Currently, user space applications can easily exhaust all of the rdma
device specific resources such as AH, CQ, QP, MR etc. Because of this, other
applications in other cgroups or kernel space ULPs may not even get a chance
to allocate any rdma resources, resulting in service unavailability.

RDMA cgroup addresses this issue by allowing resource accounting and
limit enforcement on a per cgroup, per rdma device basis.

Resources are not defined by the RDMA cgroup; they are defined
by the RDMA/IB stack. This allows the rdma cgroup to remain constant while
the RDMA/IB stack evolves without requiring rdma cgroup updates. A new
resource can easily be added by the RDMA/IB stack without touching the
rdma cgroup.

The RDMA uverbs layer will enforce limits on well defined RDMA verb
resources without any HCA vendor device driver involvement.

The RDMA uverbs layer will not do accounting of hw vendor specific resources.
Instead, the rdma cgroup provides a set of APIs through which vendor specific
drivers can do resource accounting by making use of the rdma cgroup.

Resource limit enforcement is hierarchical.

When a process is migrated while holding active RDMA resources, the rdma
cgroup continues to uncharge the original cgroup for those already allocated
resources, while new resources are charged to the current process's cgroup.
In other words, after migration with active resources, new resources are
charged to the new cgroup and old resources are correctly uncharged from
the old cgroup.
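
For illustration, once the controller is enabled, per-device limits would be
set and queried through per-cgroup control files (the device name, cgroup
path, and resource names below are illustrative, not taken from this patch
set):

```shell
# Cap QPs at 10 and leave MRs unlimited for an illustrative device "mlx4_0"
echo "mlx4_0 qp=10 mr=max" > /sys/fs/cgroup/rdma/app_group/rdma.max

# Query the configured limits and the current usage
cat /sys/fs/cgroup/rdma/app_group/rdma.max
cat /sys/fs/cgroup/rdma/app_group/rdma.current
```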

Changes from v5:
 * (To address comments from Tejun)
   1. Removed the two types of resource pool; made it a single type (as
      Tejun described in a past comment)
   2. Removed match tokens and have array definition like "qp", "mr",
      "cq" etc.
   3. Wrote small parser and avoided match_token API as that won't work
      due to different array definitions
   4. Removed the one-off remove API to unconfigure a cgroup; instead all
      resources should be set to max.
   5. Removed the resource pool type (user/default); instead a pool tracks
      max_num_cnt, and when ref_cnt drops to zero and
      max_num_cnt equals the total resource count, the pool is freed.
   6. Resource definition ownership now lies solely with the IB stack in a
      single header file, no longer in each low level driver.
      This way it goes through the eyes of the IB maintainer and other
      reviewers, and it keeps the flexibility of not forcing a kernel
      upgrade for a few enum additions for a new resource type.
   7. Wherever possible the pool lock is pushed out, except at hierarchical
      charging/uncharging points, where it is not possible to do so because
      the iterative process involves blocking allocations of rpools. Moving
      more levels up to release the locks doesn't make sense either.
      This is anyway a slow path where the rpool is not yet allocated;
      except for the typical first resource allocation, it is a less
      travelled path.
   8. Avoided %d manipulation (following the removal of match_token) and
      replaced it with seq_putc() and friends.
 * Other minor cleanups.
 * Fixed rdmacg_register_device to return an error in case the IB stack
   tries to register more than 64 resources.
 * Fixed not allowing negative value on resource setting.
 * Fixed cleaning up resource pools during device removal.
 * Simplified and renamed the table length field to use the ARRAY_SIZE macro.
 * Updated documentation to reflect single resource pool and shorter
   file names.

Changes from v4:
 * Fixed compilation errors for lockdep_assert_held reported by kbuild
   test robot
 * Fixed compilation warning reported by coccinelle for extra
   semicolon.
 * Fixed a compilation error caused by inclusion of linux/parser.h, which
   cannot be included from any header file, as that triggers a multiple
   inclusion error. parser.h is included only in C files which intend to
   use it.
 * Removed unused header file inclusion in cgroup_rdma.c

Changes from v3:
 * (To address comments from Tejun)
   1. Renamed cg_resource to rdmacg_resource
   2. Merged dealloc_cg_rpool and _dealloc_cg_rpool to single function
   3. Renamed _find_cg_rpool to find_cg_rpool_locked()
   5. Removed RDMACG_MAX_RESOURCE_INDEX limitation
   6. Fixed few alignments.
   7. Improved description for RDMA cgroup configuration menu
   8. Renamed cg_list_lock to rpool_list_lock to reflect the lock
      is for rpools.
   9. Renamed _get_cg_rpool to find_cg_rpool.
   10. Made creator an int variable instead of an atomic, as it's not
       required to be atomic.
 * Fixed freeing the right rpool during the query_limit error path
 * Added copyright for cgroup.c
 * Removed inclusion of parser.h from cgroup.c as it's included by
   cgroup_rdma.h
 * Reduced the try_charge functions to a single function and removed
   duplicate comments.

Changes from v2:
 * Fixed compilation error reported by 0-DAY kernel test infrastructure
   for m68k architecture where CONFIG_CGROUP is also not defined.
 * Fixed a comment in the patch that referred to the legacy mode of
   cgroup; changed it to refer to the v1 and v2 versions.
 * Added more information in commit log for rdma controller patch.

Changes from v1:
 * (To address comments from Tejun)
   a. reduces 3 patches to single patch
   b. removed resource word from the cgroup configuration files
   c. changed cgroup configuration file names to match other cgroups
   d. removed .list file and merged functionality with .max file
 * Based on comment to merge to single patch for rdma controller;
   IB/core patches are reduced to single patch.
 * Removed the pid-cgroup map and simplified the design -
   the charge/uncharge caller stack keeps track of the rdmacg for a
   given resource. This removes the need to maintain and perform
   hash lookups. It also allows slightly more accurate resource
   charging/uncharging when a process is moved from one cgroup to
   another with active resources and continues to allocate more.
 * Critical fix: removed the rdma cgroup's dependency on kernel module
   header files to avoid crashes when modules are upgraded without a kernel
   upgrade, which is very common due to the high rate of change in the IB
   stack, which also ships as individual kernel modules.
 * uobject extended to keep track of the owning rdma cgroup, so that the
   same rdmacg can be used while uncharging.
 * Added support functions to hide details of the rdmacg device in uverbs
   modules for the cases of cgroup enabled/disabled at compile time. This
   avoids multiple ifdefs for every API in the uverbs layer.
 * Removed failure counters in first patch, which will be added once
   initial feature is merged.
 * Fixed stale access to an rpool that is being freed, while doing
   configuration of the rdma.verb.max file.
 * Fixed rpool resource leak while querying max, current values.

Changes from v0:
(To address comments from Haggai, Doug, Liran, Tejun, Sean, Jason)
 * Redesigned to support per device, per cgroup limit settings by bringing
   in the concept of a resource pool.
 * Redesigned to let IB stack define the resources instead of rdma
   controller using resource template.
 * Redesigned to support hw vendor specific limits setting
   (optional to drivers).
 * Created new rdma controller instead of piggyback on device cgroup.
 * Fixed race conditions for multiple tasks sharing rdma resources.
 * Removed dependency on the task_struct.

Patch 2/3 fails to merge with Doug's tree, as that tree has different
code; it requires a manual merge.
Once we complete the overall review, I will generate a separate patch of
just patch 2/3 against Doug's tree (linux-rdma) to avoid the merge
conflict.


Parav Pandit (3):
  rdmacg: Added rdma cgroup controller
  IB/core: added support to use rdma cgroup controller
  rdmacg: Added documentation for rdmacg

 Documentation/cgroup-v1/rdma.txt      | 106 +++++
 Documentation/cgroup-v2.txt           |  33 ++
 drivers/infiniband/core/Makefile      |   1 +
 drivers/infiniband/core/cgroup.c      | 114 +++++
 drivers/infiniband/core/core_priv.h   |  36 ++
 drivers/infiniband/core/device.c      |  16 +
 drivers/infiniband/core/uverbs_cmd.c  | 164 +++++++-
 drivers/infiniband/core/uverbs_main.c |  19 +
 include/linux/cgroup_rdma.h           |  53 +++
 include/linux/cgroup_subsys.h         |   4 +
 include/rdma/ib_verbs.h               |  30 +-
 init/Kconfig                          |  10 +
 kernel/Makefile                       |   1 +
 kernel/cgroup_rdma.c                  | 753 ++++++++++++++++++++++++++++++++++
 14 files changed, 1324 insertions(+), 16 deletions(-)
 create mode 100644 Documentation/cgroup-v1/rdma.txt
 create mode 100644 drivers/infiniband/core/cgroup.c
 create mode 100644 include/linux/cgroup_rdma.h
 create mode 100644 kernel/cgroup_rdma.c

-- 
1.8.3.1


* [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-20 11:00 [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
@ 2016-02-20 11:00 ` Parav Pandit
  2016-02-21  7:43   ` Leon Romanovsky
  2016-02-24 13:13   ` Haggai Eran
  2016-02-20 11:00 ` [PATCHv6 2/3] IB/core: added support to use " Parav Pandit
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-20 11:00 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-rdma, tj, lizefan,
	hannes, dledford, liranl, sean.hefty, jgunthorpe, haggaie
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module, pandit.parav

Added an rdma cgroup controller that does accounting and limit enforcement
of rdma/IB verbs and hw resources.

Added an rdma cgroup header file which defines the APIs to perform
the charging/uncharging functionality and the device registration through
which devices participate in the controller's accounting and limit
enforcement. It also defines the rdmacg_device structure that binds the
IB stack to the RDMA cgroup controller.

RDMA resources are tracked using resource pools. A resource pool is a per
device, per cgroup entity, which allows setting up accounting limits
on a per device basis.

Resources are not defined by the RDMA cgroup; instead they are defined
by the external IB stack module. This allows extending the IB stack
without changing the kernel, as the IB stack is going through changes
and enhancements.

Resource pools are created/destroyed dynamically, whenever
charging/uncharging occurs and whenever user configuration is done.
It's a tradeoff of memory versus a little more code that creates
resource pools only when necessary, instead of creating them at
cgroup creation and device registration time.

Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
---
 include/linux/cgroup_rdma.h   |  53 +++
 include/linux/cgroup_subsys.h |   4 +
 init/Kconfig                  |  10 +
 kernel/Makefile               |   1 +
 kernel/cgroup_rdma.c          | 753 ++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 821 insertions(+)
 create mode 100644 include/linux/cgroup_rdma.h
 create mode 100644 kernel/cgroup_rdma.c

diff --git a/include/linux/cgroup_rdma.h b/include/linux/cgroup_rdma.h
new file mode 100644
index 0000000..b370733
--- /dev/null
+++ b/include/linux/cgroup_rdma.h
@@ -0,0 +1,53 @@
+#ifndef _CGROUP_RDMA_H
+#define _CGROUP_RDMA_H
+
+#include <linux/cgroup.h>
+
+struct rdma_cgroup {
+#ifdef CONFIG_CGROUP_RDMA
+	struct cgroup_subsys_state	css;
+
+	spinlock_t rpool_list_lock;	/* protects resource pool list */
+	struct list_head rpool_head;	/* head to keep track of all resource
+					 * pools that belongs to this cgroup.
+					 */
+#endif
+};
+
+#ifdef CONFIG_CGROUP_RDMA
+
+struct rdmacg_device;
+
+struct rdmacg_pool_info {
+	const char **resource_name_table;
+	int table_len;
+};
+
+struct rdmacg_device {
+	struct rdmacg_pool_info pool_info;
+	struct list_head	rdmacg_list;
+	struct list_head	rpool_head;
+	spinlock_t		rpool_lock;	/* protects resource pool list */
+	char			*name;
+};
+
+/* APIs for RDMA/IB stack to publish when a device wants to
+ * participate in resource accounting
+ */
+int rdmacg_register_device(struct rdmacg_device *device);
+void rdmacg_unregister_device(struct rdmacg_device *device);
+
+/* APIs for RDMA/IB stack to charge/uncharge pool specific resources */
+int rdmacg_try_charge(struct rdma_cgroup **rdmacg,
+		      struct rdmacg_device *device,
+		      int resource_index,
+		      int num);
+void rdmacg_uncharge(struct rdma_cgroup *cg,
+		     struct rdmacg_device *device,
+		     int resource_index,
+		     int num);
+void rdmacg_query_limit(struct rdmacg_device *device,
+			int *limits, int max_count);
+
+#endif	/* CONFIG_CGROUP_RDMA */
+#endif	/* _CGROUP_RDMA_H */
diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 0df0336a..d0e597c 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -56,6 +56,10 @@ SUBSYS(hugetlb)
 SUBSYS(pids)
 #endif
 
+#if IS_ENABLED(CONFIG_CGROUP_RDMA)
+SUBSYS(rdma)
+#endif
+
 /*
  * The following subsystems are not supported on the default hierarchy.
  */
diff --git a/init/Kconfig b/init/Kconfig
index 2232080..1741b65 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1054,6 +1054,16 @@ config CGROUP_PIDS
 	  since the PIDs limit only affects a process's ability to fork, not to
 	  attach to a cgroup.
 
+config CGROUP_RDMA
+	bool "RDMA controller"
+	help
+	  Provides enforcement of RDMA resources defined by the IB stack.
+	  It is fairly easy for consumers to exhaust RDMA resources, which
+	  can result in resource unavailability to other consumers.
+	  The RDMA controller is designed to stop this from happening.
+	  Attaching processes with active RDMA resources to the cgroup
+	  hierarchy is allowed even if doing so crosses the hierarchy's limit.
+
 config CGROUP_FREEZER
 	bool "Freezer controller"
 	help
diff --git a/kernel/Makefile b/kernel/Makefile
index baa55e5..501f5df 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -56,6 +56,7 @@ obj-$(CONFIG_COMPAT) += compat.o
 obj-$(CONFIG_CGROUPS) += cgroup.o
 obj-$(CONFIG_CGROUP_FREEZER) += cgroup_freezer.o
 obj-$(CONFIG_CGROUP_PIDS) += cgroup_pids.o
+obj-$(CONFIG_CGROUP_RDMA) += cgroup_rdma.o
 obj-$(CONFIG_CPUSETS) += cpuset.o
 obj-$(CONFIG_UTS_NS) += utsname.o
 obj-$(CONFIG_USER_NS) += user_namespace.o
diff --git a/kernel/cgroup_rdma.c b/kernel/cgroup_rdma.c
new file mode 100644
index 0000000..4412b03
--- /dev/null
+++ b/kernel/cgroup_rdma.c
@@ -0,0 +1,753 @@
+/*
+ * This file is subject to the terms and conditions of version 2 of the GNU
+ * General Public License.  See the file COPYING in the main directory of the
+ * Linux distribution for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/atomic.h>
+#include <linux/seq_file.h>
+#include <linux/hashtable.h>
+#include <linux/cgroup.h>
+#include <linux/parser.h>
+#include <linux/cgroup_rdma.h>
+
+#define RDMACG_MAX_STR "max"
+
+static DEFINE_MUTEX(dev_mutex);
+static LIST_HEAD(dev_list_head);
+
+enum rdmacg_file_type {
+	RDMACG_RESOURCE_MAX,
+	RDMACG_RESOURCE_STAT,
+};
+
+/* resource tracker per resource for rdma cgroup */
+struct rdmacg_resource {
+	int max;
+	int usage;
+};
+
+/*
+ * Resource pool object which represents per cgroup, per device
+ * resources. There can be multiple instances
+ * of this object per cgroup, therefore it cannot be embedded within
+ * the rdma_cgroup structure. It is maintained as a list.
+ */
+struct rdmacg_resource_pool {
+	struct list_head cg_list;
+	struct list_head dev_list;
+
+	struct rdmacg_device *device;
+	struct rdmacg_resource *resources;
+	struct rdma_cgroup *cg;	/* owner cg used during device cleanup */
+
+	int	refcnt;		/* count active user tasks of this pool */
+	int	num_max_cnt;	/* total number counts which are set to max */
+};
+
+static struct rdma_cgroup *css_rdmacg(struct cgroup_subsys_state *css)
+{
+	return container_of(css, struct rdma_cgroup, css);
+}
+
+static struct rdma_cgroup *parent_rdmacg(struct rdma_cgroup *cg)
+{
+	return css_rdmacg(cg->css.parent);
+}
+
+static inline struct rdma_cgroup *task_rdmacg(struct task_struct *task)
+{
+	return css_rdmacg(task_css(task, rdma_cgrp_id));
+}
+
+static inline void set_resource_limit(struct rdmacg_resource_pool *rpool,
+				      int index, int new_max)
+{
+	if (new_max == S32_MAX) {
+		if (rpool->resources[index].max != S32_MAX)
+			rpool->num_max_cnt++;
+	} else {
+		if (rpool->resources[index].max == S32_MAX)
+			rpool->num_max_cnt--;
+	}
+	rpool->resources[index].max = new_max;
+}
+
+static void set_all_resource_max_limit(struct rdmacg_resource_pool *rpool)
+{
+	struct rdmacg_pool_info *pool_info = &rpool->device->pool_info;
+	int i;
+
+	for (i = 0; i < pool_info->table_len; i++)
+		set_resource_limit(rpool, i, S32_MAX);
+}
+
+static void free_cg_rpool_mem(struct rdmacg_resource_pool *rpool)
+{
+	kfree(rpool->resources);
+	kfree(rpool);
+}
+
+static void free_cg_rpool(struct rdmacg_resource_pool *rpool)
+{
+	spin_lock(&rpool->device->rpool_lock);
+	list_del(&rpool->dev_list);
+	spin_unlock(&rpool->device->rpool_lock);
+
+	free_cg_rpool_mem(rpool);
+}
+
+static struct rdmacg_resource_pool*
+find_cg_rpool_locked(struct rdma_cgroup *cg,
+		     struct rdmacg_device *device)
+
+{
+	struct rdmacg_resource_pool *pool;
+
+	lockdep_assert_held(&cg->rpool_list_lock);
+
+	list_for_each_entry(pool, &cg->rpool_head, cg_list)
+		if (pool->device == device)
+			return pool;
+
+	return NULL;
+}
+
+static int
+alloc_cg_rpool(struct rdma_cgroup *cg,
+	       struct rdmacg_device *device)
+{
+	struct rdmacg_resource_pool *rpool, *other_rpool;
+	struct rdmacg_pool_info *pool_info = &device->pool_info;
+	int ret;
+
+	rpool = kzalloc(sizeof(*rpool), GFP_KERNEL);
+	if (!rpool) {
+		ret = -ENOMEM;
+		goto err;
+	}
+	rpool->resources = kcalloc(pool_info->table_len,
+				   sizeof(*rpool->resources),
+				   GFP_KERNEL);
+	if (!rpool->resources) {
+		ret = -ENOMEM;
+		goto alloc_err;
+	}
+
+	rpool->device = device;
+	rpool->cg = cg;
+	INIT_LIST_HEAD(&rpool->cg_list);
+	INIT_LIST_HEAD(&rpool->dev_list);
+	spin_lock_init(&device->rpool_lock);
+	set_all_resource_max_limit(rpool);
+
+	spin_lock(&cg->rpool_list_lock);
+
+	other_rpool = find_cg_rpool_locked(cg, device);
+
+	/*
+	 * If another task added a resource pool for this device for this
+	 * cgroup, then free the recently created one and use the one we found.
+	 */
+	if (other_rpool) {
+		spin_unlock(&cg->rpool_list_lock);
+		free_cg_rpool(rpool);
+		return 0;
+	}
+
+	list_add_tail(&rpool->cg_list, &cg->rpool_head);
+
+	spin_lock(&device->rpool_lock);
+	list_add_tail(&rpool->dev_list, &device->rpool_head);
+	spin_unlock(&device->rpool_lock);
+
+	spin_unlock(&cg->rpool_list_lock);
+	return 0;
+
+alloc_err:
+	kfree(rpool);
+err:
+	return ret;
+}
+
+/**
+ * uncharge_cg_resource - uncharge resource for rdma cgroup
+ * @cg: pointer to cg to uncharge and all parents in hierarchy
+ * @device: pointer to ib device
+ * @index: index of the resource to uncharge in cg (resource pool)
+ * @num: the number of rdma resource to uncharge
+ *
+ * It also frees the resource pool in the hierarchy for the resource pool
+ * which was created as part of the charging operation.
+ */
+static void uncharge_cg_resource(struct rdma_cgroup *cg,
+				 struct rdmacg_device *device,
+				 int index, int num)
+{
+	struct rdmacg_resource_pool *rpool;
+	struct rdmacg_pool_info *pool_info = &device->pool_info;
+
+	spin_lock(&cg->rpool_list_lock);
+	rpool = find_cg_rpool_locked(cg, device);
+
+	/*
+	 * A negative count (or overflow) is invalid,
+	 * it indicates a bug in the rdma controller.
+	 */
+	rpool->resources[index].usage -= num;
+
+	WARN_ON_ONCE(rpool->resources[index].usage < 0);
+	rpool->refcnt--;
+	if (rpool->refcnt == 0 && rpool->num_max_cnt == pool_info->table_len) {
+		/*
+		 * No user of the rpool and all entries are set to max, so
+		 * safe to delete this rpool.
+		 */
+		list_del(&rpool->cg_list);
+		spin_unlock(&cg->rpool_list_lock);
+
+		free_cg_rpool(rpool);
+		return;
+	}
+	spin_unlock(&cg->rpool_list_lock);
+}
+
+/**
+ * rdmacg_uncharge_resource - hierarchically uncharge rdma resource count
+ * @device: pointer to rdmacg device
+ * @index: index of the resource to uncharge in cg in given resource pool
+ * @num: the number of rdma resource to uncharge
+ *
+ */
+void rdmacg_uncharge(struct rdma_cgroup *cg,
+		     struct rdmacg_device *device,
+		     int index, int num)
+{
+	struct rdma_cgroup *p;
+
+	for (p = cg; p; p = parent_rdmacg(p))
+		uncharge_cg_resource(p, device, index, num);
+
+	css_put(&cg->css);
+}
+EXPORT_SYMBOL(rdmacg_uncharge);
+
+/**
+ * charge_cg_resource - charge resource for rdma cgroup
+ * @cg: pointer to cg to charge
+ * @device: pointer to rdmacg device
+ * @index: index of the resource to charge in cg (resource pool)
+ * @num: the number of rdma resource to charge
+ */
+static int charge_cg_resource(struct rdma_cgroup *cg,
+			      struct rdmacg_device *device,
+			      int index, int num)
+{
+	struct rdmacg_resource_pool *rpool;
+	s64 new;
+	int ret = 0;
+
+retry:
+	spin_lock(&cg->rpool_list_lock);
+	rpool = find_cg_rpool_locked(cg, device);
+	if (!rpool) {
+		spin_unlock(&cg->rpool_list_lock);
+		ret = alloc_cg_rpool(cg, device);
+		if (ret)
+			goto err;
+		else
+			goto retry;
+	}
+	new = num + rpool->resources[index].usage;
+	if (new > rpool->resources[index].max) {
+		ret = -EAGAIN;
+	} else {
+		rpool->refcnt++;
+		rpool->resources[index].usage = new;
+	}
+	spin_unlock(&cg->rpool_list_lock);
+err:
+	return ret;
+}
+
+/**
+ * rdmacg_try_charge_resource - hierarchically try to charge the rdma resource
+ * @device: pointer to rdmacg device
+ * @rdmacg: pointer to rdma cgroup which will own this resource
+ * @index: index of the resource to charge in cg (resource pool)
+ * @num: the number of rdma resource to charge
+ *
+ * This function charges resources of the cgroup hierarchy.
+ * It will fail if the charge would cause the new value to exceed the
+ * hierarchical limit.
+ * Returns 0 if the charge succeeded, otherwise -EAGAIN, -ENOMEM or -EINVAL.
+ * Returns a pointer to the rdmacg for this resource via @rdmacg.
+ *
+ * The charger needs to account resources on two criteria:
+ * (a) per cgroup and (b) per device resource usage.
+ * Per cgroup resource usage ensures that tasks of the cgroup don't cross
+ * the configured limits.
+ * Per device accounting provides granular configuration in multi device usage.
+ * It allocates a resource pool in the hierarchy for each parent it comes
+ * across for the first resource; later the resource pool will be available,
+ * so charging/uncharging will be much faster from then on.
+ */
+int rdmacg_try_charge(struct rdma_cgroup **rdmacg,
+		      struct rdmacg_device *device,
+		      int index, int num)
+{
+	struct rdma_cgroup *cg, *p, *q;
+	int ret;
+
+	cg = task_rdmacg(current);
+
+	for (p = cg; p; p = parent_rdmacg(p)) {
+		ret = charge_cg_resource(p, device, index, num);
+		if (ret)
+			goto err;
+	}
+	/*
+	 * hold on to css, as cgroup can be removed but resource
+	 * accounting happens on css.
+	 */
+	css_get(&cg->css);
+	*rdmacg = cg;
+	return 0;
+
+err:
+	for (q = cg; q != p; q = parent_rdmacg(q))
+		uncharge_cg_resource(q, device, index, num);
+	return ret;
+}
+EXPORT_SYMBOL(rdmacg_try_charge);
+
+/**
+ * rdmacg_register_device - register rdmacg device to rdma controller.
+ * @device: pointer to rdmacg device whose resources need to be accounted.
+ *
+ * If the IB stack wishes a device to participate in rdma cgroup resource
+ * tracking, it must invoke this API to register with the rdma cgroup before
+ * any user space application can start using the RDMA resources.
+ * Returns 0 on success or -EINVAL when the given table length is beyond
+ * the supported size.
+ */
+int rdmacg_register_device(struct rdmacg_device *device)
+{
+	if (device->pool_info.table_len > 64)
+		return -EINVAL;
+
+	INIT_LIST_HEAD(&device->rdmacg_list);
+	INIT_LIST_HEAD(&device->rpool_head);
+	spin_lock_init(&device->rpool_lock);
+
+	mutex_lock(&dev_mutex);
+	list_add_tail(&device->rdmacg_list, &dev_list_head);
+	mutex_unlock(&dev_mutex);
+	return 0;
+}
+EXPORT_SYMBOL(rdmacg_register_device);
+
+/**
+ * rdmacg_unregister_device - unregister the rdmacg device
+ * from the rdma controller.
+ * @device: pointer to rdmacg device which was previously registered with rdma
+ *          controller using rdmacg_register_device().
+ *
+ * IB stack must invoke this after all the resources of the IB device
+ * are destroyed and after ensuring that no more resources will be created
+ * when this API is invoked.
+ */
+void rdmacg_unregister_device(struct rdmacg_device *device)
+{
+	struct rdmacg_resource_pool *rpool, *tmp;
+	struct rdma_cgroup *cg;
+
+	/*
+	 * Synchronize with any active resource settings,
+	 * usage query happening via configfs.
+	 * At this stage, there should not be any active resource pools
+	 * for this device, as RDMA/IB stack is expected to shutdown,
+	 * tear down all the applications and free up resources.
+	 */
+	mutex_lock(&dev_mutex);
+	list_del_init(&device->rdmacg_list);
+	mutex_unlock(&dev_mutex);
+
+	/*
+	 * Now that this device is off the cgroup list, it's safe to free
+	 * all the rpool resources.
+	 */
+	list_for_each_entry_safe(rpool, tmp, &device->rpool_head, dev_list) {
+		list_del_init(&rpool->dev_list);
+		cg = rpool->cg;
+
+		spin_lock(&cg->rpool_list_lock);
+		list_del_init(&rpool->cg_list);
+		spin_unlock(&cg->rpool_list_lock);
+
+		free_cg_rpool_mem(rpool);
+	}
+}
+EXPORT_SYMBOL(rdmacg_unregister_device);
+
+/**
+ * rdmacg_query_limit - query the resource limits that
+ * might have been configured by the user.
+ * @device: pointer to rdmacg device
+ * @limits: pointer to an array of limits where rdma cg will provide
+ *          the configured limits of the cgroup.
+ * @max_count: number of array elements to be filled up at limits.
+ *
+ * This function walks the cgroup hierarchy of the calling task and
+ * reports, for each resource index, the smallest limit configured at
+ * any level of the hierarchy, so that the caller sees the effective
+ * hierarchical limit.
+ */
+void rdmacg_query_limit(struct rdmacg_device *device,
+			int *limits, int max_count)
+{
+	struct rdma_cgroup *cg, *p;
+	struct rdmacg_resource_pool *rpool;
+	struct rdmacg_pool_info *pool_info;
+	int i;
+
+	cg = task_rdmacg(current);
+	pool_info = &device->pool_info;
+
+	for (i = 0; i < max_count; i++)
+		limits[i] = S32_MAX;
+
+	/*
+	 * Check which pool in the hierarchy gets the least amount of
+	 * resource limits.
+	 */
+	for (p = cg; p; p = parent_rdmacg(p)) {
+		spin_lock(&p->rpool_list_lock);
+		rpool = find_cg_rpool_locked(p, device);
+		if (rpool) {
+			for (i = 0; i < max_count; i++)
+				limits[i] = min_t(int, limits[i],
+					rpool->resources[i].max);
+		}
+		spin_unlock(&p->rpool_list_lock);
+	}
+}
+EXPORT_SYMBOL(rdmacg_query_limit);
+
+static int parse_resource(char *c, struct rdmacg_pool_info *pool_info,
+			  int *intval)
+{
+	substring_t argstr;
+	const char **table = pool_info->resource_name_table;
+	char *name, *value = c;
+	size_t len;
+	int ret, i = 0;
+
+	name = strsep(&value, "=");
+	if (!name || !value)
+		return -EINVAL;
+
+	len = strlen(value);
+
+	for (i = 0; i < pool_info->table_len; i++) {
+		if (strcmp(table[i], name))
+			continue;
+
+		argstr.from = value;
+		argstr.to = value + len;
+
+		ret = match_int(&argstr, intval);
+		if (ret >= 0) {
+			if (*intval < 0)
+				break;
+			return i;
+		}
+		if (strncmp(value, RDMACG_MAX_STR, len) == 0) {
+			*intval = S32_MAX;
+			return i;
+		}
+		break;
+	}
+	return -EINVAL;
+}
+
+static int rdmacg_parse_limits(char *options,
+			       struct rdmacg_pool_info *pool_info,
+			       int *new_limits, u64 *enables)
+{
+	char *c;
+	int err = -EINVAL;
+
+	/* parse resource options */
+	while ((c = strsep(&options, " ")) != NULL) {
+		int index, intval;
+
+		index = parse_resource(c, pool_info, &intval);
+		if (index < 0)
+			goto err;
+
+		new_limits[index] = intval;
+		*enables |= BIT(index);
+	}
+	return 0;
+
+err:
+	return err;
+}
+
+static struct rdmacg_device *rdmacg_get_device_locked(const char *name)
+{
+	struct rdmacg_device *device;
+
+	list_for_each_entry(device, &dev_list_head, rdmacg_list)
+		if (!strcmp(name, device->name))
+			return device;
+
+	return NULL;
+}
+
+static ssize_t rdmacg_resource_set_max(struct kernfs_open_file *of,
+				       char *buf, size_t nbytes, loff_t off)
+{
+	struct rdma_cgroup *cg = css_rdmacg(of_css(of));
+	const char *dev_name;
+	struct rdmacg_resource_pool *rpool;
+	struct rdmacg_device *device;
+	char *options = strstrip(buf);
+	struct rdmacg_pool_info *pool_info;
+	u64 enables = 0;
+	int *new_limits;
+	int i = 0, ret = 0;
+
+	/* extract the device name first */
+	dev_name = strsep(&options, " ");
+	if (!dev_name) {
+		ret = -EINVAL;
+		goto err;
+	}
+
+	/* acquire lock to synchronize with hot plug devices */
+	mutex_lock(&dev_mutex);
+
+	device = rdmacg_get_device_locked(dev_name);
+	if (!device) {
+		ret = -ENODEV;
+		goto parse_err;
+	}
+
+	pool_info = &device->pool_info;
+
+	new_limits = kcalloc(pool_info->table_len, sizeof(int), GFP_KERNEL);
+	if (!new_limits) {
+		ret = -ENOMEM;
+		goto parse_err;
+	}
+
+	ret = rdmacg_parse_limits(options, pool_info, new_limits, &enables);
+	if (ret)
+		goto opt_err;
+
+retry:
+	spin_lock(&cg->rpool_list_lock);
+	rpool = find_cg_rpool_locked(cg, device);
+	if (!rpool) {
+		spin_unlock(&cg->rpool_list_lock);
+		ret = alloc_cg_rpool(cg, device);
+		if (ret)
+			goto opt_err;
+		else
+			goto retry;
+	}
+
+	/* now set the new limits of the rpool */
+	while (enables) {
+		/* if user set the limit, enables bit is set */
+		if (enables & BIT(i)) {
+			enables &= ~BIT(i);
+			set_resource_limit(rpool, i, new_limits[i]);
+		}
+		i++;
+	}
+	if (rpool->refcnt == 0 &&
+	    rpool->num_max_cnt == pool_info->table_len) {
+		/*
+		 * No user of the rpool and all entries are
+		 * set to max, so safe to delete this rpool.
+		 */
+		list_del(&rpool->cg_list);
+		spin_unlock(&cg->rpool_list_lock);
+		free_cg_rpool(rpool);
+	} else {
+		spin_unlock(&cg->rpool_list_lock);
+	}
+
+opt_err:
+	kfree(new_limits);
+parse_err:
+	mutex_unlock(&dev_mutex);
+err:
+	return ret ?: nbytes;
+}
+
+static u32 *get_cg_rpool_values(struct rdma_cgroup *cg,
+				struct rdmacg_device *device,
+				enum rdmacg_file_type sf_type,
+				int count)
+{
+	struct rdmacg_resource_pool *rpool;
+	u32 *value_tbl;
+	int i, ret;
+
+	value_tbl = kcalloc(count, sizeof(u32), GFP_KERNEL);
+	if (!value_tbl) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	spin_lock(&cg->rpool_list_lock);
+
+	rpool = find_cg_rpool_locked(cg, device);
+
+	for (i = 0; i < count; i++) {
+		if (sf_type == RDMACG_RESOURCE_MAX) {
+			if (rpool)
+				value_tbl[i] = rpool->resources[i].max;
+			else
+				value_tbl[i] = S32_MAX;
+		} else {
+			if (rpool)
+				value_tbl[i] = rpool->resources[i].usage;
+		}
+	}
+
+	spin_unlock(&cg->rpool_list_lock);
+
+	return value_tbl;
+
+err:
+	return ERR_PTR(ret);
+}
+
+static int print_rpool_values(struct seq_file *sf,
+			      struct rdmacg_pool_info *pool_info,
+			      u32 *value_tbl)
+{
+	int i;
+
+	for (i = 0; i < pool_info->table_len; i++) {
+		seq_puts(sf, pool_info->resource_name_table[i]);
+		seq_putc(sf, '=');
+		if (value_tbl[i] == S32_MAX)
+			seq_puts(sf, RDMACG_MAX_STR);
+		else
+			seq_printf(sf, "%d", value_tbl[i]);
+		seq_putc(sf, ' ');
+	}
+	return 0;
+}
+
+static int rdmacg_resource_read(struct seq_file *sf, void *v)
+{
+	struct rdmacg_device *device;
+	struct rdma_cgroup *cg = css_rdmacg(seq_css(sf));
+	struct rdmacg_pool_info *pool_info;
+	u32 *value_tbl;
+	int ret = 0;
+
+	mutex_lock(&dev_mutex);
+
+	list_for_each_entry(device, &dev_list_head, rdmacg_list) {
+		pool_info = &device->pool_info;
+
+		/* get the value from resource pool */
+		value_tbl = get_cg_rpool_values(cg, device,
+						seq_cft(sf)->private,
+						pool_info->table_len);
+		if (IS_ERR(value_tbl)) {
+			ret = PTR_ERR(value_tbl);
+			break;
+		}
+
+		seq_printf(sf, "%s ", device->name);
+		ret = print_rpool_values(sf, pool_info, value_tbl);
+		seq_putc(sf, '\n');
+
+		kfree(value_tbl);
+		if (ret)
+			break;
+	}
+
+	mutex_unlock(&dev_mutex);
+	return ret;
+}
+
+static struct cftype rdmacg_files[] = {
+	{
+		.name = "max",
+		.write = rdmacg_resource_set_max,
+		.seq_show = rdmacg_resource_read,
+		.private = RDMACG_RESOURCE_MAX,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{
+		.name = "current",
+		.seq_show = rdmacg_resource_read,
+		.private = RDMACG_RESOURCE_STAT,
+		.flags = CFTYPE_NOT_ON_ROOT,
+	},
+	{ }	/* terminate */
+};
+
+static struct cgroup_subsys_state *
+rdmacg_css_alloc(struct cgroup_subsys_state *parent)
+{
+	struct rdma_cgroup *cg;
+
+	cg = kzalloc(sizeof(*cg), GFP_KERNEL);
+	if (!cg)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&cg->rpool_head);
+	spin_lock_init(&cg->rpool_list_lock);
+	return &cg->css;
+}
+
+static void rdmacg_css_free(struct cgroup_subsys_state *css)
+{
+	struct rdma_cgroup *cg = css_rdmacg(css);
+
+	kfree(cg);
+}
+
+/**
+ * rdmacg_css_offline - cgroup css_offline callback
+ * @css: css of interest
+ *
+ * This function is called when @css is about to go away and is
+ * responsible for shooting down all rdmacg state associated with
+ * @css. As part of that it marks all the resource pool entries as
+ * max, so that once resources are uncharged, the associated
+ * resource pool can be freed as well.
+ */
+static void rdmacg_css_offline(struct cgroup_subsys_state *css)
+{
+	struct rdma_cgroup *cg = css_rdmacg(css);
+	struct rdmacg_resource_pool *rpool;
+
+	spin_lock(&cg->rpool_list_lock);
+
+	list_for_each_entry(rpool, &cg->rpool_head, cg_list)
+		set_all_resource_max_limit(rpool);
+
+	spin_unlock(&cg->rpool_list_lock);
+}
+
+struct cgroup_subsys rdma_cgrp_subsys = {
+	.css_alloc	= rdmacg_css_alloc,
+	.css_free	= rdmacg_css_free,
+	.css_offline	= rdmacg_css_offline,
+	.legacy_cftypes	= rdmacg_files,
+	.dfl_cftypes	= rdmacg_files,
+};
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCHv6 2/3] IB/core: added support to use rdma cgroup controller
  2016-02-20 11:00 [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
  2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
@ 2016-02-20 11:00 ` Parav Pandit
  2016-02-24 13:43   ` Haggai Eran
  2016-02-20 11:00 ` [PATCHv6 3/3] rdmacg: Added documentation for rdmacg Parav Pandit
  2016-02-22  4:59 ` [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
  3 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-20 11:00 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-rdma, tj, lizefan,
	hannes, dledford, liranl, sean.hefty, jgunthorpe, haggaie
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module, pandit.parav

Added support APIs for the IB core to register/unregister every RDMA
device with the rdma cgroup for tracking verbs and hw resources.
The IB core registers with the rdma cgroup controller and defines the
resources that can be accounted.
Also added support APIs for the uverbs layer to make use of the rdma
controller, and extended the uverbs layer to perform resource
charge/uncharge functionality.

Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
---
 drivers/infiniband/core/Makefile      |   1 +
 drivers/infiniband/core/cgroup.c      | 114 +++++++++++++++++++++++
 drivers/infiniband/core/core_priv.h   |  36 ++++++++
 drivers/infiniband/core/device.c      |  16 ++++
 drivers/infiniband/core/uverbs_cmd.c  | 164 ++++++++++++++++++++++++++++++----
 drivers/infiniband/core/uverbs_main.c |  19 ++++
 include/rdma/ib_verbs.h               |  30 ++++++-
 7 files changed, 364 insertions(+), 16 deletions(-)
 create mode 100644 drivers/infiniband/core/cgroup.c

diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
index d43a899..df40cee 100644
--- a/drivers/infiniband/core/Makefile
+++ b/drivers/infiniband/core/Makefile
@@ -13,6 +13,7 @@ ib_core-y :=			packer.o ud_header.o verbs.o sysfs.o \
 				roce_gid_mgmt.o
 ib_core-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
 ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o umem_rbtree.o
+ib_core-$(CONFIG_CGROUP_RDMA) += cgroup.o
 
 ib_mad-y :=			mad.o smi.o agent.o mad_rmpp.o
 
diff --git a/drivers/infiniband/core/cgroup.c b/drivers/infiniband/core/cgroup.c
new file mode 100644
index 0000000..9fb6c5b
--- /dev/null
+++ b/drivers/infiniband/core/cgroup.c
@@ -0,0 +1,114 @@
+/*
+ * This software is available to you under a choice of one of two
+ * licenses.  You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ *     Redistribution and use in source and binary forms, with or
+ *     without modification, are permitted provided that the following
+ *     conditions are met:
+ *
+ *      - Redistributions of source code must retain the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer.
+ *
+ *      - Redistributions in binary form must reproduce the above
+ *        copyright notice, this list of conditions and the following
+ *        disclaimer in the documentation and/or other materials
+ *        provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cgroup_rdma.h>
+#include <linux/parser.h>
+
+#include "core_priv.h"
+
+/*
+ * Resource table definition as seen by the user.
+ * Entries need to be added here when more resources are
+ * added/defined at the IB verb/core layer.
+ */
+static char const *resource_tokens[] = {
+	[RDMA_VERB_RESOURCE_UCTX] = "uctx",
+	[RDMA_VERB_RESOURCE_AH] = "ah",
+	[RDMA_VERB_RESOURCE_PD] = "pd",
+	[RDMA_VERB_RESOURCE_CQ] = "cq",
+	[RDMA_VERB_RESOURCE_MR] = "mr",
+	[RDMA_VERB_RESOURCE_MW] = "mw",
+	[RDMA_VERB_RESOURCE_SRQ] = "srq",
+	[RDMA_VERB_RESOURCE_QP] = "qp",
+	[RDMA_VERB_RESOURCE_FLOW] = "flow",
+};
+
+/**
+ * ib_device_register_rdmacg - register with rdma cgroup.
+ * @device: device to register to participate in resource
+ *          accounting by rdma cgroup.
+ *
+ * Register with the rdma cgroup. Should be called before
+ * exposing the rdma device to user space applications, to
+ * avoid resource accounting leaks.
+ * HCA drivers that wish to support hw specific resource
+ * accounting should set their resource pool ops before the
+ * IB core registers with the rdma cgroup.
+ * Returns 0 on success or a failure code otherwise.
+ */
+int ib_device_register_rdmacg(struct ib_device *device)
+{
+	device->cg_device.name = device->name;
+	device->cg_device.pool_info.resource_name_table = resource_tokens;
+	device->cg_device.pool_info.table_len = ARRAY_SIZE(resource_tokens);
+	return rdmacg_register_device(&device->cg_device);
+}
+
+/**
+ * ib_device_unregister_rdmacg - unregister with rdma cgroup.
+ * @device: device to unregister.
+ *
+ * Unregister with the rdma cgroup. Should be called after
+ * all resources are deallocated, and once the device has
+ * reached a stage where user applications can no longer
+ * allocate resources on it, to avoid any accounting leak.
+ * HCA drivers should clear their resource pool ops after the
+ * IB stack unregisters with the rdma cgroup.
+ */
+void ib_device_unregister_rdmacg(struct ib_device *device)
+{
+	rdmacg_unregister_device(&device->cg_device);
+}
+
+int ib_rdmacg_try_charge(struct ib_rdmacg_object *cg_obj,
+			 struct ib_device *device,
+			 int resource_index, int num)
+{
+	return rdmacg_try_charge(&cg_obj->cg, &device->cg_device,
+				 resource_index, num);
+}
+EXPORT_SYMBOL(ib_rdmacg_try_charge);
+
+void ib_rdmacg_uncharge(struct ib_rdmacg_object *cg_obj,
+			struct ib_device *device,
+			int resource_index, int num)
+{
+	rdmacg_uncharge(cg_obj->cg, &device->cg_device,
+			resource_index, num);
+}
+EXPORT_SYMBOL(ib_rdmacg_uncharge);
+
+void ib_rdmacg_query_limit(struct ib_device *device,
+			   int *limits, int max_count)
+{
+	rdmacg_query_limit(&device->cg_device, limits, max_count);
+}
+EXPORT_SYMBOL(ib_rdmacg_query_limit);
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 5cf6eb7..c8110a6 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -92,4 +92,40 @@ int ib_cache_setup_one(struct ib_device *device);
 void ib_cache_cleanup_one(struct ib_device *device);
 void ib_cache_release_one(struct ib_device *device);
 
+#ifdef CONFIG_CGROUP_RDMA
+
+int ib_device_register_rdmacg(struct ib_device *device);
+void ib_device_unregister_rdmacg(struct ib_device *device);
+
+int ib_rdmacg_try_charge(struct ib_rdmacg_object *cg_obj,
+			 struct ib_device *device,
+			 int resource_index, int num);
+
+void ib_rdmacg_uncharge(struct ib_rdmacg_object *cg_obj,
+			struct ib_device *device,
+			int resource_index, int num);
+
+void ib_rdmacg_query_limit(struct ib_device *device,
+			   int *limits, int max_count);
+#else
+static inline int ib_rdmacg_try_charge(struct ib_rdmacg_object *cg_obj,
+				       struct ib_device *device,
+				       int resource_index, int num)
+{ return 0; }
+
+static inline void ib_rdmacg_uncharge(struct ib_rdmacg_object *cg_obj,
+				      struct ib_device *device,
+				      int resource_index, int num)
+{ }
+
+static inline void ib_rdmacg_query_limit(struct ib_device *device,
+					 int *limits, int max_count)
+{
+	int i;
+
+	for (i = 0; i < max_count; i++)
+		limits[i] = S32_MAX;
+}
+#endif
+
 #endif /* _CORE_PRIV_H */
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 179e813..c3bd24c 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -352,10 +352,22 @@ int ib_register_device(struct ib_device *device,
 		goto out;
 	}
 
+#ifdef CONFIG_CGROUP_RDMA
+	ret = ib_device_register_rdmacg(device);
+	if (ret) {
+		printk(KERN_WARNING "Couldn't register device with rdma cgroup\n");
+		ib_cache_cleanup_one(device);
+		goto out;
+	}
+#endif
+
 	ret = ib_device_register_sysfs(device, port_callback);
 	if (ret) {
 		printk(KERN_WARNING "Couldn't register device %s with driver model\n",
 		       device->name);
+#ifdef CONFIG_CGROUP_RDMA
+		ib_device_unregister_rdmacg(device);
+#endif
 		ib_cache_cleanup_one(device);
 		goto out;
 	}
@@ -405,6 +417,10 @@ void ib_unregister_device(struct ib_device *device)
 
 	mutex_unlock(&device_mutex);
 
+#ifdef CONFIG_CGROUP_RDMA
+	ib_device_unregister_rdmacg(device);
+#endif
+
 	ib_device_unregister_sysfs(device);
 	ib_cache_cleanup_one(device);
 
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 1c02dea..7c51e8a 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -296,6 +296,7 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file,
 #endif
 	struct ib_ucontext		 *ucontext;
 	struct file			 *filp;
+	struct ib_rdmacg_object          cg_obj;
 	int ret;
 
 	if (out_len < sizeof resp)
@@ -315,13 +316,20 @@ ssize_t ib_uverbs_get_context(struct ib_uverbs_file *file,
 		   (unsigned long) cmd.response + sizeof resp,
 		   in_len - sizeof cmd, out_len - sizeof resp);
 
+	ret = ib_rdmacg_try_charge(&cg_obj, ib_dev,
+				   RDMA_VERB_RESOURCE_UCTX, 1);
+	if (ret)
+		goto err;
+
 	ucontext = ib_dev->alloc_ucontext(ib_dev, &udata);
 	if (IS_ERR(ucontext)) {
 		ret = PTR_ERR(ucontext);
-		goto err;
+		goto err_alloc;
 	}
 
 	ucontext->device = ib_dev;
+	ucontext->cg_obj = cg_obj;
+
 	INIT_LIST_HEAD(&ucontext->pd_list);
 	INIT_LIST_HEAD(&ucontext->mr_list);
 	INIT_LIST_HEAD(&ucontext->mw_list);
@@ -388,6 +396,9 @@ err_free:
 	put_pid(ucontext->tgid);
 	ib_dev->dealloc_ucontext(ucontext);
 
+err_alloc:
+	ib_rdmacg_uncharge(&cg_obj, ib_dev, RDMA_VERB_RESOURCE_UCTX, 1);
+
 err:
 	mutex_unlock(&file->mutex);
 	return ret;
@@ -396,7 +407,8 @@ err:
 static void copy_query_dev_fields(struct ib_uverbs_file *file,
 				  struct ib_device *ib_dev,
 				  struct ib_uverbs_query_device_resp *resp,
-				  struct ib_device_attr *attr)
+				  struct ib_device_attr *attr,
+				  int *limits)
 {
 	resp->fw_ver		= attr->fw_ver;
 	resp->node_guid		= ib_dev->node_guid;
@@ -407,14 +419,18 @@ static void copy_query_dev_fields(struct ib_uverbs_file *file,
 	resp->vendor_part_id	= attr->vendor_part_id;
 	resp->hw_ver		= attr->hw_ver;
-	resp->max_qp		= attr->max_qp;
+	resp->max_qp		= min_t(int, attr->max_qp,
+					limits[RDMA_VERB_RESOURCE_QP]);
 	resp->max_qp_wr		= attr->max_qp_wr;
 	resp->device_cap_flags	= attr->device_cap_flags;
 	resp->max_sge		= attr->max_sge;
 	resp->max_sge_rd	= attr->max_sge_rd;
-	resp->max_cq		= attr->max_cq;
+	resp->max_cq		= min_t(int, attr->max_cq,
+					limits[RDMA_VERB_RESOURCE_CQ]);
 	resp->max_cqe		= attr->max_cqe;
-	resp->max_mr		= attr->max_mr;
-	resp->max_pd		= attr->max_pd;
+	resp->max_mr		= min_t(int, attr->max_mr,
+					limits[RDMA_VERB_RESOURCE_MR]);
+	resp->max_pd		= min_t(int, attr->max_pd,
+					limits[RDMA_VERB_RESOURCE_PD]);
 	resp->max_qp_rd_atom	= attr->max_qp_rd_atom;
 	resp->max_ee_rd_atom	= attr->max_ee_rd_atom;
 	resp->max_res_rd_atom	= attr->max_res_rd_atom;
@@ -423,16 +440,19 @@ static void copy_query_dev_fields(struct ib_uverbs_file *file,
 	resp->atomic_cap		= attr->atomic_cap;
 	resp->max_ee			= attr->max_ee;
 	resp->max_rdd			= attr->max_rdd;
-	resp->max_mw			= attr->max_mw;
+	resp->max_mw			= min_t(int, attr->max_mw,
+						limits[RDMA_VERB_RESOURCE_MW]);
 	resp->max_raw_ipv6_qp		= attr->max_raw_ipv6_qp;
 	resp->max_raw_ethy_qp		= attr->max_raw_ethy_qp;
 	resp->max_mcast_grp		= attr->max_mcast_grp;
 	resp->max_mcast_qp_attach	= attr->max_mcast_qp_attach;
 	resp->max_total_mcast_qp_attach	= attr->max_total_mcast_qp_attach;
-	resp->max_ah			= attr->max_ah;
+	resp->max_ah			= min_t(int, attr->max_ah,
+						limits[RDMA_VERB_RESOURCE_AH]);
 	resp->max_fmr			= attr->max_fmr;
 	resp->max_map_per_fmr		= attr->max_map_per_fmr;
-	resp->max_srq			= attr->max_srq;
+	resp->max_srq			= min_t(int, attr->max_srq,
+						limits[RDMA_VERB_RESOURCE_SRQ]);
 	resp->max_srq_wr		= attr->max_srq_wr;
 	resp->max_srq_sge		= attr->max_srq_sge;
 	resp->max_pkeys			= attr->max_pkeys;
@@ -449,6 +469,7 @@ ssize_t ib_uverbs_query_device(struct ib_uverbs_file *file,
 	struct ib_uverbs_query_device_resp resp;
 	struct ib_device_attr              attr;
 	int                                ret;
+	int                                limits[RDMA_RESOURCE_MAX];
 
 	if (out_len < sizeof resp)
 		return -ENOSPC;
@@ -460,8 +481,10 @@ ssize_t ib_uverbs_query_device(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_query_limit(ib_dev, limits, RDMA_RESOURCE_MAX);
+
 	memset(&resp, 0, sizeof resp);
-	copy_query_dev_fields(file, ib_dev, &resp, &attr);
+	copy_query_dev_fields(file, ib_dev, &resp, &attr, limits);
 
 	if (copy_to_user((void __user *) (unsigned long) cmd.response,
 			 &resp, sizeof resp))
@@ -547,6 +570,13 @@ ssize_t ib_uverbs_alloc_pd(struct ib_uverbs_file *file,
 	if (!uobj)
 		return -ENOMEM;
 
+	ret = ib_rdmacg_try_charge(&uobj->cg_obj, file->device->ib_dev,
+				   RDMA_VERB_RESOURCE_PD, 1);
+	if (ret) {
+		kfree(uobj);
+		return -EPERM;
+	}
+
 	init_uobj(uobj, 0, file->ucontext, &pd_lock_class);
 	down_write(&uobj->mutex);
 
@@ -592,6 +622,8 @@ err_idr:
 	ib_dealloc_pd(pd);
 
 err:
+	ib_rdmacg_uncharge(&uobj->cg_obj, file->device->ib_dev,
+			   RDMA_VERB_RESOURCE_PD, 1);
 	put_uobj_write(uobj);
 	return ret;
 }
@@ -604,6 +636,7 @@ ssize_t ib_uverbs_dealloc_pd(struct ib_uverbs_file *file,
 	struct ib_uverbs_dealloc_pd cmd;
 	struct ib_uobject          *uobj;
 	struct ib_pd		   *pd;
+	struct ib_device           *device;
 	int                         ret;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
@@ -624,6 +657,10 @@ ssize_t ib_uverbs_dealloc_pd(struct ib_uverbs_file *file,
 	if (ret)
 		goto err_put;
 
+	device = uobj->context->device;
+
+	ib_rdmacg_uncharge(&uobj->cg_obj, device, RDMA_VERB_RESOURCE_PD, 1);
+
 	uobj->live = 0;
 	put_uobj_write(uobj);
 
@@ -997,6 +1034,11 @@ ssize_t ib_uverbs_reg_mr(struct ib_uverbs_file *file,
 		}
 	}
 
+	ret = ib_rdmacg_try_charge(&uobj->cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_MR, 1);
+	if (ret)
+		goto err_charge;
+
 	mr = pd->device->reg_user_mr(pd, cmd.start, cmd.length, cmd.hca_va,
 				     cmd.access_flags, &udata);
 	if (IS_ERR(mr)) {
@@ -1045,6 +1087,9 @@ err_unreg:
 	ib_dereg_mr(mr);
 
 err_put:
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_MR, 1);
+
+err_charge:
 	put_pd_read(pd);
 
 err_free:
@@ -1154,6 +1199,7 @@ ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file,
 	struct ib_uverbs_dereg_mr cmd;
 	struct ib_mr             *mr;
 	struct ib_uobject	 *uobj;
+	struct ib_pd             *pd;
 	int                       ret = -EINVAL;
 
 	if (copy_from_user(&cmd, buf, sizeof cmd))
@@ -1165,6 +1211,8 @@ ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file,
 
 	mr = uobj->object;
 
+	pd = mr->pd;
+
 	ret = ib_dereg_mr(mr);
 	if (!ret)
 		uobj->live = 0;
@@ -1174,6 +1222,8 @@ ssize_t ib_uverbs_dereg_mr(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_MR, 1);
+
 	idr_remove_uobj(&ib_uverbs_mr_idr, uobj);
 
 	mutex_lock(&file->mutex);
@@ -1216,6 +1266,11 @@ ssize_t ib_uverbs_alloc_mw(struct ib_uverbs_file *file,
 		goto err_free;
 	}
 
+	ret = ib_rdmacg_try_charge(&uobj->cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_MW, 1);
+	if (ret)
+		goto err_charge;
+
 	mw = pd->device->alloc_mw(pd, cmd.mw_type);
 	if (IS_ERR(mw)) {
 		ret = PTR_ERR(mw);
@@ -1261,6 +1316,9 @@ err_unalloc:
 	ib_dealloc_mw(mw);
 
 err_put:
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_MW, 1);
+
+err_charge:
 	put_pd_read(pd);
 
 err_free:
@@ -1275,6 +1333,7 @@ ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file,
 {
 	struct ib_uverbs_dealloc_mw cmd;
 	struct ib_mw               *mw;
+	struct ib_pd               *pd;
 	struct ib_uobject	   *uobj;
 	int                         ret = -EINVAL;
 
@@ -1286,6 +1345,7 @@ ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file,
 		return -EINVAL;
 
 	mw = uobj->object;
+	pd = mw->pd;
 
 	ret = ib_dealloc_mw(mw);
 	if (!ret)
@@ -1296,6 +1356,8 @@ ssize_t ib_uverbs_dealloc_mw(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_MW, 1);
+
 	idr_remove_uobj(&ib_uverbs_mw_idr, uobj);
 
 	mutex_lock(&file->mutex);
@@ -1395,6 +1457,11 @@ static struct ib_ucq_object *create_cq(struct ib_uverbs_file *file,
 	if (cmd_sz > offsetof(typeof(*cmd), flags) + sizeof(cmd->flags))
 		attr.flags = cmd->flags;
 
+	ret = ib_rdmacg_try_charge(&obj->uobject.cg_obj, file->device->ib_dev,
+				   RDMA_VERB_RESOURCE_CQ, 1);
+	if (ret)
+		goto err_charge;
+
 	cq = ib_dev->create_cq(ib_dev, &attr,
 					     file->ucontext, uhw);
 	if (IS_ERR(cq)) {
@@ -1442,6 +1509,10 @@ err_free:
 	ib_destroy_cq(cq);
 
 err_file:
+	ib_rdmacg_uncharge(&obj->uobject.cg_obj, file->device->ib_dev,
+			   RDMA_VERB_RESOURCE_CQ, 1);
+
+err_charge:
 	if (ev_file)
 		ib_uverbs_release_ucq(file, ev_file, obj);
 
@@ -1722,6 +1793,9 @@ ssize_t ib_uverbs_destroy_cq(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, uobj->context->device,
+			   RDMA_VERB_RESOURCE_CQ, 1);
+
 	idr_remove_uobj(&ib_uverbs_cq_idr, uobj);
 
 	mutex_lock(&file->mutex);
@@ -1777,6 +1851,12 @@ static int create_qp(struct ib_uverbs_file *file,
 		  &qp_lock_class);
 	down_write(&obj->uevent.uobject.mutex);
 
+	pd  = idr_read_pd(cmd->pd_handle, file->ucontext);
+	if (!pd) {
+		ret = -EINVAL;
+		goto err_put;
+	}
+
 	if (cmd->qp_type == IB_QPT_XRC_TGT) {
 		xrcd = idr_read_xrcd(cmd->pd_handle, file->ucontext,
 				     &xrcd_uobj);
@@ -1811,8 +1891,7 @@ static int create_qp(struct ib_uverbs_file *file,
 
 		scq = idr_read_cq(cmd->send_cq_handle, file->ucontext, !!rcq);
 		rcq = rcq ?: scq;
-		pd  = idr_read_pd(cmd->pd_handle, file->ucontext);
-		if (!pd || !scq) {
+		if (!scq) {
 			ret = -EINVAL;
 			goto err_put;
 		}
@@ -1858,6 +1937,11 @@ static int create_qp(struct ib_uverbs_file *file,
 			goto err_put;
 		}
 
+	ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_QP, 1);
+	if (ret)
+		goto err_put;
+
 	if (cmd->qp_type == IB_QPT_XRC_TGT)
 		qp = ib_create_qp(pd, &attr);
 	else
@@ -1865,7 +1949,7 @@ static int create_qp(struct ib_uverbs_file *file,
 
 	if (IS_ERR(qp)) {
 		ret = PTR_ERR(qp);
-		goto err_put;
+		goto err_create;
 	}
 
 	if (cmd->qp_type != IB_QPT_XRC_TGT) {
@@ -1940,6 +2024,10 @@ err_cb:
 err_destroy:
 	ib_destroy_qp(qp);
 
+err_create:
+	ib_rdmacg_uncharge(&obj->uevent.uobject.cg_obj, device,
+			   RDMA_VERB_RESOURCE_QP, 1);
+
 err_put:
 	if (xrcd)
 		put_xrcd_read(xrcd_uobj);
@@ -2379,6 +2467,7 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_qp_resp resp;
 	struct ib_uobject		*uobj;
 	struct ib_qp               	*qp;
+	struct ib_pd                    *pd;
 	struct ib_uqp_object        	*obj;
 	int                        	 ret = -EINVAL;
 
@@ -2391,6 +2480,7 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,
 	if (!uobj)
 		return -EINVAL;
 	qp  = uobj->object;
+	pd  = qp->pd;
 	obj = container_of(uobj, struct ib_uqp_object, uevent.uobject);
 
 	if (!list_empty(&obj->mcast_list)) {
@@ -2407,6 +2497,8 @@ ssize_t ib_uverbs_destroy_qp(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_QP, 1);
+
 	if (obj->uxrcd)
 		atomic_dec(&obj->uxrcd->refcnt);
 
@@ -2853,10 +2945,15 @@ ssize_t ib_uverbs_create_ah(struct ib_uverbs_file *file,
 	memset(&attr.dmac, 0, sizeof(attr.dmac));
 	memcpy(attr.grh.dgid.raw, cmd.attr.grh.dgid, 16);
 
+	ret = ib_rdmacg_try_charge(&uobj->cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_AH, 1);
+	if (ret)
+		goto err_put;
+
 	ah = ib_create_ah(pd, &attr);
 	if (IS_ERR(ah)) {
 		ret = PTR_ERR(ah);
-		goto err_put;
+		goto err_create;
 	}
 
 	ah->uobject  = uobj;
@@ -2892,6 +2989,9 @@ err_copy:
 err_destroy:
 	ib_destroy_ah(ah);
 
+err_create:
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_AH, 1);
+
 err_put:
 	put_pd_read(pd);
 
@@ -2906,6 +3006,7 @@ ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file,
 {
 	struct ib_uverbs_destroy_ah cmd;
 	struct ib_ah		   *ah;
+	struct ib_pd		   *pd;
 	struct ib_uobject	   *uobj;
 	int			    ret;
 
@@ -2916,6 +3017,7 @@ ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file,
 	if (!uobj)
 		return -EINVAL;
 	ah = uobj->object;
+	pd = ah->pd;
 
 	ret = ib_destroy_ah(ah);
 	if (!ret)
@@ -2926,6 +3028,8 @@ ssize_t ib_uverbs_destroy_ah(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device, RDMA_VERB_RESOURCE_AH, 1);
+
 	idr_remove_uobj(&ib_uverbs_ah_idr, uobj);
 
 	mutex_lock(&file->mutex);
@@ -3178,10 +3282,16 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
 		err = -EINVAL;
 		goto err_free;
 	}
+
+	err = ib_rdmacg_try_charge(&uobj->cg_obj, qp->pd->device,
+				   RDMA_VERB_RESOURCE_FLOW, 1);
+	if (err)
+		goto err_free;
+
 	flow_id = ib_create_flow(qp, flow_attr, IB_FLOW_DOMAIN_USER);
 	if (IS_ERR(flow_id)) {
 		err = PTR_ERR(flow_id);
-		goto err_free;
+		goto err_create;
 	}
 	flow_id->qp = qp;
 	flow_id->uobject = uobj;
@@ -3215,6 +3325,9 @@ err_copy:
 	idr_remove_uobj(&ib_uverbs_rule_idr, uobj);
 destroy_flow:
 	ib_destroy_flow(flow_id);
+err_create:
+	ib_rdmacg_uncharge(&uobj->cg_obj, qp->pd->device,
+			   RDMA_VERB_RESOURCE_FLOW, 1);
 err_free:
 	kfree(flow_attr);
 err_put:
@@ -3235,6 +3348,7 @@ int ib_uverbs_ex_destroy_flow(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_flow	cmd;
 	struct ib_flow			*flow_id;
 	struct ib_uobject		*uobj;
+	struct ib_pd			*pd;
 	int				ret;
 
 	if (ucore->inlen < sizeof(cmd))
@@ -3252,11 +3366,15 @@ int ib_uverbs_ex_destroy_flow(struct ib_uverbs_file *file,
 	if (!uobj)
 		return -EINVAL;
 	flow_id = uobj->object;
+	pd = flow_id->qp->pd;
 
 	ret = ib_destroy_flow(flow_id);
 	if (!ret)
 		uobj->live = 0;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device,
+			   RDMA_VERB_RESOURCE_FLOW, 1);
+
 	put_uobj_write(uobj);
 
 	idr_remove_uobj(&ib_uverbs_rule_idr, uobj);
@@ -3323,6 +3441,11 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
 	obj->uevent.events_reported = 0;
 	INIT_LIST_HEAD(&obj->uevent.event_list);
 
+	ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_SRQ, 1);
+	if (ret)
+		goto err_put_cq;
+
 	srq = pd->device->create_srq(pd, &attr, udata);
 	if (IS_ERR(srq)) {
 		ret = PTR_ERR(srq);
@@ -3387,6 +3510,8 @@ err_destroy:
 	ib_destroy_srq(srq);
 
 err_put:
+	ib_rdmacg_uncharge(&obj->uevent.uobject.cg_obj, pd->device,
+			   RDMA_VERB_RESOURCE_SRQ, 1);
 	put_pd_read(pd);
 
 err_put_cq:
@@ -3547,6 +3672,7 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
 	struct ib_uverbs_destroy_srq_resp resp;
 	struct ib_uobject		 *uobj;
 	struct ib_srq               	 *srq;
+	struct ib_pd                     *pd;
 	struct ib_uevent_object        	 *obj;
 	int                         	  ret = -EINVAL;
 	struct ib_usrq_object		 *us;
@@ -3561,6 +3687,7 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
 	srq = uobj->object;
 	obj = container_of(uobj, struct ib_uevent_object, uobject);
 	srq_type = srq->srq_type;
+	pd = srq->pd;
 
 	ret = ib_destroy_srq(srq);
 	if (!ret)
@@ -3571,6 +3698,9 @@ ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
 	if (ret)
 		return ret;
 
+	ib_rdmacg_uncharge(&uobj->cg_obj, pd->device,
+			   RDMA_VERB_RESOURCE_SRQ, 1);
+
 	if (srq_type == IB_SRQT_XRC) {
 		us = container_of(obj, struct ib_usrq_object, uevent);
 		atomic_dec(&us->uxrcd->refcnt);
@@ -3604,6 +3734,7 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 	struct ib_uverbs_ex_query_device_resp resp;
 	struct ib_uverbs_ex_query_device  cmd;
 	struct ib_device_attr attr;
+	int    limits[RDMA_RESOURCE_MAX];
 	int err;
 
 	if (ucore->inlen < sizeof(cmd))
@@ -3630,7 +3761,10 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
 	if (err)
 		return err;
 
-	copy_query_dev_fields(file, ib_dev, &resp.base, &attr);
+	ib_rdmacg_query_limit(ib_dev, limits, RDMA_RESOURCE_MAX);
+
+	copy_query_dev_fields(file, ib_dev, &resp.base, &attr, limits);
+
 	resp.comp_mask = 0;
 
 	if (ucore->outlen < resp.response_length + sizeof(resp.odp_caps))
diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
index e3ef288..4132622 100644
--- a/drivers/infiniband/core/uverbs_main.c
+++ b/drivers/infiniband/core/uverbs_main.c
@@ -49,6 +49,7 @@
 #include <asm/uaccess.h>
 
 #include "uverbs.h"
+#include "core_priv.h"
 
 MODULE_AUTHOR("Roland Dreier");
 MODULE_DESCRIPTION("InfiniBand userspace verbs access");
@@ -214,6 +215,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 	list_for_each_entry_safe(uobj, tmp, &context->ah_list, list) {
 		struct ib_ah *ah = uobj->object;
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, ah->pd->device,
+				   RDMA_VERB_RESOURCE_AH, 1);
 		idr_remove_uobj(&ib_uverbs_ah_idr, uobj);
 		ib_destroy_ah(ah);
 		kfree(uobj);
@@ -223,6 +226,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 	list_for_each_entry_safe(uobj, tmp, &context->mw_list, list) {
 		struct ib_mw *mw = uobj->object;
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, mw->pd->device,
+				   RDMA_VERB_RESOURCE_MW, 1);
 		idr_remove_uobj(&ib_uverbs_mw_idr, uobj);
 		ib_dealloc_mw(mw);
 		kfree(uobj);
@@ -231,6 +236,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 	list_for_each_entry_safe(uobj, tmp, &context->rule_list, list) {
 		struct ib_flow *flow_id = uobj->object;
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, flow_id->qp->pd->device,
+				   RDMA_VERB_RESOURCE_FLOW, 1);
 		idr_remove_uobj(&ib_uverbs_rule_idr, uobj);
 		ib_destroy_flow(flow_id);
 		kfree(uobj);
@@ -245,6 +252,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 		if (qp != qp->real_qp) {
 			ib_close_qp(qp);
 		} else {
+			ib_rdmacg_uncharge(&uobj->cg_obj, qp->pd->device,
+					   RDMA_VERB_RESOURCE_QP, 1);
 			ib_uverbs_detach_umcast(qp, uqp);
 			ib_destroy_qp(qp);
 		}
@@ -257,6 +266,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 		struct ib_uevent_object *uevent =
 			container_of(uobj, struct ib_uevent_object, uobject);
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, srq->pd->device,
+				   RDMA_VERB_RESOURCE_SRQ, 1);
 		idr_remove_uobj(&ib_uverbs_srq_idr, uobj);
 		ib_destroy_srq(srq);
 		ib_uverbs_release_uevent(file, uevent);
@@ -269,6 +280,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 		struct ib_ucq_object *ucq =
 			container_of(uobj, struct ib_ucq_object, uobject);
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, cq->device,
+				   RDMA_VERB_RESOURCE_CQ, 1);
 		idr_remove_uobj(&ib_uverbs_cq_idr, uobj);
 		ib_destroy_cq(cq);
 		ib_uverbs_release_ucq(file, ev_file, ucq);
@@ -278,6 +291,8 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 	list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) {
 		struct ib_mr *mr = uobj->object;
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, mr->pd->device,
+				   RDMA_VERB_RESOURCE_MR, 1);
 		idr_remove_uobj(&ib_uverbs_mr_idr, uobj);
 		ib_dereg_mr(mr);
 		kfree(uobj);
@@ -298,11 +313,15 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
 	list_for_each_entry_safe(uobj, tmp, &context->pd_list, list) {
 		struct ib_pd *pd = uobj->object;
 
+		ib_rdmacg_uncharge(&uobj->cg_obj, pd->device,
+				   RDMA_VERB_RESOURCE_PD, 1);
 		idr_remove_uobj(&ib_uverbs_pd_idr, uobj);
 		ib_dealloc_pd(pd);
 		kfree(uobj);
 	}
 
+	ib_rdmacg_uncharge(&context->cg_obj, context->device,
+			   RDMA_VERB_RESOURCE_UCTX, 1);
 	put_pid(context->tgid);
 
 	return context->device->dealloc_ucontext(context);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 120da1d..3140568 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -55,6 +55,8 @@
 #include <linux/mmu_notifier.h>
 #include <asm/uaccess.h>
 
+#include <linux/cgroup_rdma.h>
+
 extern struct workqueue_struct *ib_wq;
 
 union ib_gid {
@@ -95,6 +97,22 @@ enum rdma_protocol_type {
 	RDMA_PROTOCOL_USNIC_UDP
 };
 
+enum rdma_resource_type {
+	RDMA_VERB_RESOURCE_UCTX,
+	RDMA_VERB_RESOURCE_AH,
+	RDMA_VERB_RESOURCE_PD,
+	RDMA_VERB_RESOURCE_CQ,
+	RDMA_VERB_RESOURCE_MR,
+	RDMA_VERB_RESOURCE_MW,
+	RDMA_VERB_RESOURCE_SRQ,
+	RDMA_VERB_RESOURCE_QP,
+	RDMA_VERB_RESOURCE_FLOW,
+	/*
+	 * add any hw specific resource here as RDMA_HW_RESOURCE_NAME
+	 */
+	RDMA_RESOURCE_MAX,
+};
+
 __attribute_const__ enum rdma_transport_type
 rdma_node_get_transport(enum rdma_node_type node_type);
 
@@ -1231,6 +1249,12 @@ struct ib_fmr_attr {
 
 struct ib_umem;
 
+struct ib_rdmacg_object {
+#ifdef CONFIG_CGROUP_RDMA
+	struct rdma_cgroup	*cg;		/* owner rdma cgroup */
+#endif
+};
+
 struct ib_ucontext {
 	struct ib_device       *device;
 	struct list_head	pd_list;
@@ -1261,12 +1285,14 @@ struct ib_ucontext {
 	struct list_head	no_private_counters;
 	int                     odp_mrs_count;
 #endif
+	struct ib_rdmacg_object cg_obj;
 };
 
 struct ib_uobject {
 	u64			user_handle;	/* handle given to us by userspace */
 	struct ib_ucontext     *context;	/* associated user context */
 	void		       *object;		/* containing object */
+	struct ib_rdmacg_object cg_obj;
 	struct list_head	list;		/* link to context's list */
 	int			id;		/* index into kernel idr */
 	struct kref		ref;
@@ -1823,7 +1849,9 @@ struct ib_device {
 	u16                          is_switch:1;
 	u8                           node_type;
 	u8                           phys_port_cnt;
-
+#ifdef CONFIG_CGROUP_RDMA
+	struct rdmacg_device	     cg_device;
+#endif
 	/**
 	 * The following mandatory functions are used only at device
 	 * registration.  Keep functions such as these at the end of this
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCHv6 3/3] rdmacg: Added documentation for rdmacg
  2016-02-20 11:00 [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
  2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
  2016-02-20 11:00 ` [PATCHv6 2/3] IB/core: added support to use " Parav Pandit
@ 2016-02-20 11:00 ` Parav Pandit
  2016-02-24 14:26   ` Haggai Eran
  2016-02-22  4:59 ` [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
  3 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-20 11:00 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-rdma, tj, lizefan,
	hannes, dledford, liranl, sean.hefty, jgunthorpe, haggaie
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module, pandit.parav

Added documentation for the v1 and v2 cgroup versions, describing the
high level design and usage examples for the rdma controller.

Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
---
 Documentation/cgroup-v1/rdma.txt | 106 +++++++++++++++++++++++++++++++++++++++
 Documentation/cgroup-v2.txt      |  33 ++++++++++++
 2 files changed, 139 insertions(+)
 create mode 100644 Documentation/cgroup-v1/rdma.txt

diff --git a/Documentation/cgroup-v1/rdma.txt b/Documentation/cgroup-v1/rdma.txt
new file mode 100644
index 0000000..ee53303
--- /dev/null
+++ b/Documentation/cgroup-v1/rdma.txt
@@ -0,0 +1,106 @@
+				RDMA Controller
+				----------------
+
+Contents
+--------
+
+1. Overview
+  1-1. What is RDMA controller?
+  1-2. Why is RDMA controller needed?
+  1-3. How is RDMA controller implemented?
+2. Usage Examples
+
+1. Overview
+
+1-1. What is RDMA controller?
+-----------------------------
+
+The RDMA controller allows users to limit the RDMA/IB specific resources
+that a given set of processes can use. These processes are grouped using
+the RDMA controller.
+
+The RDMA controller operates on resources defined by the IB stack;
+these are mainly IB verb resources and, in the future, well defined
+hardware specific resources.
+
+1-2. Why is RDMA controller needed?
+-----------------------------------
+
+Currently user space applications can easily take away all the rdma device
+specific resources such as AH, CQ, QP, MR etc. As a result, other
+applications in another cgroup, or kernel space ULPs, may not even get a
+chance to allocate any rdma resources. This leads to service unavailability.
+
+Therefore the RDMA controller is needed, through which the resource
+consumption of processes can be limited. Through this controller the
+various rdma resources described by the IB stack can be accounted.
+
+1-3. How is RDMA controller implemented?
+----------------------------------------
+
+The RDMA cgroup allows configuring resource limits. These resources are
+not defined by the rdma controller; instead they are defined by the IB
+stack. This gives the IB stack great flexibility to define new resources
+without any changes to the rdma cgroup.
+The rdma cgroup maintains resource accounting per cgroup, per device,
+using a resource pool structure. Each such resource pool is limited by
+the rdma cgroup to 64 resources, which can be extended later if
+required.
+
+This resource pool object is linked to the cgroup css. In most use cases
+there are typically 0 to 4 resource pool instances per cgroup, per device,
+but nothing prevents having more. At present hundreds of RDMA devices per
+single cgroup may not be handled optimally; however, there is no known
+use case for such a configuration either.
+
+Since RDMA resources can be allocated by any process and freed by any of
+the child processes which share the address space, rdma resources are
+always owned by the creator cgroup css. This allows process migration from
+one cgroup to another without the major complexity of transferring
+resource ownership; such ownership is not really present anyway, due to
+the shared nature of rdma resources. Linking resources to the css also
+ensures that cgroups can be deleted after processes have migrated, even
+with active resources, though that is not the primary use case.
+
+Whenever RDMA resource charging occurs, the owner rdma cgroup is returned
+to the caller. The same rdma cgroup should be passed while uncharging the
+resource. This allows a process migrated with active RDMA resources to
+charge new resources to the new owner cgroup, and to uncharge resources
+against the previously charged cgroup after migrating to a new cgroup,
+even though that is not a primary use case.
+
+A resource pool object is created in the following situations.
+(a) The user sets a limit and no resource pool exists yet for the device
+of interest for the cgroup.
+(b) No resource limits were configured, but the IB/RDMA stack tries to
+charge the resource. This ensures resources are correctly uncharged when
+applications run without limits and limits are enforced only later;
+otherwise the usage count would drop negative during uncharging.
+
+The resource pool is destroyed when all the resource limits are set to max
+and the last resource is deallocated.
+
+Users should set all the limits to the max value if they intend to
+remove/unconfigure the resource pool for a particular device.
+
+2. Usage Examples
+-----------------
+
+(a) Configure resource limit:
+echo mlx4_0 mr=100 qp=10 ah=2 > /sys/fs/cgroup/rdma/1/rdma.max
+echo ocrdma1 mr=120 qp=20 cq=10 > /sys/fs/cgroup/rdma/2/rdma.max
+
+(b) Query resource limit:
+cat /sys/fs/cgroup/rdma/2/rdma.max
+#Output:
+mlx4_0 mr=100 qp=10 ah=2 pd=max
+ocrdma1 mr=120 qp=20 cq=10 pd=max ah=max
+
+(c) Query current usage:
+cat /sys/fs/cgroup/rdma/2/rdma.current
+#Output:
+mlx4_0 mr=95 qp=8 ah=2
+ocrdma1 mr=0 qp=20 cq=10
+
+(d) Delete resource limit:
+echo mlx4_0 mr=max qp=max ah=max > /sys/fs/cgroup/rdma/1/rdma.max
diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
index 3922ae1..dc3ab5f 100644
--- a/Documentation/cgroup-v2.txt
+++ b/Documentation/cgroup-v2.txt
@@ -47,6 +47,8 @@ CONTENTS
   5-3. IO
     5-3-1. IO Interface Files
     5-3-2. Writeback
+  5-4. RDMA
+    5-4-1. RDMA Interface Files
 P. Information on Kernel Programming
   P-1. Filesystem Support for Writeback
 D. Deprecated v1 Core Features
@@ -1090,6 +1092,37 @@ writeback as follows.
 	total available memory and applied the same way as
 	vm.dirty[_background]_ratio.
 
+5-4. RDMA
+
+The "rdma" controller regulates the distribution of RDMA resources.
+This controller implements resource accounting of the resources defined
+by the IB stack.
+
+5-4-1. RDMA Interface Files
+
+  rdma.max
+	A read-write file that exists for all cgroups except the root, and
+	describes the currently configured resource limits of an RDMA/IB device.
+
+	Lines are keyed by device name and are not ordered.
+	Each line contains space separated resource names and their configured
+	limits that can be distributed.
+
+	An example for mlx4 and ocrdma device follows.
+
+	  mlx4_0 ah=2 mr=1000 qp=104
+	  ocrdma1 cq=10 mr=900 qp=89
+	  mlx4_1 uctx=max ah=max pd=max cq=max qp=max
+
+  rdma.current
+	A read-only file that describes the current resource usage.
+	It exists for all cgroups except the root.
+
+	An example for mlx4 and ocrdma device follows.
+
+	  mlx4_0 mr=1000 qp=102 ah=2 flow=10 srq=0
+	  ocrdma1 mr=900 qp=79 cq=10 flow=0 srq=0
+	  mlx4_1 uctx=max ah=max pd=max cq=max qp=max
 
 P. Information on Kernel Programming
 
-- 
1.8.3.1

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
@ 2016-02-21  7:43   ` Leon Romanovsky
  2016-02-21 11:33     ` Parav Pandit
  2016-02-24 13:13   ` Haggai Eran
  1 sibling, 1 reply; 24+ messages in thread
From: Leon Romanovsky @ 2016-02-21  7:43 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, tj, lizefan,
	hannes, dledford, liranl, sean.hefty, jgunthorpe, haggaie,
	corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module

On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
> Added rdma cgroup controller that does accounting, limit enforcement
> on rdma/IB verbs and hw resources.
> 
> Added rdma cgroup header file which defines its APIs to perform
> charing/uncharing functionality and device registration which will
> participate in controller functions of accounting and limit
> enforcements. It also define rdmacg_device structure to bind IB stack
> and RDMA cgroup controller.
> 
> RDMA resources are tracked using resource pool. Resource pool is per
> device, per cgroup entity which allows setting up accounting limits
> on per device basis.
> 
> Resources are not defined by the RDMA cgroup, instead they are defined
> by the external module IB stack. This allows extending IB stack
> without changing kernel, as IB stack is going through changes
> and enhancements.
> 
> Resource pool is created/destroyed dynamically whenever
> charging/uncharging occurs respectively and whenever user
> configuration is done. Its a tradeoff of memory vs little more code
> space that creates resource pool whenever necessary,
> instead of creating them during cgroup creation and device registration
> time.
> 
> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
> ---
>  include/linux/cgroup_rdma.h   |  53 +++
>  include/linux/cgroup_subsys.h |   4 +
>  init/Kconfig                  |  10 +
>  kernel/Makefile               |   1 +
>  kernel/cgroup_rdma.c          | 753 ++++++++++++++++++++++++++++++++++++++++++
>  5 files changed, 821 insertions(+)
>  create mode 100644 include/linux/cgroup_rdma.h
>  create mode 100644 kernel/cgroup_rdma.c
> 
> diff --git a/include/linux/cgroup_rdma.h b/include/linux/cgroup_rdma.h
> new file mode 100644
> index 0000000..b370733
> --- /dev/null
> +++ b/include/linux/cgroup_rdma.h
> @@ -0,0 +1,53 @@
> +#ifndef _CGROUP_RDMA_H
> +#define _CGROUP_RDMA_H
> +
> +#include <linux/cgroup.h>
> +
> +struct rdma_cgroup {
> +#ifdef CONFIG_CGROUP_RDMA
> +	struct cgroup_subsys_state	css;
> +
> +	spinlock_t rpool_list_lock;	/* protects resource pool list */
> +	struct list_head rpool_head;	/* head to keep track of all resource
> +					 * pools that belongs to this cgroup.
> +					 */
> +#endif
> +};
> +
> +#ifdef CONFIG_CGROUP_RDMA

I'm sure you were already asked about this, but why do you need an ifdef
embedded in struct rdma_cgroup, and then the same one again right after it?
Can you place this ifdef before declaring struct rdma_cgroup?

> +
> +struct rdmacg_device;
> +

Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-21  7:43   ` Leon Romanovsky
@ 2016-02-21 11:33     ` Parav Pandit
  2016-02-21 13:45       ` Leon Romanovsky
  0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-21 11:33 UTC (permalink / raw)
  To: leon, Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma,
	Tejun Heo, lizefan, Johannes Weiner, Doug Ledford, Liran Liss,
	Hefty, Sean, Jason Gunthorpe, Haggai Eran, Jonathan Corbet,
	james.l.morris, serge, Or Gerlitz, Matan Barak, raindel, akpm,
	linux-security-module

On Sun, Feb 21, 2016 at 1:13 PM, Leon Romanovsky <leon@leon.nu> wrote:
> On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
> Can you place this ifdef before declaring struct rdma_cgroup?
Yes. I missed out this cleanup. Done locally now.

> Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-21 11:33     ` Parav Pandit
@ 2016-02-21 13:45       ` Leon Romanovsky
  2016-02-21 14:11         ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Leon Romanovsky @ 2016-02-21 13:45 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Haggai Eran, Jonathan Corbet, james.l.morris,
	serge, Or Gerlitz, Matan Barak, raindel, akpm,
	linux-security-module

On Sun, Feb 21, 2016 at 05:03:05PM +0530, Parav Pandit wrote:
> On Sun, Feb 21, 2016 at 1:13 PM, Leon Romanovsky <leon@leon.nu> wrote:
> > On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
> > Can you place this ifdef before declaring struct rdma_cgroup?
> Yes. I missed out this cleanup. Done locally now.

Great, additional thing which spotted my attention was related to
declaring and using the new cgroups functions. There are number of
places where you protected the calls by specific ifdefs in the
IB/core c-files and not in h-files as it is usually done.

> 
> > Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-21 13:45       ` Leon Romanovsky
@ 2016-02-21 14:11         ` Parav Pandit
  2016-02-21 15:09           ` Leon Romanovsky
  0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-21 14:11 UTC (permalink / raw)
  To: leon, Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma,
	Tejun Heo, lizefan, Johannes Weiner, Doug Ledford, Liran Liss,
	Hefty, Sean, Jason Gunthorpe, Haggai Eran, Jonathan Corbet,
	james.l.morris, serge, Or Gerlitz, Matan Barak, raindel, akpm,
	linux-security-module

CONFIG_CGROUP_RDMA

On Sun, Feb 21, 2016 at 7:15 PM, Leon Romanovsky <leon@leon.nu> wrote:
> On Sun, Feb 21, 2016 at 05:03:05PM +0530, Parav Pandit wrote:
>> On Sun, Feb 21, 2016 at 1:13 PM, Leon Romanovsky <leon@leon.nu> wrote:
>> > On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
>> > Can you place this ifdef before declaring struct rdma_cgroup?
>> Yes. I missed out this cleanup. Done locally now.
>
> Great, additional thing which spotted my attention was related to
> declaring and using the new cgroups functions. There are number of
> places where you protected the calls by specific ifdefs in the
> IB/core c-files and not in h-files as it is usually done.
>
ib_device_register_rdmacg, ib_device_unregister_rdmacg are the only
two functions called from IB/core as its tied to functionality.
They can also be implemented as NULL call when CONFIG_CGROUP_RDMA is undefined.
(Similar to ib_rdmacg_try_charge and others).
I didn't do because occurrence of call of register and unregister is
limited to single file and only twice compare to charge/uncharge
functions.
Either way is fine with me, I can make the changes which you
described. Let me know.

>>
>> > Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-21 14:11         ` Parav Pandit
@ 2016-02-21 15:09           ` Leon Romanovsky
  2016-02-21 15:15             ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Leon Romanovsky @ 2016-02-21 15:09 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Haggai Eran, Jonathan Corbet, james.l.morris,
	serge, Or Gerlitz, Matan Barak, raindel, akpm,
	linux-security-module

On Sun, Feb 21, 2016 at 07:41:08PM +0530, Parav Pandit wrote:
> CONFIG_CGROUP_RDMA
> 
> On Sun, Feb 21, 2016 at 7:15 PM, Leon Romanovsky <leon@leon.nu> wrote:
> > On Sun, Feb 21, 2016 at 05:03:05PM +0530, Parav Pandit wrote:
> >> On Sun, Feb 21, 2016 at 1:13 PM, Leon Romanovsky <leon@leon.nu> wrote:
> >> > On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
> >> > Can you place this ifdef before declaring struct rdma_cgroup?
> >> Yes. I missed out this cleanup. Done locally now.
> >
> > Great, additional thing which spotted my attention was related to
> > declaring and using the new cgroups functions. There are number of
> > places where you protected the calls by specific ifdefs in the
> > IB/core c-files and not in h-files as it is usually done.
> >
> ib_device_register_rdmacg, ib_device_unregister_rdmacg are the only
> two functions called from IB/core as its tied to functionality.
> They can also be implemented as NULL call when CONFIG_CGROUP_RDMA is undefined.
> (Similar to ib_rdmacg_try_charge and others).
> I didn't do because occurrence of call of register and unregister is
> limited to single file and only twice compare to charge/uncharge
> functions.
> Either way is fine with me, I can make the changes which you
> described. Let me know.

Please do,
IMHO, it is better to have one place which handles all relevant ifdefs
and functions. IB/core doesn't need to know about cgroups implementation.

Thanks

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-21 15:09           ` Leon Romanovsky
@ 2016-02-21 15:15             ` Parav Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-21 15:15 UTC (permalink / raw)
  To: leon, Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma,
	Tejun Heo, lizefan, Johannes Weiner, Doug Ledford, Liran Liss,
	Hefty, Sean, Jason Gunthorpe, Haggai Eran, Jonathan Corbet,
	james.l.morris, serge, Or Gerlitz, Matan Barak, raindel, akpm,
	linux-security-module

On Sun, Feb 21, 2016 at 8:39 PM, Leon Romanovsky <leon@leon.nu> wrote:
> On Sun, Feb 21, 2016 at 07:41:08PM +0530, Parav Pandit wrote:
>> CONFIG_CGROUP_RDMA
>>
>> On Sun, Feb 21, 2016 at 7:15 PM, Leon Romanovsky <leon@leon.nu> wrote:
>> > On Sun, Feb 21, 2016 at 05:03:05PM +0530, Parav Pandit wrote:
>> >> On Sun, Feb 21, 2016 at 1:13 PM, Leon Romanovsky <leon@leon.nu> wrote:
>> >> > On Sat, Feb 20, 2016 at 04:30:04PM +0530, Parav Pandit wrote:
>> >> > Can you place this ifdef before declaring struct rdma_cgroup?
>> >> Yes. I missed out this cleanup. Done locally now.
>> >
>> > Great, additional thing which spotted my attention was related to
>> > declaring and using the new cgroups functions. There are number of
>> > places where you protected the calls by specific ifdefs in the
>> > IB/core c-files and not in h-files as it is usually done.
>> >
>> ib_device_register_rdmacg, ib_device_unregister_rdmacg are the only
>> two functions called from IB/core as its tied to functionality.
>> They can also be implemented as NULL call when CONFIG_CGROUP_RDMA is undefined.
>> (Similar to ib_rdmacg_try_charge and others).
>> I didn't do because occurrence of call of register and unregister is
>> limited to single file and only twice compare to charge/uncharge
>> functions.
>> Either way is fine with me, I can make the changes which you
>> described. Let me know.
>
> Please do,
> IMHO, it is better to have one place which handles all relevant ifdefs
> and functions. IB/core doesn't need to know about cgroups implementation.
>
ok. Done. Thanks for the review. I will accumulate more comments from
Tejun and others before spinning v7.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 0/3] rdmacg: IB/core: rdma controller support
  2016-02-20 11:00 [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
                   ` (2 preceding siblings ...)
  2016-02-20 11:00 ` [PATCHv6 3/3] rdmacg: Added documentation for rdmacg Parav Pandit
@ 2016-02-22  4:59 ` Parav Pandit
  3 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-22  4:59 UTC (permalink / raw)
  To: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Haggai Eran
  Cc: Jonathan Corbet, james.l.morris, serge, Or Gerlitz, Matan Barak,
	raindel, akpm, linux-security-module, Parav Pandit

Hi Tejun, Doug,

I would like to know your direction on how we intend to merge this code,
so that I can generate the next patch v7 against the right tree.

A few options that come to mind are:
1. Shall we merge this code from Doug's linux-rdma tree, where there
are no merge conflicts in cgroup?
Or
2. Shall we merge this as two separate patches: the cgroup side from
Tejun's cgroups tree as kernel support, and a second from linux-rdma
for the IB core changes that make use of those features?
Or
3. Some other option that you have in mind.

I was thinking of (1), as that has the least churn in development, test,
and merge effort.

Regards,
Parav Pandit

On Sat, Feb 20, 2016 at 4:30 PM, Parav Pandit <pandit.parav@gmail.com> wrote:
> Overview:
> Currently user space applications can easily take away all the rdma
> device specific resources such as AH, CQ, QP, MR etc. Due to which other
> applications in other cgroup or kernel space ULPs may not even get chance
> to allocate any rdma resources. This results into service unavailibility.
>
> RDMA cgroup addresses this issue by allowing resource accounting,
> limit enforcement on per cgroup, per rdma device basis.
>
> Resources are not defined by the RDMA cgroup. Resources are defined
> by RDMA/IB stack. This allows rdma cgroup to remain constant while RDMA/IB
> stack can evolve without the need of rdma cgroup update. A new
> resource can be easily added by the RDMA/IB stack without touching
> rdma cgroup.
>
> RDMA uverbs layer will enforce limits on well defined RDMA verb
> resources without any HCA vendor device driver involvement.
>
> RDMA uverbs layer will not do accounting of hw vendor specific resources.
> Instead rdma cgroup provides set of APIs through which vendor specific
> drivers can do resource accounting by making use of rdma cgroup.
>
> Resource limit enforcement is hierarchical.
>
> When process is migrated with active RDMA resources, rdma cgroup
> continues to uncharge original cgroup for allocated resource. New resource
> is charged to current process's cgroup, which means if the process is
> migrated with active resources, for new resources it will be charged to
> new cgroup and old resources will be correctly uncharged from old cgroup.
>
> Changes from v5:
>  * (To address comments from Tejun)
>    1. Removed two type of resource pool, made is single type (as Tejun
>       described in past comment)
>    2. Removed match tokens and have array definition like "qp", "mr",
>       "cq" etc.
>    3. Wrote small parser and avoided match_token API as that won't work
>       due to different array definitions
>    4. Removed one-off remove API to unconfigure cgroup, instead all
>       resource should be set to max.
>    5. Removed resource pool type (user/default), instead having
>       max_num_cnt, when ref_cnt drops to zero and
>       max_num_cnt = total_rescource_cnt, pool is freed.
>    6. Resource definition ownership is now only with IB stack at single
>       header file, no longer in each low level driver.
>       This goes through IB maintainer and other reviewers eyes.
>       This continue to give flexibility to not force kernel upgrade for
>       few enums additions for new resource type.
>    7. Wherever possible pool lock is pushed out, except for hierarchical
>       charging/unchanging points, as it not possible to do so, due to
>       iterative process involves blocking allocations of rpool. Coming up
>       more levels up to release locks doesn't make any sense either.
>       This is anyway slow path where rpool is not allocated. Except for
>       typical first resource allocation, this is less travelled path.
>    8. Avoided %d manipulation due to removal of match_token and replaced
>       with seq_putc etc friend functions.
>  * Other minor cleanups.
>  * Fixed rdmacg_register_device to return error in case of IB stack
>    tries to register for than 64 resources.
>  * Fixed not allowing negative value on resource setting.
>  * Fixed cleaning up resource pools during device removal.
>  * Simplfied and rename table length field to use ARRAY_SIZE macro.
>  * Updated documentation to reflect single resource pool and shorter
>    file names.
>
> Changes from v4:
>  * Fixed compilation errors for lockdep_assert_held reported by kbuild
>    test robot
>  * Fixed compilation warning reported by coccinelle for extra
>    semicolon.
>  * Fixed compilation error for inclusion of linux/parser.h which
>    cannot be included in any header file, as that triggers multiple
>    inclusion error. parser.h is included in C files which intent to
>    use it.
>  * Removed unused header file inclusion in cgroup_rdma.c
>
> Changes from v3:
>  * (To address comments from Tejun)
>    1. Renamed cg_resource to rdmacg_resource
>    2. Merged dealloc_cg_rpool and _dealloc_cg_rpool to single function
>    3. Renamed _find_cg_rpool to find_cg_rpool_locked()
>    5. Removed RDMACG_MAX_RESOURCE_INDEX limitation
>    6. Fixed few alignments.
>    7. Improved description for RDMA cgroup configuration menu
>    8. Renamed cg_list_lock to rpool_list_lock to reflect the lock
>       is for rpools.
>    9. Renamed _get_cg_rpool to find_cg_rpool.
>    10. Made creator as int variable, instead of atomic as its not
>       required to be atomic.
>  * Fixed freeing right rpool during query_limit error path
>  * Added copywrite for cgroup.c
>  * Removed including parser.h from cgroup.c as its included by cgroup_rdma.h
>  * Reduced try_charge functions to single function and removed duplicate
>    comments.
>
> Changes from v2:
>  * Fixed compilation error reported by 0-DAY kernel test infrastructure
>    for m68k architecture where CONFIG_CGROUP is also not defined.
>  * Fixed comment in patch to refer to legacy mode of cgroup, changed to
>    refer to v1 and v2 version.
>  * Added more information in commit log for rdma controller patch.
>
> Changes from v1:
>  * (To address comments from Tejun)
>    a. reduces 3 patches to single patch
>    b. removed resource word from the cgroup configuration files
>    c. changed cgroup configuration file names to match other cgroups
>    d. removed .list file and merged functionality with .max file
>  * Based on comment to merge to single patch for rdma controller;
>    IB/core patches are reduced to single patch.
>  * Removed pid cgroup map and simplified design -
>    Charge/Uncharge caller stack keeps track of the rdmacg for
>    given resource. This removes the need to maintain and perform
>    hash lookup. This also allows little more accurate resource
>    charging/uncharging when process moved from one to other cgroup
>    with active resources and continue to allocate more.
>  * Critical fix: Removed rdma cgroup's dependency on the kernel module
>    header files to avoid crashes when modules are upgraded without kernel
>    upgrade, which is very common due to high amount of changes in IB stack
>    and it is also shipped as individual kernel modules.
>  * uboject extended to keep track of the owner rdma cgroup, so that same
>    rdmacg can be used while uncharging.
>  * Added support functions to hide details of rdmacg device in uverbs
>    modules for cases of cgroup enabled/disabled at compile time. This
>    avoids multiple ifdefs for every API in uverbs layer.
>  * Removed failure counters in first patch, which will be added once
>    initial feature is merged.
>  * Fixed stale rpool access which is getting freed, while doing
>    configuration to rdma.verb.max file.
>  * Fixed rpool resource leak while querying max, current values.
>
> Changes from v0:
> (To address comments from Haggai, Doug, Liran, Tejun, Sean, Jason)
>  * Redesigned to support per device per cgroup limit settings by bringing
>    concept of resource pool.
>  * Redesigned to let IB stack define the resources instead of rdma
>    controller using resource template.
>  * Redesigned to support hw vendor specific limits setting
>    (optional to drivers).
>  * Created new rdma controller instead of piggyback on device cgroup.
>  * Fixed race conditions for multiple tasks sharing rdma resources.
>  * Removed dependency on the task_struct.
>
> Patch 2/3 fails to merge with Doug's tree as it has different code.
> It requires manual merge.
> I will generate separate patch for Doug's tree for linux-rdma for
> just patch 2/3 to avoid merge conflict once we complete overall
> review.
>
>
> Parav Pandit (3):
>   rdmacg: Added rdma cgroup controller
>   IB/core: added support to use rdma cgroup controller
>   rdmacg: Added documentation for rdmacg
>
>  Documentation/cgroup-v1/rdma.txt      | 106 +++++
>  Documentation/cgroup-v2.txt           |  33 ++
>  drivers/infiniband/core/Makefile      |   1 +
>  drivers/infiniband/core/cgroup.c      | 114 +++++
>  drivers/infiniband/core/core_priv.h   |  36 ++
>  drivers/infiniband/core/device.c      |  16 +
>  drivers/infiniband/core/uverbs_cmd.c  | 164 +++++++-
>  drivers/infiniband/core/uverbs_main.c |  19 +
>  include/linux/cgroup_rdma.h           |  53 +++
>  include/linux/cgroup_subsys.h         |   4 +
>  include/rdma/ib_verbs.h               |  30 +-
>  init/Kconfig                          |  10 +
>  kernel/Makefile                       |   1 +
>  kernel/cgroup_rdma.c                  | 753 ++++++++++++++++++++++++++++++++++
>  14 files changed, 1324 insertions(+), 16 deletions(-)
>  create mode 100644 Documentation/cgroup-v1/rdma.txt
>  create mode 100644 drivers/infiniband/core/cgroup.c
>  create mode 100644 include/linux/cgroup_rdma.h
>  create mode 100644 kernel/cgroup_rdma.c
>
> --
> 1.8.3.1
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
  2016-02-21  7:43   ` Leon Romanovsky
@ 2016-02-24 13:13   ` Haggai Eran
  2016-02-24 16:16     ` Parav Pandit
  1 sibling, 1 reply; 24+ messages in thread
From: Haggai Eran @ 2016-02-24 13:13 UTC (permalink / raw)
  To: Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma, tj,
	lizefan, hannes, dledford, liranl, sean.hefty, jgunthorpe
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module

Hi,

Overall the patch looks good to me. I have a few comments below.

On 20/02/2016 13:00, Parav Pandit wrote:
> Resource pool is created/destroyed dynamically whenever
> charging/uncharging occurs respectively and whenever user
> configuration is done. Its a tradeoff of memory vs little more code
Its -> It's
> space that creates resource pool whenever necessary,
> instead of creating them during cgroup creation and device registration
> time.
> 
> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>

> diff --git a/include/linux/cgroup_rdma.h b/include/linux/cgroup_rdma.h
> new file mode 100644
> index 0000000..b370733
> --- /dev/null
> +++ b/include/linux/cgroup_rdma.h

> +struct rdmacg_device {
> +	struct rdmacg_pool_info pool_info;
> +	struct list_head	rdmacg_list;
> +	struct list_head	rpool_head;
> +	spinlock_t		rpool_lock;	/* protects rsource pool list */
rsource -> resource

> +	char			*name;
> +};
> +
> +/* APIs for RDMA/IB stack to publish when a device wants to
> + * participate in resource accounting
> + */
> +int rdmacg_register_device(struct rdmacg_device *device);
> +void rdmacg_unregister_device(struct rdmacg_device *device);
> +
> +/* APIs for RDMA/IB stack to charge/uncharge pool specific resources */
> +int rdmacg_try_charge(struct rdma_cgroup **rdmacg,
> +		      struct rdmacg_device *device,
> +		      int resource_index,
> +		      int num);
> +void rdmacg_uncharge(struct rdma_cgroup *cg,
> +		     struct rdmacg_device *device,
> +		     int resource_index,
> +		     int num);
> +void rdmacg_query_limit(struct rdmacg_device *device,
> +			int *limits, int max_count);
Couldn't you drop the max_count parameter and require the caller to
always provide pool_info->table_len items?

> +
> +#endif	/* CONFIG_CGROUP_RDMA */
> +#endif	/* _CGROUP_RDMA_H */
> diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
> index 0df0336a..d0e597c 100644
> --- a/include/linux/cgroup_subsys.h
> +++ b/include/linux/cgroup_subsys.h
> @@ -56,6 +56,10 @@ SUBSYS(hugetlb)
>  SUBSYS(pids)
>  #endif
>  
> +#if IS_ENABLED(CONFIG_CGROUP_RDMA)
> +SUBSYS(rdma)
> +#endif
> +
>  /*
>   * The following subsystems are not supported on the default hierarchy.
>   */
> diff --git a/init/Kconfig b/init/Kconfig
> index 2232080..1741b65 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1054,6 +1054,16 @@ config CGROUP_PIDS
>  	  since the PIDs limit only affects a process's ability to fork, not to
>  	  attach to a cgroup.
>  
> +config CGROUP_RDMA
> +	bool "RDMA controller"
> +	help
> +	  Provides enforcement of RDMA resources defined by IB stack.
> +	  It is fairly easy for consumers to exhaust RDMA resources, which
> +	  can result into resource unavailibility to other consumers.
unavailibility -> unavailability
> +	  RDMA controller is designed to stop this from happening.
> +	  Attaching processes with active RDMA resources to the cgroup
> +	  hierarchy is allowed even if can cross the hierarchy's limit.
> +
>  config CGROUP_FREEZER
>  	bool "Freezer controller"
>  	help
> diff --git a/kernel/Makefile b/kernel/Makefile
> index baa55e5..501f5df 100644

> +/**
> + * uncharge_cg_resource - uncharge resource for rdma cgroup
> + * @cg: pointer to cg to uncharge and all parents in hierarchy
It only uncharges a single cg, right?
> + * @device: pointer to ib device
> + * @index: index of the resource to uncharge in cg (resource pool)
> + * @num: the number of rdma resource to uncharge
> + *
> + * It also frees the resource pool in the hierarchy for the resource pool
> + * which was created as part of charing operation.
charing -> charging
> + */
> +static void uncharge_cg_resource(struct rdma_cgroup *cg,
> +				 struct rdmacg_device *device,
> +				 int index, int num)
> +{
> +	struct rdmacg_resource_pool *rpool;
> +	struct rdmacg_pool_info *pool_info = &device->pool_info;
> +
> +	spin_lock(&cg->rpool_list_lock);
> +	rpool = find_cg_rpool_locked(cg, device);
Is it possible for rpool to be NULL?

> +
> +	/*
> +	 * A negative count (or overflow) is invalid,
> +	 * it indicates a bug in the rdma controller.
> +	 */
> +	rpool->resources[index].usage -= num;
> +
> +	WARN_ON_ONCE(rpool->resources[index].usage < 0);
> +	rpool->refcnt--;
> +	if (rpool->refcnt == 0 && rpool->num_max_cnt == pool_info->table_len) {
> +		/*
> +		 * No user of the rpool and all entries are set to max, so
> +		 * safe to delete this rpool.
> +		 */
> +		list_del(&rpool->cg_list);
> +		spin_unlock(&cg->rpool_list_lock);
> +
> +		free_cg_rpool(rpool);
> +		return;
> +	}
> +	spin_unlock(&cg->rpool_list_lock);
> +}

> +/**
> + * charge_cg_resource - charge resource for rdma cgroup
> + * @cg: pointer to cg to charge
> + * @device: pointer to rdmacg device
> + * @index: index of the resource to charge in cg (resource pool)
> + * @num: the number of rdma resource to charge
> + */
> +static int charge_cg_resource(struct rdma_cgroup *cg,
> +			      struct rdmacg_device *device,
> +			      int index, int num)
> +{
> +	struct rdmacg_resource_pool *rpool;
> +	s64 new;
> +	int ret = 0;
> +
> +retry:
> +	spin_lock(&cg->rpool_list_lock);
> +	rpool = find_cg_rpool_locked(cg, device);
> +	if (!rpool) {
> +		spin_unlock(&cg->rpool_list_lock);
> +		ret = alloc_cg_rpool(cg, device);
> +		if (ret)
> +			goto err;
> +		else
> +			goto retry;
Instead of retrying after allocation of a new rpool, why not just return the
newly allocated rpool (or the existing one) from alloc_cg_rpool?

> +	}
> +	new = num + rpool->resources[index].usage;
> +	if (new > rpool->resources[index].max) {
> +		ret = -EAGAIN;
> +	} else {
> +		rpool->refcnt++;
> +		rpool->resources[index].usage = new;
> +	}
> +	spin_unlock(&cg->rpool_list_lock);
> +err:
> +	return ret;
> +}

> +static ssize_t rdmacg_resource_set_max(struct kernfs_open_file *of,
> +				       char *buf, size_t nbytes, loff_t off)
> +{
> +	struct rdma_cgroup *cg = css_rdmacg(of_css(of));
> +	const char *dev_name;
> +	struct rdmacg_resource_pool *rpool;
> +	struct rdmacg_device *device;
> +	char *options = strstrip(buf);
> +	struct rdmacg_pool_info *pool_info;
> +	u64 enables = 0;
This limits the number of resources to 64. Sounds fine to me, but I think 
there should be a check somewhere (maybe in rdmacg_register_device()?) to
make sure someone doesn't pass too many resources.
 
> +	int *new_limits;
> +	int i = 0, ret = 0;
> +
> +	/* extract the device name first */
> +	dev_name = strsep(&options, " ");
> +	if (!dev_name) {
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	/* acquire lock to synchronize with hot plug devices */
> +	mutex_lock(&dev_mutex);
> +
> +	device = rdmacg_get_device_locked(dev_name);
> +	if (!device) {
> +		ret = -ENODEV;
> +		goto parse_err;
> +	}
> +
> +	pool_info = &device->pool_info;
> +
> +	new_limits = kcalloc(pool_info->table_len, sizeof(int), GFP_KERNEL);
> +	if (!new_limits) {
> +		ret = -ENOMEM;
> +		goto parse_err;
> +	}
> +
> +	ret = rdmacg_parse_limits(options, pool_info, new_limits, &enables);
> +	if (ret)
> +		goto opt_err;
> +
> +retry:
> +	spin_lock(&cg->rpool_list_lock);
> +	rpool = find_cg_rpool_locked(cg, device);
> +	if (!rpool) {
> +		spin_unlock(&cg->rpool_list_lock);
> +		ret = alloc_cg_rpool(cg, device);
> +		if (ret)
> +			goto opt_err;
> +		else
> +			goto retry;
You can avoid the retry here too. Perhaps this can go into a function.

> +	}
> +
> +	/* now set the new limits of the rpool */
> +	while (enables) {
> +		/* if user set the limit, enables bit is set */
> +		if (enables & BIT(i)) {
> +			enables &= ~BIT(i);
> +			set_resource_limit(rpool, i, new_limits[i]);
> +		}
> +		i++;
> +	}
> +	if (rpool->refcnt == 0 &&
> +	    rpool->num_max_cnt == pool_info->table_len) {
> +		/*
> +		 * No user of the rpool and all entries are
> +		 * set to max, so safe to delete this rpool.
> +		 */
> +		list_del(&rpool->cg_list);
> +		spin_unlock(&cg->rpool_list_lock);
> +		free_cg_rpool(rpool);
> +	} else {
> +		spin_unlock(&cg->rpool_list_lock);
> +	}
You should consider putting this piece of code in a function (the
check of the reference counts and release of the rpool).

> +
> +opt_err:
> +	kfree(new_limits);
> +parse_err:
> +	mutex_unlock(&dev_mutex);
> +err:
> +	return ret ?: nbytes;
> +}
> +

> +
> +static int print_rpool_values(struct seq_file *sf,
This can return void.

> +			      struct rdmacg_pool_info *pool_info,
> +			      u32 *value_tbl)
> +{
> +	int i;
> +
> +	for (i = 0; i < pool_info->table_len; i++) {
> +		seq_puts(sf, pool_info->resource_name_table[i]);
> +		seq_putc(sf, '=');
> +		if (value_tbl[i] == S32_MAX)
> +			seq_puts(sf, RDMACG_MAX_STR);
> +		else
> +			seq_printf(sf, "%d", value_tbl[i]);
> +		seq_putc(sf, ' ');
> +	}
> +	return 0;
> +}
> +

Thanks,
Haggai

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 2/3] IB/core: added support to use rdma cgroup controller
  2016-02-20 11:00 ` [PATCHv6 2/3] IB/core: added support to use " Parav Pandit
@ 2016-02-24 13:43   ` Haggai Eran
  2016-02-24 16:05     ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Haggai Eran @ 2016-02-24 13:43 UTC (permalink / raw)
  To: Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma, tj,
	lizefan, hannes, dledford, liranl, sean.hefty, jgunthorpe
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module

On 20/02/2016 13:00, Parav Pandit wrote:
> +/**
> + * ib_device_unregister_rdmacg - unregister with rdma cgroup.
> + * @device: device to unregister.
> + *
> + * Unregister with the rdma cgroup. Should be called after
> + * all the resources are deallocated, and after a stage when any
> + * other resource allocation of user application cannot be done
> + * for this device to avoid any leak in accounting.
> + * HCA drivers should clear resource pool ops after ib stack
> + * unregisters with rdma cgroup.
HCA drivers don't supply their own ops in this version, right?
If so, you can update the comment.

> + */
> +void ib_device_unregister_rdmacg(struct ib_device *device)
> +{
> +	rdmacg_unregister_device(&device->cg_device);
> +}

> diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
> index 179e813..c3bd24c 100644
> --- a/drivers/infiniband/core/device.c
> +++ b/drivers/infiniband/core/device.c
> @@ -352,10 +352,22 @@ int ib_register_device(struct ib_device *device,
>  		goto out;
>  	}
>  
> +#ifdef CONFIG_CGROUP_RDMA
> +	ret = ib_device_register_rdmacg(device);
> +	if (ret) {
> +		printk(KERN_WARNING "Couldn't set up InfiniBand P_Key/GID cache\n");
You should update the error string, and I think checkpatch recommends
using pr_warn().

> +		ib_cache_cleanup_one(device);
> +		goto out;
> +	}
> +#endif
> +
>  	ret = ib_device_register_sysfs(device, port_callback);
>  	if (ret) {
>  		printk(KERN_WARNING "Couldn't register device %s with driver model\n",
>  		       device->name);
> +#ifdef CONFIG_CGROUP_RDMA
> +		ib_device_unregister_rdmacg(device);
> +#endif
>  		ib_cache_cleanup_one(device);
>  		goto out;
>  	}
> @@ -405,6 +417,10 @@ void ib_unregister_device(struct ib_device *device)
>  
>  	mutex_unlock(&device_mutex);
>  
> +#ifdef CONFIG_CGROUP_RDMA
> +	ib_device_unregister_rdmacg(device);
> +#endif
> +
>  	ib_device_unregister_sysfs(device);
>  	ib_cache_cleanup_one(device);
>  
> diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
> index 1c02dea..7c51e8a 100644
> --- a/drivers/infiniband/core/uverbs_cmd.c
> +++ b/drivers/infiniband/core/uverbs_cmd.c

> @@ -1777,6 +1851,12 @@ static int create_qp(struct ib_uverbs_file *file,
>  		  &qp_lock_class);
>  	down_write(&obj->uevent.uobject.mutex);
>  
> +	pd  = idr_read_pd(cmd->pd_handle, file->ucontext);
> +	if (!pd) {
> +		ret = -EINVAL;
> +		goto err_put;
> +	}
> +
I'm not sure I understand why you need to keep the PD here. Why 
don't you use the same ib_device that is used to create the QP? 
The same comment also applies to other uverbs commands.

>  	if (cmd->qp_type == IB_QPT_XRC_TGT) {
>  		xrcd = idr_read_xrcd(cmd->pd_handle, file->ucontext,
>  				     &xrcd_uobj);
> @@ -1811,8 +1891,7 @@ static int create_qp(struct ib_uverbs_file *file,
>  
>  		scq = idr_read_cq(cmd->send_cq_handle, file->ucontext, !!rcq);
>  		rcq = rcq ?: scq;
> -		pd  = idr_read_pd(cmd->pd_handle, file->ucontext);
> -		if (!pd || !scq) {
> +		if (!scq) {
>  			ret = -EINVAL;
>  			goto err_put;
>  		}
> @@ -1858,6 +1937,11 @@ static int create_qp(struct ib_uverbs_file *file,
>  			goto err_put;
>  		}
>  
> +	ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
> +				   RDMA_VERB_RESOURCE_QP, 1);
> +	if (ret)
> +		goto err_put;
> +
>  	if (cmd->qp_type == IB_QPT_XRC_TGT)
>  		qp = ib_create_qp(pd, &attr);
>  	else
> @@ -1865,7 +1949,7 @@ static int create_qp(struct ib_uverbs_file *file,
>  
>  	if (IS_ERR(qp)) {
>  		ret = PTR_ERR(qp);
> -		goto err_put;
> +		goto err_create;
>  	}
>  
>  	if (cmd->qp_type != IB_QPT_XRC_TGT) {
> @@ -1940,6 +2024,10 @@ err_cb:
>  err_destroy:
>  	ib_destroy_qp(qp);
>  
> +err_create:
> +	ib_rdmacg_uncharge(&obj->uevent.uobject.cg_obj, device,
> +			   RDMA_VERB_RESOURCE_QP, 1);
> +
>  err_put:
>  	if (xrcd)
>  		put_xrcd_read(xrcd_uobj);

> @@ -3323,6 +3441,11 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
>  	obj->uevent.events_reported = 0;
>  	INIT_LIST_HEAD(&obj->uevent.event_list);
>  
> +	ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
> +				   RDMA_VERB_RESOURCE_SRQ, 1);
> +	if (ret)
> +		goto err_put_cq;
> +
I think you need a new error label to release the PD IDR but not
uncharge.

>  	srq = pd->device->create_srq(pd, &attr, udata);
>  	if (IS_ERR(srq)) {
>  		ret = PTR_ERR(srq);
> @@ -3387,6 +3510,8 @@ err_destroy:
>  	ib_destroy_srq(srq);
>  
>  err_put:
> +	ib_rdmacg_uncharge(&obj->uevent.uobject.cg_obj, pd->device,
> +			   RDMA_VERB_RESOURCE_SRQ, 1);
>  	put_pd_read(pd);
>  
>  err_put_cq:

Regards,
Haggai


* Re: [PATCHv6 3/3] rdmacg: Added documentation for rdmacg
  2016-02-20 11:00 ` [PATCHv6 3/3] rdmacg: Added documentation for rdmacg Parav Pandit
@ 2016-02-24 14:26   ` Haggai Eran
  2016-02-24 15:21     ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Haggai Eran @ 2016-02-24 14:26 UTC (permalink / raw)
  To: Parav Pandit, cgroups, linux-doc, linux-kernel, linux-rdma, tj,
	lizefan, hannes, dledford, liranl, sean.hefty, jgunthorpe
  Cc: corbet, james.l.morris, serge, ogerlitz, matanb, raindel, akpm,
	linux-security-module

On 20/02/2016 13:00, Parav Pandit wrote:
> Added documentation for v1 and v2 version describing high
> level design and usage examples on using rdma controller.
> 
> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>

I think you might want to mention that resource limits are reflected
in the results returned from ib_uverbs_query_device/ibv_query_device 
or printed from "ibv_devinfo -v".


* Re: [PATCHv6 3/3] rdmacg: Added documentation for rdmacg
  2016-02-24 14:26   ` Haggai Eran
@ 2016-02-24 15:21     ` Parav Pandit
  2016-02-28  8:55       ` Haggai Eran
  0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-24 15:21 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On Wed, Feb 24, 2016 at 7:56 PM, Haggai Eran <haggaie@mellanox.com> wrote:
> On 20/02/2016 13:00, Parav Pandit wrote:
>> Added documentation for v1 and v2 version describing high
>> level design and usage examples on using rdma controller.
>>
>> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
>
> I think you might want to mention that resource limits are reflected
> in the results returned from ib_uverbs_query_device/ibv_query_device
> or printed from "ibv_devinfo -v".
>
It's a valid point.
Since this documentation is for the rdma controller, I was wondering:
should I put it in this documentation, or should I add it to uverbs_cmd.c?


* Re: [PATCHv6 2/3] IB/core: added support to use rdma cgroup controller
  2016-02-24 13:43   ` Haggai Eran
@ 2016-02-24 16:05     ` Parav Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-24 16:05 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On Wed, Feb 24, 2016 at 7:13 PM, Haggai Eran <haggaie@mellanox.com> wrote:

>> + * all the resources are deallocated, and after a stage when any
>> + * other resource allocation of user application cannot be done
>> + * for this device to avoid any leak in accounting.
>> + * HCA drivers should clear resource pool ops after ib stack
>> + * unregisters with rdma cgroup.
> HCA drivers don't supply their own ops in this version, right?
> If so, you can update the comment.
Yes. I will update.

>> +
> I'm not sure I understand why you need to keep the PD here. Why
> don't you use the same ib_device that is used to create the QP?
> The same applies comment also applies to other uverbs commands.

My bad. It was left over from a previous implementation. I have changed it now.

>
>>       if (cmd->qp_type == IB_QPT_XRC_TGT) {
>>               xrcd = idr_read_xrcd(cmd->pd_handle, file->ucontext,
>>                                    &xrcd_uobj);
>> @@ -1811,8 +1891,7 @@ static int create_qp(struct ib_uverbs_file *file,
>>
>>               scq = idr_read_cq(cmd->send_cq_handle, file->ucontext, !!rcq);
>>               rcq = rcq ?: scq;
>> -             pd  = idr_read_pd(cmd->pd_handle, file->ucontext);
>> -             if (!pd || !scq) {
>> +             if (!scq) {
>>                       ret = -EINVAL;
>>                       goto err_put;
>>               }
>> @@ -1858,6 +1937,11 @@ static int create_qp(struct ib_uverbs_file *file,
>>                       goto err_put;
>>               }
>>
>> +     ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
>> +                                RDMA_VERB_RESOURCE_QP, 1);
>> +     if (ret)
>> +             goto err_put;
>> +
>>       if (cmd->qp_type == IB_QPT_XRC_TGT)
>>               qp = ib_create_qp(pd, &attr);
>>       else
>> @@ -1865,7 +1949,7 @@ static int create_qp(struct ib_uverbs_file *file,
>>
>>       if (IS_ERR(qp)) {
>>               ret = PTR_ERR(qp);
>> -             goto err_put;
>> +             goto err_create;
>>       }
>>
>>       if (cmd->qp_type != IB_QPT_XRC_TGT) {
>> @@ -1940,6 +2024,10 @@ err_cb:
>>  err_destroy:
>>       ib_destroy_qp(qp);
>>
>> +err_create:
>> +     ib_rdmacg_uncharge(&obj->uevent.uobject.cg_obj, device,
>> +                        RDMA_VERB_RESOURCE_QP, 1);
>> +
>>  err_put:
>>       if (xrcd)
>>               put_xrcd_read(xrcd_uobj);
>
>> @@ -3323,6 +3441,11 @@ static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
>>       obj->uevent.events_reported = 0;
>>       INIT_LIST_HEAD(&obj->uevent.event_list);
>>
>> +     ret = ib_rdmacg_try_charge(&obj->uevent.uobject.cg_obj, pd->device,
>> +                                RDMA_VERB_RESOURCE_SRQ, 1);
>> +     if (ret)
>> +             goto err_put_cq;
>> +
> I think you need a new error label to release the PD IDR but not
> uncharge.

Yes. Changed.


* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-24 13:13   ` Haggai Eran
@ 2016-02-24 16:16     ` Parav Pandit
  2016-02-25 12:03       ` Haggai Eran
  0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-24 16:16 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On Wed, Feb 24, 2016 at 6:43 PM, Haggai Eran <haggaie@mellanox.com> wrote:
> Hi,
>
> Overall I the patch looks good to me. I have a few comments below.
>
Thanks for the review. I am addressing most of the comments.
Some replies are inline.


> Its -> It's
Ok.

>> +void rdmacg_query_limit(struct rdmacg_device *device,
>> +                     int *limits, int max_count);
> You can drop the max_count parameter, and require the caller to
> always provide pool_info->table_len items, couldn't you?
>
Done.

>> +       can result into resource unavailibility to other consumers.
> unavailibility -> unavailability
Done.

>> +     struct rdmacg_resource_pool *rpool;
>> +     struct rdmacg_pool_info *pool_info = &device->pool_info;
>> +
>> +     spin_lock(&cg->rpool_list_lock);
>> +     rpool = find_cg_rpool_locked(cg, device);
> Is it possible for rpool to be NULL?
>
Unlikely, unless we have a bug in the cgroup implementation.
It may be worth adding a WARN_ON and returning from here to avoid a kernel crash.

>> +static int charge_cg_resource(struct rdma_cgroup *cg,
>> +                           struct rdmacg_device *device,
>> +                           int index, int num)
>> +{
>> +     struct rdmacg_resource_pool *rpool;
>> +     s64 new;
>> +     int ret = 0;
>> +
>> +retry:
>> +     spin_lock(&cg->rpool_list_lock);
>> +     rpool = find_cg_rpool_locked(cg, device);
>> +     if (!rpool) {
>> +             spin_unlock(&cg->rpool_list_lock);
>> +             ret = alloc_cg_rpool(cg, device);
>> +             if (ret)
>> +                     goto err;
>> +             else
>> +                     goto retry;
> Instead of retrying after allocation of a new rpool, why not just return the
> newly allocated rpool (or the existing one) from alloc_cg_rpool?

It can be done, but the locking semantics just become difficult to
review/maintain, because alloc_cg_rpool() would unlock and then lock
conditionally later on.
This path is typically hit only on the first allocation anyway. Once the
application has warmed up, it is unlikely to enter here.
I should change if (!rpool) to if (unlikely(!rpool)).


>
>> +     }
>> +     new = num + rpool->resources[index].usage;
>> +     if (new > rpool->resources[index].max) {
>> +             ret = -EAGAIN;
>> +     } else {
>> +             rpool->refcnt++;
>> +             rpool->resources[index].usage = new;
>> +     }
>> +     spin_unlock(&cg->rpool_list_lock);
>> +err:
>> +     return ret;
>> +}
>
>> +static ssize_t rdmacg_resource_set_max(struct kernfs_open_file *of,
>> +                                    char *buf, size_t nbytes, loff_t off)
>> +{
>> +     struct rdma_cgroup *cg = css_rdmacg(of_css(of));
>> +     const char *dev_name;
>> +     struct rdmacg_resource_pool *rpool;
>> +     struct rdmacg_device *device;
>> +     char *options = strstrip(buf);
>> +     struct rdmacg_pool_info *pool_info;
>> +     u64 enables = 0;
> This limits the number of resources to 64. Sounds fine to me, but I think
> there should be a check somewhere (maybe in rdmacg_register_device()?) to
> make sure someone doesn't pass too many resources.
Right. Such a check is in place in rdmacg_register_device(), which returns
-EINVAL when more than 64 resources are requested.

>> +     spin_lock(&cg->rpool_list_lock);
>> +     rpool = find_cg_rpool_locked(cg, device);
>> +     if (!rpool) {
>> +             spin_unlock(&cg->rpool_list_lock);
>> +             ret = alloc_cg_rpool(cg, device);
>> +             if (ret)
>> +                     goto opt_err;
>> +             else
>> +                     goto retry;
> You can avoid the retry here too. Perhaps this can go into a function.
>
In v5 I had a wrapper around this code which did similar hiding, using
the get_cg_rpool and put_cg_rpool helper functions.
But Tejun was of the opinion that I should have the locks outside of all
those functions. With that approach, this is done.
So I think it's OK to have it this way.

>> +     }
>> +
>> +     /* now set the new limits of the rpool */
>> +     while (enables) {
>> +             /* if user set the limit, enables bit is set */
>> +             if (enables & BIT(i)) {
>> +                     enables &= ~BIT(i);
>> +                     set_resource_limit(rpool, i, new_limits[i]);
>> +             }
>> +             i++;
>> +     }
>> +     if (rpool->refcnt == 0 &&
>> +         rpool->num_max_cnt == pool_info->table_len) {
>> +             /*
>> +              * No user of the rpool and all entries are
>> +              * set to max, so safe to delete this rpool.
>> +              */
>> +             list_del(&rpool->cg_list);
>> +             spin_unlock(&cg->rpool_list_lock);
>> +             free_cg_rpool(rpool);
>> +     } else {
>> +             spin_unlock(&cg->rpool_list_lock);
>> +     }
> You should consider putting this piece of code in a function (the
> check of the reference counts and release of the rpool).
>
Yes, I did; same as the above comment. Also, this function would have to
unlock. It's usually better to lock/unlock at the same function level,
instead of locking at one level and unlocking from inside the called
function.
Or should I have
cg_rpool_cond_free_unlock() for the above code (the check of the reference
counts and release of the rpool)?

>> +static int print_rpool_values(struct seq_file *sf,
> This can return void.
Done.


* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-24 16:16     ` Parav Pandit
@ 2016-02-25 12:03       ` Haggai Eran
  2016-02-25 13:34         ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Haggai Eran @ 2016-02-25 12:03 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On 24/02/2016 18:16, Parav Pandit wrote:
>>> +     struct rdmacg_resource_pool *rpool;
>>> +     struct rdmacg_pool_info *pool_info = &device->pool_info;
>>> +
>>> +     spin_lock(&cg->rpool_list_lock);
>>> +     rpool = find_cg_rpool_locked(cg, device);
>> Is it possible for rpool to be NULL?
>>
> Unlikely, unless we have a bug in the cgroup implementation.
> It may be worth adding a WARN_ON and returning from here to avoid a kernel crash.
Sounds good.

>>> +static int charge_cg_resource(struct rdma_cgroup *cg,
>>> +                           struct rdmacg_device *device,
>>> +                           int index, int num)
>>> +{
>>> +     struct rdmacg_resource_pool *rpool;
>>> +     s64 new;
>>> +     int ret = 0;
>>> +
>>> +retry:
>>> +     spin_lock(&cg->rpool_list_lock);
>>> +     rpool = find_cg_rpool_locked(cg, device);
>>> +     if (!rpool) {
>>> +             spin_unlock(&cg->rpool_list_lock);
>>> +             ret = alloc_cg_rpool(cg, device);
>>> +             if (ret)
>>> +                     goto err;
>>> +             else
>>> +                     goto retry;
>> Instead of retrying after allocation of a new rpool, why not just return the
>> newly allocated rpool (or the existing one) from alloc_cg_rpool?
> 
> It can be done, but locking semantics just becomes difficult to
> review/maintain with that where alloc_cg_rpool will unlock and lock
> conditionally later on.
Maybe I'm missing something, but couldn't you simply lock rpool_list_lock
inside alloc_cg_rpool()? It already does that around its call to 
find_cg_rpool_locked() and the insertion to cg_list.

> This path will be hit anyway on first allocation typically. Once
> application is warm up, it will be unlikely to enter here.
> I should change if(!rpool) to if (unlikely(!rpool)).
Theoretically the new allocated rpool can be released again by the time you
get to the second call to find_cg_rpool_locked().

>>> +     spin_lock(&cg->rpool_list_lock);
>>> +     rpool = find_cg_rpool_locked(cg, device);
>>> +     if (!rpool) {
>>> +             spin_unlock(&cg->rpool_list_lock);
>>> +             ret = alloc_cg_rpool(cg, device);
>>> +             if (ret)
>>> +                     goto opt_err;
>>> +             else
>>> +                     goto retry;
>> You can avoid the retry here too. Perhaps this can go into a function.
>>
> In v5 I had wrapper around code which used to similar hiding using
> get_cg_rpool and put_cg_rpool helper functions.
> But Tejun was of opinion that I should have locks outside of all those
> functions. With that approach, this is done.
> So I think its ok. to have it this way.
I thought that was about functions that only locked the lock, called the 
find function, and released the lock. What I'm suggesting is to have one
function that does "lock + find + allocate if needed + unlock", and another
function that does (under caller's lock) "check ref count + check max count +
release rpool".

>>> +     }
>>> +
>>> +     /* now set the new limits of the rpool */
>>> +     while (enables) {
>>> +             /* if user set the limit, enables bit is set */
>>> +             if (enables & BIT(i)) {
>>> +                     enables &= ~BIT(i);
>>> +                     set_resource_limit(rpool, i, new_limits[i]);
>>> +             }
>>> +             i++;
>>> +     }
>>> +     if (rpool->refcnt == 0 &&
>>> +         rpool->num_max_cnt == pool_info->table_len) {
>>> +             /*
>>> +              * No user of the rpool and all entries are
>>> +              * set to max, so safe to delete this rpool.
>>> +              */
>>> +             list_del(&rpool->cg_list);
>>> +             spin_unlock(&cg->rpool_list_lock);
>>> +             free_cg_rpool(rpool);
>>> +     } else {
>>> +             spin_unlock(&cg->rpool_list_lock);
>>> +     }
>> You should consider putting this piece of code in a function (the
>> check of the reference counts and release of the rpool).
>>
> Yes. I did. Same as above comment. Also this function will have to
> unlock. Its usually better to lock/unlock from same function level,
> instead of locking at one level and unlocking from inside the
> function.
> Or
> I should have
> cg_rpool_cond_free_unlock() for above code (check of the reference
> counts and release of the rpool)?
It is confusing to lock and unlock in different contexts. Why not lock
in the caller's context? free_cg_rpool() could be called under rpool_list_lock,
couldn't it? It locks device->rpool_lock, but uncharge_cg_resource() also
takes both locks in the same order.

Thanks,
Haggai


* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-25 12:03       ` Haggai Eran
@ 2016-02-25 13:34         ` Parav Pandit
  2016-02-25 14:26           ` Parav Pandit
  2016-02-25 14:30           ` Haggai Eran
  0 siblings, 2 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-25 13:34 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On Thu, Feb 25, 2016 at 5:33 PM, Haggai Eran <haggaie@mellanox.com> wrote:
>>>> +retry:
>>>> +     spin_lock(&cg->rpool_list_lock);
>>>> +     rpool = find_cg_rpool_locked(cg, device);
>>>> +     if (!rpool) {
>>>> +             spin_unlock(&cg->rpool_list_lock);
>>>> +             ret = alloc_cg_rpool(cg, device);
>>>> +             if (ret)
>>>> +                     goto err;
>>>> +             else
>>>> +                     goto retry;
>>> Instead of retrying after allocation of a new rpool, why not just return the
>>> newly allocated rpool (or the existing one) from alloc_cg_rpool?
>>
>> It can be done, but locking semantics just becomes difficult to
>> review/maintain with that where alloc_cg_rpool will unlock and lock
>> conditionally later on.
> Maybe I'm missing something, but couldn't you simply lock rpool_list_lock
> inside alloc_cg_rpool()? It already does that around its call to
> find_cg_rpool_locked() and the insertion to cg_list.

No. The ref_count and usage counters are updated at the level where the
lock is taken, in charge_cg_resource().
If I move the locking of rpool_list_lock inside alloc_cg_rpool(), the
unlocking would continue outside alloc_cg_rpool() once the pool is found
or allocated.
As you acknowledged in the comment below, it is confusing to lock/unlock
from different contexts. I think the current implementation achieves both:
(a) taking the lock from a single context, and
(b) keeping the find and alloc functionality in two separate individual functions.

>
>> This path will be hit anyway on first allocation typically. Once
>> application is warm up, it will be unlikely to enter here.
>> I should change if(!rpool) to if (unlikely(!rpool)).
> Theoretically the new allocated rpool can be released again by the time you
> get to the second call to find_cg_rpool_locked().
>
That's OK, because if that occurs, find_cg_rpool_locked() won't find the
entry and will try to allocate again.
Things work fine in that case.

> I thought that was about functions that only locked the lock, called the
> find function, and released the lock. What I'm suggesting is to have one
> function that does "lock + find + allocate if needed + unlock",

I had a similar function in the past which did
"lock + find + allocate if needed + inc_ref_cnt + unlock" (get_cg_rpool),
updated the usage counter atomically (because another thread/process might
update it too), and checked the atomic_dec count; on reaching zero,
"lock + del_entry + unlock + free".

Tejun asked to simplify this to

"lock + find + allocate if needed + inc_ref_cnt_without_atomic + unlock",
which I did in this patch v6.

> and another
> function that does (under caller's lock) "check ref count + check max count +
> release rpool".
This can be done. I have one dumb basic question about that.
Can we call kfree() with a spin_lock held? All these years I have
tended to avoid doing so.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-25 13:34         ` Parav Pandit
@ 2016-02-25 14:26           ` Parav Pandit
  2016-02-25 14:42             ` Haggai Eran
  2016-02-25 14:30           ` Haggai Eran
  1 sibling, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2016-02-25 14:26 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

> Can we call kfree() with a spin_lock held? All these years I have
> tended to avoid doing so.
Also it doesn't look correct to hold the lock while freeing memory
that is totally unrelated to the lock.
With that, I think the current code appears ok, with the exception that
it is duplicated in two places for code readability around the lock.
What say?

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-25 13:34         ` Parav Pandit
  2016-02-25 14:26           ` Parav Pandit
@ 2016-02-25 14:30           ` Haggai Eran
  1 sibling, 0 replies; 24+ messages in thread
From: Haggai Eran @ 2016-02-25 14:30 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On 25/02/2016 15:34, Parav Pandit wrote:
> On Thu, Feb 25, 2016 at 5:33 PM, Haggai Eran <haggaie@mellanox.com> wrote:
>>>>> +retry:
>>>>> +     spin_lock(&cg->rpool_list_lock);
>>>>> +     rpool = find_cg_rpool_locked(cg, device);
>>>>> +     if (!rpool) {
>>>>> +             spin_unlock(&cg->rpool_list_lock);
>>>>> +             ret = alloc_cg_rpool(cg, device);
>>>>> +             if (ret)
>>>>> +                     goto err;
>>>>> +             else
>>>>> +                     goto retry;
>>>> Instead of retrying after allocation of a new rpool, why not just return the
>>>> newly allocated rpool (or the existing one) from alloc_cg_rpool?
>>>
>>> It can be done, but the locking semantics just become difficult to
>>> review/maintain that way, since alloc_cg_rpool() would conditionally
>>> unlock and lock later on.
>> Maybe I'm missing something, but couldn't you simply lock rpool_list_lock
>> inside alloc_cg_rpool()? It already does that around its call to
>> find_cg_rpool_locked() and the insertion to cg_list.
> 
> No. The ref_count and usage counters are updated at the level where the
> lock is taken, in charge_cg_resource().
> If I move the rpool_list_lock locking inside alloc_cg_rpool(), the
> unlocking would still happen outside alloc_cg_rpool() once the rpool is
> found or allocated.
> As you acknowledged in the comment below, locking and unlocking from
> different contexts is confusing, so I think the current implementation
> achieves both goals:
> (a) take the lock from a single context
> (b) keep the find and alloc functionality in two separate functions

Okay, fair enough.

>> I thought that was about functions that only locked the lock, called the
>> find function, and released the lock. What I'm suggesting is to have one
>> function that does "lock + find + allocate if needed + unlock",
> 
> I had a similar function in the past which does
> "lock + find + allocate if needed + inc_ref_cnt + unlock" (get_cg_rpool),
> updates usage_counter atomically, because another thread/process might
> update it too, and checks atomic_dec_cnt - on reaching zero, does
> "lock + del_entry + unlock + free".
> 
> Tejun asked to simplify this to
> 
> "lock + find + allocate if needed + inc_ref_cnt_without_atomic + unlock",
> which I did in this patch v6.

Okay.

>> and another
>> function that does (under caller's lock) "check ref count + check max count +
>> release rpool".
> This can be done. I have one dumb basic question about that.
> Can we call kfree() with a spin_lock held? All these years I have
> tended to avoid doing so.
> 

I think so. This is an old link but I think it still applies: 
https://lkml.org/lkml/2004/11/21/130

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 1/3] rdmacg: Added rdma cgroup controller
  2016-02-25 14:26           ` Parav Pandit
@ 2016-02-25 14:42             ` Haggai Eran
  0 siblings, 0 replies; 24+ messages in thread
From: Haggai Eran @ 2016-02-25 14:42 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On 25/02/2016 16:26, Parav Pandit wrote:
>> Can we call kfree() with a spin_lock held? All these years I have
>> tended to avoid doing so.
> Also it doesn't look correct to hold the lock while freeing memory
> that is totally unrelated to the lock.
> With that, I think the current code appears ok, with the exception that
> it is duplicated in two places for code readability around the lock.
> What say?
Yes, the only thing that bothered me there was the duplication.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 3/3] rdmacg: Added documentation for rdmacg
  2016-02-24 15:21     ` Parav Pandit
@ 2016-02-28  8:55       ` Haggai Eran
  2016-02-28  9:02         ` Parav Pandit
  0 siblings, 1 reply; 24+ messages in thread
From: Haggai Eran @ 2016-02-28  8:55 UTC (permalink / raw)
  To: Parav Pandit
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

On 24/02/2016 17:21, Parav Pandit wrote:
> On Wed, Feb 24, 2016 at 7:56 PM, Haggai Eran <haggaie@mellanox.com> wrote:
>> On 20/02/2016 13:00, Parav Pandit wrote:
>>> Added documentation for v1 and v2 version describing high
>>> level design and usage examples on using rdma controller.
>>>
>>> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
>>
>> I think you might want to mention that resource limits are reflected
>> in the results returned from ib_uverbs_query_device/ibv_query_device
>> or printed from "ibv_devinfo -v".
>>
It's a valid point.
Since this documentation is for the rdma controller, I was wondering:
should I put it in this documentation, or should I add it to uverbs_cmd.c?

I was thinking it should be in the documentation because an application
developer might look there first, without reading uverbs_cmd.c.

Haggai

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv6 3/3] rdmacg: Added documentation for rdmacg
  2016-02-28  8:55       ` Haggai Eran
@ 2016-02-28  9:02         ` Parav Pandit
  0 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2016-02-28  9:02 UTC (permalink / raw)
  To: Haggai Eran
  Cc: cgroups, linux-doc, linux-kernel, linux-rdma, Tejun Heo, lizefan,
	Johannes Weiner, Doug Ledford, Liran Liss, Hefty, Sean,
	Jason Gunthorpe, Jonathan Corbet, james.l.morris, serge,
	Or Gerlitz, Matan Barak, raindel, akpm, linux-security-module

O.k. I will add a note there that the IB stack would honor the limits
given by the rdma cgroup.

On Sun, Feb 28, 2016 at 2:25 PM, Haggai Eran <haggaie@mellanox.com> wrote:
> On 24/02/2016 17:21, Parav Pandit wrote:
>> On Wed, Feb 24, 2016 at 7:56 PM, Haggai Eran <haggaie@mellanox.com> wrote:
>>> On 20/02/2016 13:00, Parav Pandit wrote:
>>>> Added documentation for v1 and v2 version describing high
>>>> level design and usage examples on using rdma controller.
>>>>
>>>> Signed-off-by: Parav Pandit <pandit.parav@gmail.com>
>>>
>>> I think you might want to mention that resource limits are reflected
>>> in the results returned from ib_uverbs_query_device/ibv_query_device
>>> or printed from "ibv_devinfo -v".
>>>
>> It's a valid point.
>> Since this documentation is for the rdma controller, I was wondering:
>> should I put it in this documentation, or should I add it to uverbs_cmd.c?
>
> I was thinking it should be in the documentation because an application
> developer might look there first, without reading uverbs_cmd.c.
>
> Haggai
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2016-02-28  9:02 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-20 11:00 [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
2016-02-20 11:00 ` [PATCHv6 1/3] rdmacg: Added rdma cgroup controller Parav Pandit
2016-02-21  7:43   ` Leon Romanovsky
2016-02-21 11:33     ` Parav Pandit
2016-02-21 13:45       ` Leon Romanovsky
2016-02-21 14:11         ` Parav Pandit
2016-02-21 15:09           ` Leon Romanovsky
2016-02-21 15:15             ` Parav Pandit
2016-02-24 13:13   ` Haggai Eran
2016-02-24 16:16     ` Parav Pandit
2016-02-25 12:03       ` Haggai Eran
2016-02-25 13:34         ` Parav Pandit
2016-02-25 14:26           ` Parav Pandit
2016-02-25 14:42             ` Haggai Eran
2016-02-25 14:30           ` Haggai Eran
2016-02-20 11:00 ` [PATCHv6 2/3] IB/core: added support to use " Parav Pandit
2016-02-24 13:43   ` Haggai Eran
2016-02-24 16:05     ` Parav Pandit
2016-02-20 11:00 ` [PATCHv6 3/3] rdmacg: Added documentation for rdmacg Parav Pandit
2016-02-24 14:26   ` Haggai Eran
2016-02-24 15:21     ` Parav Pandit
2016-02-28  8:55       ` Haggai Eran
2016-02-28  9:02         ` Parav Pandit
2016-02-22  4:59 ` [PATCHv6 0/3] rdmacg: IB/core: rdma controller support Parav Pandit
