* [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space
@ 2016-03-17 11:59 ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 11:59 UTC (permalink / raw)
  To: intel-gfx
  Cc: Chris Wilson, Andrew Morton, David Rientjes, Roman Pen,
	Mel Gorman, linux-mm, linux-kernel

vmaps are temporary kernel mappings that may be of long duration.
Reusing a vmap on an object is preferable for a driver as the cost of
setting up the vmap can otherwise dominate the operation on the object.
However, the vmap address space is rather limited on 32-bit systems and
so we add a notification for vmap pressure in order for the driver to
release any cached vmappings.

The interface is styled after the oom-notifier where the callees are
passed a pointer to an unsigned long counter for them to indicate if they
have freed any space.
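
As an illustrative sketch only (not part of this patch; the i915 hook in
patch 2/2 is the real user), a driver that caches vmaps could register a
callback along these lines, with the my_drv_* names being placeholders:

    #include <linux/notifier.h>
    #include <linux/vmalloc.h>

    static int my_drv_vmap_purge(struct notifier_block *nb,
                                 unsigned long event, void *ptr)
    {
            unsigned long *freed = ptr;

            /* Drop any cached vmappings and report how much was
             * released via the counter supplied by the core.
             */
            *freed += my_drv_drop_vmap_caches(); /* placeholder helper */
            return NOTIFY_DONE;
    }

    static struct notifier_block my_drv_vmap_nb = {
            .notifier_call = my_drv_vmap_purge,
    };

    /* register_vmap_purge_notifier(&my_drv_vmap_nb) at init,
     * unregister_vmap_purge_notifier(&my_drv_vmap_nb) at teardown.
     */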

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Roman Pen <r.peniaev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/vmalloc.h |  4 ++++
 mm/vmalloc.c            | 22 ++++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index d1f1d338af20..edd676b8e112 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 #define VMALLOC_TOTAL 0UL
 #endif
 
+struct notitifer_block;
+int register_vmap_purge_notifier(struct notifier_block *nb);
+int unregister_vmap_purge_notifier(struct notifier_block *nb);
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fb42a5bffe47..fd2ca94c2732 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -21,6 +21,7 @@
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/list.h>
+#include <linux/notifier.h>
 #include <linux/rbtree.h>
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -344,6 +345,8 @@ static void __insert_vmap_area(struct vmap_area *va)
 
 static void purge_vmap_area_lazy(void);
 
+static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
@@ -356,6 +359,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	struct vmap_area *va;
 	struct rb_node *n;
 	unsigned long addr;
+	unsigned long freed;
 	int purged = 0;
 	struct vmap_area *first;
 
@@ -468,6 +472,12 @@ overflow:
 		purged = 1;
 		goto retry;
 	}
+	freed = 0;
+	blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
+	if (freed > 0) {
+		purged = 0;
+		goto retry;
+	}
 	if (printk_ratelimit())
 		pr_warn("vmap allocation for size %lu failed: "
 			"use vmalloc=<size> to increase size.\n", size);
@@ -475,6 +485,18 @@ overflow:
 	return ERR_PTR(-EBUSY);
 }
 
+int register_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(register_vmap_purge_notifier);
+
+int unregister_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
+
 static void __free_vmap_area(struct vmap_area *va)
 {
 	BUG_ON(RB_EMPTY_NODE(&va->rb_node));
-- 
2.8.0.rc3

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH 2/2] drm/i915/shrinker: Hook up vmap allocation failure notifier
  2016-03-17 11:59 ` Chris Wilson
@ 2016-03-17 11:59   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 11:59 UTC (permalink / raw)
  To: intel-gfx
  Cc: Chris Wilson, Andrew Morton, David Rientjes, Roman Pen,
	Mel Gorman, linux-mm, linux-kernel

If the core runs out of vmap address space, it will call a notifier in
case any driver can reap some of its vmaps. As i915.ko is possibly
holding onto vmap address space that could be recovered, hook into the
notifier chain and try to reap objects holding onto vmaps.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Roman Pen <r.peniaev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/gpu/drm/i915/i915_drv.h          |  1 +
 drivers/gpu/drm/i915/i915_gem_shrinker.c | 39 ++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b9989d05f82a..4646b8504b84 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1257,6 +1257,7 @@ struct i915_gem_mm {
 	struct i915_hw_ppgtt *aliasing_ppgtt;
 
 	struct notifier_block oom_notifier;
+	struct notifier_block vmap_notifier;
 	struct shrinker shrinker;
 	bool shrinker_no_lock_stealing;
 
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index d3c473ffb90a..54943f983dc4 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -28,6 +28,7 @@
 #include <linux/swap.h>
 #include <linux/pci.h>
 #include <linux/dma-buf.h>
+#include <linux/vmalloc.h>
 #include <drm/drmP.h>
 #include <drm/i915_drm.h>
 
@@ -356,6 +357,40 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
 	return NOTIFY_DONE;
 }
 
+static int
+i915_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
+{
+	struct drm_i915_private *dev_priv =
+		container_of(nb, struct drm_i915_private, mm.vmap_notifier);
+	struct drm_device *dev = dev_priv->dev;
+	unsigned long timeout = msecs_to_jiffies(5000) + 1;
+	unsigned long freed_pages;
+	bool was_interruptible;
+	bool unlock;
+
+	while (!i915_gem_shrinker_lock(dev, &unlock) && --timeout) {
+		schedule_timeout_killable(1);
+		if (fatal_signal_pending(current))
+			return NOTIFY_DONE;
+	}
+	if (timeout == 0) {
+		pr_err("Unable to purge GPU vmaps due to lock contention.\n");
+		return NOTIFY_DONE;
+	}
+
+	was_interruptible = dev_priv->mm.interruptible;
+	dev_priv->mm.interruptible = false;
+
+	freed_pages = i915_gem_shrink_all(dev_priv);
+
+	dev_priv->mm.interruptible = was_interruptible;
+	if (unlock)
+		mutex_unlock(&dev->struct_mutex);
+
+	*(unsigned long *)ptr += freed_pages;
+	return NOTIFY_DONE;
+}
+
 /**
  * i915_gem_shrinker_init - Initialize i915 shrinker
  * @dev_priv: i915 device
@@ -371,6 +406,9 @@ void i915_gem_shrinker_init(struct drm_i915_private *dev_priv)
 
 	dev_priv->mm.oom_notifier.notifier_call = i915_gem_shrinker_oom;
 	WARN_ON(register_oom_notifier(&dev_priv->mm.oom_notifier));
+
+	dev_priv->mm.vmap_notifier.notifier_call = i915_gem_shrinker_vmap;
+	WARN_ON(register_vmap_purge_notifier(&dev_priv->mm.vmap_notifier));
 }
 
 /**
@@ -381,6 +419,7 @@ void i915_gem_shrinker_init(struct drm_i915_private *dev_priv)
  */
 void i915_gem_shrinker_cleanup(struct drm_i915_private *dev_priv)
 {
+	WARN_ON(unregister_vmap_purge_notifier(&dev_priv->mm.vmap_notifier));
 	WARN_ON(unregister_oom_notifier(&dev_priv->mm.oom_notifier));
 	unregister_shrinker(&dev_priv->mm.shrinker);
 }
-- 
2.8.0.rc3

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 11:59 ` Chris Wilson
@ 2016-03-17 12:37   ` Roman Peniaev
  -1 siblings, 0 replies; 27+ messages in thread
From: Roman Peniaev @ 2016-03-17 12:37 UTC (permalink / raw)
  To: Chris Wilson
  Cc: intel-gfx, Andrew Morton, David Rientjes, Mel Gorman, linux-mm,
	linux-kernel

Hi, Chris.

Comment is below.

On Thu, Mar 17, 2016 at 12:59 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> vmaps are temporary kernel mappings that may be of long duration.
> Reusing a vmap on an object is preferrable for a driver as the cost of
> setting up the vmap can otherwise dominate the operation on the object.
> However, the vmap address space is rather limited on 32bit systems and
> so we add a notification for vmap pressure in order for the driver to
> release any cached vmappings.
>
> The interface is styled after the oom-notifier where the callees are
> passed a pointer to an unsigned long counter for them to indicate if they
> have freed any space.
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Roman Pen <r.peniaev@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/vmalloc.h |  4 ++++
>  mm/vmalloc.c            | 22 ++++++++++++++++++++++
>  2 files changed, 26 insertions(+)
>
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index d1f1d338af20..edd676b8e112 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
>  #define VMALLOC_TOTAL 0UL
>  #endif
>
> +struct notitifer_block;
> +int register_vmap_purge_notifier(struct notifier_block *nb);
> +int unregister_vmap_purge_notifier(struct notifier_block *nb);
> +
>  #endif /* _LINUX_VMALLOC_H */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index fb42a5bffe47..fd2ca94c2732 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -21,6 +21,7 @@
>  #include <linux/debugobjects.h>
>  #include <linux/kallsyms.h>
>  #include <linux/list.h>
> +#include <linux/notifier.h>
>  #include <linux/rbtree.h>
>  #include <linux/radix-tree.h>
>  #include <linux/rcupdate.h>
> @@ -344,6 +345,8 @@ static void __insert_vmap_area(struct vmap_area *va)
>
>  static void purge_vmap_area_lazy(void);
>
> +static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
> +
>  /*
>   * Allocate a region of KVA of the specified size and alignment, within the
>   * vstart and vend.
> @@ -356,6 +359,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>         struct vmap_area *va;
>         struct rb_node *n;
>         unsigned long addr;
> +       unsigned long freed;
>         int purged = 0;
>         struct vmap_area *first;
>
> @@ -468,6 +472,12 @@ overflow:
>                 purged = 1;
>                 goto retry;
>         }
> +       freed = 0;
> +       blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);

It seems to me that alloc_vmap_area() was designed not to sleep,
at least on GFP_NOWAIT path (__GFP_DIRECT_RECLAIM is not set).

But blocking_notifier_call_chain() might sleep.
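
(For context, a simplified sketch of why: blocking notifier chains are
serialised with an rw_semaphore in kernel/notifier.c, roughly

    int blocking_notifier_call_chain(struct blocking_notifier_head *nh,
                                     unsigned long val, void *v)
    {
            int ret = NOTIFY_DONE;

            if (rcu_access_pointer(nh->head)) {
                    down_read(&nh->rwsem);  /* may sleep */
                    ret = notifier_call_chain(&nh->head, val, v, -1, NULL);
                    up_read(&nh->rwsem);
            }
            return ret;
    }

so it must not be called from a context that cannot block.)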

Roman.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 12:37   ` Roman Peniaev
@ 2016-03-17 12:57     ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 12:57 UTC (permalink / raw)
  To: Roman Peniaev
  Cc: intel-gfx, Andrew Morton, David Rientjes, Mel Gorman, linux-mm,
	linux-kernel

On Thu, Mar 17, 2016 at 01:37:06PM +0100, Roman Peniaev wrote:
> > +       freed = 0;
> > +       blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
> 
> It seems to me that alloc_vmap_area() was designed not to sleep,
> at least on GFP_NOWAIT path (__GFP_DIRECT_RECLAIM is not set).
> 
> But blocking_notifier_call_chain() might sleep.

Indeed, I had not anticipated anybody using GFP_ATOMIC or equivalently
restrictive gfp_t for vmap and yes there are such callers.

Would guarding the notifier with gfp & __GFP_DIRECT_RECLAIM and
!(gfp & __GFP_NORETRY) be sufficient? Is that enough for GFP_NOFS?
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 12:57     ` Chris Wilson
@ 2016-03-17 13:21       ` Roman Peniaev
  -1 siblings, 0 replies; 27+ messages in thread
From: Roman Peniaev @ 2016-03-17 13:21 UTC (permalink / raw)
  To: Chris Wilson, Roman Peniaev, intel-gfx, Andrew Morton,
	David Rientjes, Mel Gorman, linux-mm, linux-kernel

On Thu, Mar 17, 2016 at 1:57 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Thu, Mar 17, 2016 at 01:37:06PM +0100, Roman Peniaev wrote:
>> > +       freed = 0;
>> > +       blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
>>
>> It seems to me that alloc_vmap_area() was designed not to sleep,
>> at least on GFP_NOWAIT path (__GFP_DIRECT_RECLAIM is not set).
>>
>> But blocking_notifier_call_chain() might sleep.
>
> Indeed, I had not anticipated anybody using GFP_ATOMIC or equivalently
> restrictive gfp_t for vmap and yes there are such callers.
>
> Would guarding the notifier with gfp & __GFP_DIRECT_RECLAIM and
> !(gfp & __GFP_NORETRY) == be sufficient? Is that enough for GFP_NOFS?

I would use gfpflags_allow_blocking() for that purpose.
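
(For reference, gfpflags_allow_blocking() in include/linux/gfp.h boils
down to the __GFP_DIRECT_RECLAIM test mentioned above, roughly:

    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
    }

and this is the guard the v2 patch below ends up using.)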

Roman

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 13:21       ` Roman Peniaev
@ 2016-03-17 13:30         ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 13:30 UTC (permalink / raw)
  To: Roman Peniaev
  Cc: intel-gfx, Andrew Morton, David Rientjes, Mel Gorman, linux-mm,
	linux-kernel

On Thu, Mar 17, 2016 at 02:21:40PM +0100, Roman Peniaev wrote:
> On Thu, Mar 17, 2016 at 1:57 PM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> > On Thu, Mar 17, 2016 at 01:37:06PM +0100, Roman Peniaev wrote:
> >> > +       freed = 0;
> >> > +       blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
> >>
> >> It seems to me that alloc_vmap_area() was designed not to sleep,
> >> at least on GFP_NOWAIT path (__GFP_DIRECT_RECLAIM is not set).
> >>
> >> But blocking_notifier_call_chain() might sleep.
> >
> > Indeed, I had not anticipated anybody using GFP_ATOMIC or equivalently
> > restrictive gfp_t for vmap and yes there are such callers.
> >
> > Would guarding the notifier with gfp & __GFP_DIRECT_RECLAIM and
> > !(gfp & __GFP_NORETRY) == be sufficient? Is that enough for GFP_NOFS?
> 
> I would use gfpflags_allow_blocking() for that purpose.

Thanks,
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 11:59 ` Chris Wilson
  (?)
@ 2016-03-17 13:34   ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 13:34 UTC (permalink / raw)
  To: intel-gfx
  Cc: Chris Wilson, Andrew Morton, David Rientjes, Roman Peniaev,
	Mel Gorman, linux-mm, linux-kernel

vmaps are temporary kernel mappings that may be of long duration.
Reusing a vmap on an object is preferable for a driver as the cost of
setting up the vmap can otherwise dominate the operation on the object.
However, the vmap address space is rather limited on 32-bit systems and
so we add a notification for vmap pressure in order for the driver to
release any cached vmappings.

The interface is styled after the oom-notifier where the callees are
passed a pointer to an unsigned long counter for them to indicate if they
have freed any space.

v2: Guard the blocking notifier call with gfpflags_allow_blocking()

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Roman Peniaev <r.peniaev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 include/linux/vmalloc.h |  4 ++++
 mm/vmalloc.c            | 27 +++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index d1f1d338af20..edd676b8e112 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 #define VMALLOC_TOTAL 0UL
 #endif
 
+struct notitifer_block;
+int register_vmap_purge_notifier(struct notifier_block *nb);
+int unregister_vmap_purge_notifier(struct notifier_block *nb);
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fb42a5bffe47..12d27ac303ae 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -21,6 +21,7 @@
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/list.h>
+#include <linux/notifier.h>
 #include <linux/rbtree.h>
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -344,6 +345,8 @@ static void __insert_vmap_area(struct vmap_area *va)
 
 static void purge_vmap_area_lazy(void);
 
+static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
@@ -363,6 +366,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(offset_in_page(size));
 	BUG_ON(!is_power_of_2(align));
 
+	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+
 	va = kmalloc_node(sizeof(struct vmap_area),
 			gfp_mask & GFP_RECLAIM_MASK, node);
 	if (unlikely(!va))
@@ -468,6 +473,16 @@ overflow:
 		purged = 1;
 		goto retry;
 	}
+
+	if (gfpflags_allow_blocking(gfp_mask)) {
+		unsigned long freed = 0;
+		blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
+		if (freed > 0) {
+			purged = 0;
+			goto retry;
+		}
+	}
+
 	if (printk_ratelimit())
 		pr_warn("vmap allocation for size %lu failed: "
 			"use vmalloc=<size> to increase size.\n", size);
@@ -475,6 +490,18 @@ overflow:
 	return ERR_PTR(-EBUSY);
 }
 
+int register_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(register_vmap_purge_notifier);
+
+int unregister_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
+
 static void __free_vmap_area(struct vmap_area *va)
 {
 	BUG_ON(RB_EMPTY_NODE(&va->rb_node));
-- 
2.8.0.rc3

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 13:34   ` Chris Wilson
@ 2016-03-17 13:41     ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-17 13:41 UTC (permalink / raw)
  To: intel-gfx
  Cc: Andrew Morton, David Rientjes, Roman Peniaev, Mel Gorman,
	linux-mm, linux-kernel

On Thu, Mar 17, 2016 at 01:34:59PM +0000, Chris Wilson wrote:
> vmaps are temporary kernel mappings that may be of long duration.
> Reusing a vmap on an object is preferrable for a driver as the cost of
> setting up the vmap can otherwise dominate the operation on the object.
> However, the vmap address space is rather limited on 32bit systems and
> so we add a notification for vmap pressure in order for the driver to
> release any cached vmappings.
> 
> The interface is styled after the oom-notifier where the callees are
> passed a pointer to an unsigned long counter for them to indicate if they
> have freed any space.
> 
> v2: Guard the blocking notifier call with gfpflags_allow_blocking()
> 
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Roman Peniaev <r.peniaev@gmail.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/vmalloc.h |  4 ++++
>  mm/vmalloc.c            | 27 +++++++++++++++++++++++++++
>  2 files changed, 31 insertions(+)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index d1f1d338af20..edd676b8e112 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
>  #define VMALLOC_TOTAL 0UL
>  #endif
>  
> +struct notitifer_block;
Omg. /o\
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 27+ messages in thread

* ✗ Fi.CI.BAT: failure for series starting with [v2] mm/vmap: Add a notifier for when we run out of vmap address space (rev2)
  2016-03-17 11:59 ` Chris Wilson
                   ` (4 preceding siblings ...)
  (?)
@ 2016-03-18  7:03 ` Patchwork
  -1 siblings, 0 replies; 27+ messages in thread
From: Patchwork @ 2016-03-18  7:03 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v2] mm/vmap: Add a notifier for when we run out of vmap address space (rev2)
URL   : https://patchwork.freedesktop.org/series/4569/
State : failure

== Summary ==

Series 4569v2 Series without cover letter
http://patchwork.freedesktop.org/api/1.0/series/4569/revisions/2/mbox/

Test gem_ringfill:
        Subgroup basic-default-s3:
                dmesg-warn -> PASS       (skl-nuci5)
Test gem_storedw_loop:
        Subgroup basic-default:
                dmesg-warn -> PASS       (skl-nuci5)
Test gem_sync:
        Subgroup basic-vebox:
                dmesg-warn -> PASS       (skl-nuci5)
Test kms_flip:
        Subgroup basic-plain-flip:
                pass       -> DMESG-WARN (hsw-gt2)
Test kms_pipe_crc_basic:
        Subgroup read-crc-pipe-a-frame-sequence:
                dmesg-warn -> PASS       (hsw-gt2)
        Subgroup suspend-read-crc-pipe-c:
                dmesg-warn -> PASS       (bsw-nuc-2)
Test pm_rpm:
        Subgroup basic-rte:
                pass       -> DMESG-WARN (bsw-nuc-2)

bdw-ultra        total:194  pass:172  dwarn:1   dfail:0   fail:0   skip:21 
bsw-nuc-2        total:194  pass:155  dwarn:2   dfail:0   fail:0   skip:37 
hsw-brixbox      total:194  pass:171  dwarn:1   dfail:0   fail:0   skip:22 
hsw-gt2          total:194  pass:174  dwarn:3   dfail:0   fail:0   skip:17 
ivb-t430s        total:194  pass:168  dwarn:1   dfail:0   fail:0   skip:25 
skl-i5k-2        total:194  pass:170  dwarn:1   dfail:0   fail:0   skip:23 
skl-i7k-2        total:194  pass:170  dwarn:1   dfail:0   fail:0   skip:23 
skl-nuci5        total:194  pass:182  dwarn:1   dfail:0   fail:0   skip:11 

Results at /archive/results/CI_IGT_test/Patchwork_1634/

10e913a48ca36790da9b58bed8729598ea79ebdb drm-intel-nightly: 2016y-03m-17d-13h-22m-41s UTC integration manifest
7b584cab867d352cfb0810c6bc9ea234b455783e drm/i915/shrinker: Hook up vmap allocation failure notifier
9cace0d9d9756314b9a049b5314642e59f91fd5d mm/vmap: Add a notifier for when we run out of vmap address space

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-17 13:41     ` Chris Wilson
@ 2016-03-28 23:15       ` Andrew Morton
  -1 siblings, 0 replies; 27+ messages in thread
From: Andrew Morton @ 2016-03-28 23:15 UTC (permalink / raw)
  To: Chris Wilson
  Cc: intel-gfx, David Rientjes, Roman Peniaev, Mel Gorman, linux-mm,
	linux-kernel

On Thu, 17 Mar 2016 13:41:56 +0000 Chris Wilson <chris@chris-wilson.co.uk> wrote:

> On Thu, Mar 17, 2016 at 01:34:59PM +0000, Chris Wilson wrote:
> > vmaps are temporary kernel mappings that may be of long duration.
> > Reusing a vmap on an object is preferrable for a driver as the cost of
> > setting up the vmap can otherwise dominate the operation on the object.
> > However, the vmap address space is rather limited on 32bit systems and
> > so we add a notification for vmap pressure in order for the driver to
> > release any cached vmappings.
> > 
> > The interface is styled after the oom-notifier where the callees are
> > passed a pointer to an unsigned long counter for them to indicate if they
> > have freed any space.
> > 
> > v2: Guard the blocking notifier call with gfpflags_allow_blocking()
> > 
> > Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Roman Peniaev <r.peniaev@gmail.com>
> > Cc: Mel Gorman <mgorman@techsingularity.net>
> > Cc: linux-mm@kvack.org
> > Cc: linux-kernel@vger.kernel.org
> > ---
> >  include/linux/vmalloc.h |  4 ++++
> >  mm/vmalloc.c            | 27 +++++++++++++++++++++++++++
> >  2 files changed, 31 insertions(+)
> > 
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index d1f1d338af20..edd676b8e112 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -187,4 +187,8 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
> >  #define VMALLOC_TOTAL 0UL
> >  #endif
> >  
> > +struct notitifer_block;
> Omg. /o\

Hah.

Please move the forward declaration to top-of-file.  This prevents
people from later adding the same thing at line 100 - this has happened
before.

Apart from that, all looks OK to me - please merge it via the DRM tree
if that is more convenient.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v3] mm/vmap: Add a notifier for when we run out of vmap address space
  2016-03-28 23:15       ` Andrew Morton
  (?)
@ 2016-03-29  8:16         ` Chris Wilson
  -1 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-29  8:16 UTC (permalink / raw)
  To: intel-gfx
  Cc: Chris Wilson, Andrew Morton, David Rientjes, Roman Peniaev,
	Mel Gorman, linux-mm, linux-kernel

vmaps are temporary kernel mappings that may be of long duration.
Reusing a vmap on an object is preferable for a driver as the cost of
setting up the vmap can otherwise dominate the operation on the object.
However, the vmap address space is rather limited on 32bit systems and
so we add a notification for vmap pressure in order for the driver to
release any cached vmappings.

The interface is styled after the oom-notifier where the callees are
passed a pointer to an unsigned long counter for them to indicate if they
have freed any space.

v2: Guard the blocking notifier call with gfpflags_allow_blocking()
v3: Correct typo in forward declaration and move to head of file

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Roman Peniaev <r.peniaev@gmail.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Acked-by: Andrew Morton <akpm@linux-foundation.org> # for inclusion via DRM
---
Thanks Andrew, may I trouble someone for a Reviewed-by?
---
 include/linux/vmalloc.h |  4 ++++
 mm/vmalloc.c            | 27 +++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index d1f1d338af20..8b51df3ab334 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -8,6 +8,7 @@
 #include <linux/rbtree.h>
 
 struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
+struct notifier_block;		/* in notifier.h */
 
 /* bits in flags of vmalloc's vm_struct below */
 #define VM_IOREMAP		0x00000001	/* ioremap() and friends */
@@ -187,4 +188,7 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 #define VMALLOC_TOTAL 0UL
 #endif
 
+int register_vmap_purge_notifier(struct notifier_block *nb);
+int unregister_vmap_purge_notifier(struct notifier_block *nb);
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index fb42a5bffe47..12d27ac303ae 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -21,6 +21,7 @@
 #include <linux/debugobjects.h>
 #include <linux/kallsyms.h>
 #include <linux/list.h>
+#include <linux/notifier.h>
 #include <linux/rbtree.h>
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
@@ -344,6 +345,8 @@ static void __insert_vmap_area(struct vmap_area *va)
 
 static void purge_vmap_area_lazy(void);
 
+static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
+
 /*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
@@ -363,6 +366,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(offset_in_page(size));
 	BUG_ON(!is_power_of_2(align));
 
+	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+
 	va = kmalloc_node(sizeof(struct vmap_area),
 			gfp_mask & GFP_RECLAIM_MASK, node);
 	if (unlikely(!va))
@@ -468,6 +473,16 @@ overflow:
 		purged = 1;
 		goto retry;
 	}
+
+	if (gfpflags_allow_blocking(gfp_mask)) {
+		unsigned long freed = 0;
+		blocking_notifier_call_chain(&vmap_notify_list, 0, &freed);
+		if (freed > 0) {
+			purged = 0;
+			goto retry;
+		}
+	}
+
 	if (printk_ratelimit())
 		pr_warn("vmap allocation for size %lu failed: "
 			"use vmalloc=<size> to increase size.\n", size);
@@ -475,6 +490,18 @@ overflow:
 	return ERR_PTR(-EBUSY);
 }
 
+int register_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_register(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(register_vmap_purge_notifier);
+
+int unregister_vmap_purge_notifier(struct notifier_block *nb)
+{
+	return blocking_notifier_chain_unregister(&vmap_notify_list, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_vmap_purge_notifier);
+
 static void __free_vmap_area(struct vmap_area *va)
 {
 	BUG_ON(RB_EMPTY_NODE(&va->rb_node));
-- 
2.8.0.rc3

^ permalink raw reply related	[flat|nested] 27+ messages in thread
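
As an aside, a minimal driver-side consumer of this interface might look
like the sketch below. The foo_* names are hypothetical; the real user is
the i915 shrinker hook in patch 2/2 of this series.

#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/vmalloc.h>

/* Hypothetical helper: drop any vmappings the driver caches and return
 * the number of pages released (stubbed out in this sketch). */
static unsigned long foo_drop_cached_vmaps(void)
{
        return 0;
}

/* Invoked from alloc_vmap_area() when the vmap address space runs out. */
static int foo_vmap_notify(struct notifier_block *nb,
                           unsigned long event, void *ptr)
{
        unsigned long *freed = ptr;

        /* Report progress so the failed allocation is retried. */
        *freed += foo_drop_cached_vmaps();
        return NOTIFY_DONE;
}

static struct notifier_block foo_vmap_nb = {
        .notifier_call = foo_vmap_notify,
};

static int __init foo_init(void)
{
        return register_vmap_purge_notifier(&foo_vmap_nb);
}
module_init(foo_init);

static void __exit foo_exit(void)
{
        unregister_vmap_purge_notifier(&foo_vmap_nb);
}
module_exit(foo_exit);

MODULE_LICENSE("GPL");

Since the chain is a blocking notifier and is only called when
gfpflags_allow_blocking(gfp_mask) is true, the callback is allowed to
sleep; the allocation is retried only if the reported count is non-zero.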

* ✗ Fi.CI.BAT: failure for series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3)
  2016-03-17 11:59 ` Chris Wilson
                   ` (5 preceding siblings ...)
  (?)
@ 2016-03-29  8:34 ` Patchwork
  2016-03-29 12:51   ` Marius Vlad
  -1 siblings, 1 reply; 27+ messages in thread
From: Patchwork @ 2016-03-29  8:34 UTC (permalink / raw)
  To: Chris Wilson; +Cc: intel-gfx

== Series Details ==

Series: series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3)
URL   : https://patchwork.freedesktop.org/series/4569/
State : failure

== Summary ==

Series 4569v3 Series without cover letter
http://patchwork.freedesktop.org/api/1.0/series/4569/revisions/3/mbox/

Test kms_flip:
        Subgroup basic-flip-vs-wf_vblank:
                pass       -> FAIL       (snb-x220t)
Test pm_rpm:
        Subgroup basic-pci-d3-state:
                pass       -> DMESG-WARN (bsw-nuc-2)
        Subgroup basic-rte:
                dmesg-warn -> PASS       (byt-nuc) UNSTABLE

bdw-nuci7        total:192  pass:179  dwarn:0   dfail:0   fail:1   skip:12 
bdw-ultra        total:192  pass:170  dwarn:0   dfail:0   fail:1   skip:21 
bsw-nuc-2        total:192  pass:154  dwarn:1   dfail:0   fail:0   skip:37 
byt-nuc          total:192  pass:157  dwarn:0   dfail:0   fail:0   skip:35 
hsw-brixbox      total:192  pass:170  dwarn:0   dfail:0   fail:0   skip:22 
hsw-gt2          total:192  pass:175  dwarn:0   dfail:0   fail:0   skip:17 
ivb-t430s        total:192  pass:167  dwarn:0   dfail:0   fail:0   skip:25 
skl-i7k-2        total:192  pass:169  dwarn:0   dfail:0   fail:0   skip:23 
skl-nuci5        total:192  pass:181  dwarn:0   dfail:0   fail:0   skip:11 
snb-dellxps      total:192  pass:158  dwarn:0   dfail:0   fail:0   skip:34 
snb-x220t        total:192  pass:157  dwarn:0   dfail:0   fail:2   skip:33 

Results at /archive/results/CI_IGT_test/Patchwork_1725/

f5d413cccefa1f93d64c34f357151d42add63a84 drm-intel-nightly: 2016y-03m-24d-14h-34m-29s UTC integration manifest
d9f9cda1b4e8a64ad1ac9bef0392e2c701b0d9f7 drm/i915/shrinker: Hook up vmap allocation failure notifier
93698e6141bccbabfc898a7d2cad5577f5af893b mm/vmap: Add a notifier for when we run out of vmap address space

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ✗ Fi.CI.BAT: failure for series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3)
  2016-03-29  8:34 ` ✗ Fi.CI.BAT: failure for series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3) Patchwork
@ 2016-03-29 12:51   ` Marius Vlad
  2016-03-29 12:56     ` Chris Wilson
  0 siblings, 1 reply; 27+ messages in thread
From: Marius Vlad @ 2016-03-29 12:51 UTC (permalink / raw)
  To: intel-gfx


We're not catching it, but this gives a deadlock trace when running
kms_pipe_crc_basic@suspend-read-crc-pipe-A (happens on BSW):

[  132.555497] kms_pipe_crc_basic: starting subtest suspend-read-crc-pipe-A
[  132.734041] PM: Syncing filesystems ... done.
[  132.751624] Freezing user space processes ... (elapsed 0.003 seconds) done.
[  132.755240] Freezing remaining freezable tasks ... (elapsed 0.002 seconds) done.
[  132.758372] Suspending console(s) (use no_console_suspend to debug)
[  132.768157] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  132.780482] sd 0:0:0:0: [sda] Stopping disk
[  132.889902] PM: suspend of devices complete after 129.133 msecs
[  132.924359] PM: late suspend of devices complete after 34.440 msecs
[  132.932433] r8169 0000:03:00.0: System wakeup enabled by ACPI
[  132.938105] xhci_hcd 0000:00:14.0: System wakeup enabled by ACPI
[  132.948029] PM: noirq suspend of devices complete after 23.660 msecs
[  132.948073] ACPI: Preparing to enter system sleep state S3
[  132.960567] PM: Saving platform NVS memory
[  132.960803] Disabling non-boot CPUs ...
[  132.999863] Broke affinity for irq 116
[  133.002229] smpboot: CPU 1 is now offline

[  133.022915] ======================================================
[  133.022916] [ INFO: possible circular locking dependency detected ]
[  133.022921] 4.5.0-gfxbench-Patchwork_315+ #1 Tainted: G     U         
[  133.022922] -------------------------------------------------------
[  133.022925] rtcwake/5998 is trying to acquire lock:
[  133.022942]  (s_active#6){++++.+}, at: [<ffffffff81252b10>] kernfs_remove_by_name_ns+0x40/0x90
[  133.022943] 
but task is already holding lock:
[  133.022953]  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8107908d>] cpu_hotplug_begin+0x6d/0xc0
[  133.022954] 
which lock already depends on the new lock.

[  133.022955] 
the existing dependency chain (in reverse order) is:
[  133.022962] 
-> #3 (cpu_hotplug.lock){+.+.+.}:
[  133.022968]        [<ffffffff810ce2a1>] lock_acquire+0xb1/0x200
[  133.022974]        [<ffffffff817c3a72>] mutex_lock_nested+0x62/0x3c0
[  133.022978]        [<ffffffff81078d41>] get_online_cpus+0x61/0x80
[  133.022983]        [<ffffffff8111a2cb>] stop_machine+0x1b/0xe0
[  133.023046]        [<ffffffffa012a3ad>] gen8_ggtt_insert_entries__BKL+0x2d/0x30 [i915]
[  133.023096]        [<ffffffffa012dc06>] ggtt_bind_vma+0x46/0x70 [i915]
[  133.023146]        [<ffffffffa012f55d>] i915_vma_bind+0xed/0x260 [i915]
[  133.023197]        [<ffffffffa0137153>] i915_gem_object_do_pin+0x873/0xb20 [i915]
[  133.023248]        [<ffffffffa0137428>] i915_gem_object_pin+0x28/0x30 [i915]
[  133.023301]        [<ffffffffa014b5f5>] intel_init_pipe_control+0xb5/0x200 [i915]
[  133.023354]        [<ffffffffa01483be>] intel_logical_rings_init+0x14e/0x1080 [i915]
[  133.023405]        [<ffffffffa0137ee3>] i915_gem_init+0xf3/0x130 [i915]
[  133.023462]        [<ffffffffa01bcfeb>] i915_driver_load+0xbeb/0x1950 [i915]
[  133.023470]        [<ffffffff8151b7f4>] drm_dev_register+0xa4/0xb0
[  133.023474]        [<ffffffff8151d9de>] drm_get_pci_dev+0xce/0x1d0
[  133.023519]        [<ffffffffa00f72ff>] i915_pci_probe+0x2f/0x50 [i915]
[  133.023525]        [<ffffffff814490a5>] pci_device_probe+0x85/0xf0
[  133.023530]        [<ffffffff8153fd67>] driver_probe_device+0x227/0x440
[  133.023534]        [<ffffffff81540003>] __driver_attach+0x83/0x90
[  133.023538]        [<ffffffff8153d8b1>] bus_for_each_dev+0x61/0xa0
[  133.023542]        [<ffffffff8153f4c9>] driver_attach+0x19/0x20
[  133.023546]        [<ffffffff8153efb9>] bus_add_driver+0x1e9/0x280
[  133.023550]        [<ffffffff81540bab>] driver_register+0x5b/0xd0
[  133.023555]        [<ffffffff81447ffb>] __pci_register_driver+0x5b/0x60
[  133.023559]        [<ffffffff8151dbb6>] drm_pci_init+0xd6/0x100
[  133.023563]        [<ffffffffa0230092>] 0xffffffffa0230092
[  133.023571]        [<ffffffff810003d6>] do_one_initcall+0xa6/0x1d0
[  133.023577]        [<ffffffff8115c832>] do_init_module+0x5a/0x1c8
[  133.023585]        [<ffffffff81108d9d>] load_module+0x1efd/0x25a0
[  133.023591]        [<ffffffff81109658>] SyS_finit_module+0x98/0xc0
[  133.023598]        [<ffffffff817c82db>] entry_SYSCALL_64_fastpath+0x16/0x6f
[  133.023603] 
-> #2 (&dev->struct_mutex){+.+.+.}:
[  133.023608]        [<ffffffff810ce2a1>] lock_acquire+0xb1/0x200
[  133.023616]        [<ffffffff81516d0f>] drm_gem_mmap+0x19f/0x2a0
[  133.023622]        [<ffffffff8119adb9>] mmap_region+0x389/0x5f0
[  133.023626]        [<ffffffff8119b38a>] do_mmap+0x36a/0x420
[  133.023632]        [<ffffffff8117f03d>] vm_mmap_pgoff+0x6d/0xa0
[  133.023638]        [<ffffffff811992c3>] SyS_mmap_pgoff+0x183/0x220
[  133.023645]        [<ffffffff8100a246>] SyS_mmap+0x16/0x20
[  133.023649]        [<ffffffff817c82db>] entry_SYSCALL_64_fastpath+0x16/0x6f
[  133.023656] 
-> #1 (&mm->mmap_sem){++++++}:
[  133.023661]        [<ffffffff810ce2a1>] lock_acquire+0xb1/0x200
[  133.023667]        [<ffffffff8118fda5>] __might_fault+0x75/0xa0
[  133.023673]        [<ffffffff8125350a>] kernfs_fop_write+0x8a/0x180
[  133.023679]        [<ffffffff811d5d03>] __vfs_write+0x23/0xe0
[  133.023684]        [<ffffffff811d6ac2>] vfs_write+0xa2/0x190
[  133.023687]        [<ffffffff811d7914>] SyS_write+0x44/0xb0
[  133.023691]        [<ffffffff817c82db>] entry_SYSCALL_64_fastpath+0x16/0x6f
[  133.023697] 
-> #0 (s_active#6){++++.+}:
[  133.023701]        [<ffffffff810cd961>] __lock_acquire+0x1e81/0x1ef0
[  133.023705]        [<ffffffff810ce2a1>] lock_acquire+0xb1/0x200
[  133.023709]        [<ffffffff81251d81>] __kernfs_remove+0x241/0x320
[  133.023713]        [<ffffffff81252b10>] kernfs_remove_by_name_ns+0x40/0x90
[  133.023717]        [<ffffffff812544a0>] sysfs_remove_file_ns+0x10/0x20
[  133.023722]        [<ffffffff8153b504>] device_del+0x124/0x240
[  133.023726]        [<ffffffff8153b639>] device_unregister+0x19/0x60
[  133.023731]        [<ffffffff81545d42>] cpu_cache_sysfs_exit+0x52/0xb0
[  133.023735]        [<ffffffff81546318>] cacheinfo_cpu_callback+0x38/0x70
[  133.023739]        [<ffffffff8109bc59>] notifier_call_chain+0x39/0xa0
[  133.023743]        [<ffffffff8109bcc9>] __raw_notifier_call_chain+0x9/0x10
[  133.023748]        [<ffffffff8107900e>] cpu_notify_nofail+0x1e/0x30
[  133.023751]        [<ffffffff81079320>] _cpu_down+0x200/0x330
[  133.023756]        [<ffffffff8107987a>] disable_nonboot_cpus+0xaa/0x3b0
[  133.023761]        [<ffffffff810d47f8>] suspend_devices_and_enter+0x478/0xc30
[  133.023765]        [<ffffffff810d54c5>] pm_suspend+0x515/0x9e0
[  133.023771]        [<ffffffff810d3617>] state_store+0x77/0xe0
[  133.023777]        [<ffffffff81403eaf>] kobj_attr_store+0xf/0x20
[  133.023781]        [<ffffffff81254200>] sysfs_kf_write+0x40/0x50
[  133.023785]        [<ffffffff812535bc>] kernfs_fop_write+0x13c/0x180
[  133.023790]        [<ffffffff811d5d03>] __vfs_write+0x23/0xe0
[  133.023794]        [<ffffffff811d6ac2>] vfs_write+0xa2/0x190
[  133.023798]        [<ffffffff811d7914>] SyS_write+0x44/0xb0
[  133.023802]        [<ffffffff817c82db>] entry_SYSCALL_64_fastpath+0x16/0x6f
[  133.023805] 
other info that might help us debug this:

[  133.023813] Chain exists of:
  s_active#6 --> &dev->struct_mutex --> cpu_hotplug.lock

[  133.023814]  Possible unsafe locking scenario:

[  133.023815]        CPU0                    CPU1
[  133.023816]        ----                    ----
[  133.023819]   lock(cpu_hotplug.lock);
[  133.023823]                                lock(&dev->struct_mutex);
[  133.023826]                                lock(cpu_hotplug.lock);
[  133.023830]   lock(s_active#6);
[  133.023831] 
 *** DEADLOCK ***

[  133.023835] 8 locks held by rtcwake/5998:
[  133.023847]  #0:  (sb_writers#6){.+.+.+}, at: [<ffffffff811da0f2>] __sb_start_write+0xb2/0xf0
[  133.023855]  #1:  (&of->mutex){+.+.+.}, at: [<ffffffff812534e1>] kernfs_fop_write+0x61/0x180
[  133.023863]  #2:  (s_active#105){.+.+.+}, at: [<ffffffff812534e9>] kernfs_fop_write+0x69/0x180
[  133.023871]  #3:  (pm_mutex){+.+...}, at: [<ffffffff810d501f>] pm_suspend+0x6f/0x9e0
[  133.023882]  #4:  (acpi_scan_lock){+.+.+.}, at: [<ffffffff8147c1ef>] acpi_scan_lock_acquire+0x12/0x14
[  133.023891]  #5:  (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff810797f4>] disable_nonboot_cpus+0x24/0x3b0
[  133.023899]  #6:  (cpu_hotplug.dep_map){++++++}, at: [<ffffffff81079020>] cpu_hotplug_begin+0x0/0xc0
[  133.023906]  #7:  (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff8107908d>] cpu_hotplug_begin+0x6d/0xc0
[  133.023907] 
stack backtrace:
[  133.023913] CPU: 0 PID: 5998 Comm: rtcwake Tainted: G     U          4.5.0-gfxbench-Patchwork_315+ #1
[  133.023915] Hardware name:                  /NUC5CPYB, BIOS PYBSWCEL.86A.0043.2015.0904.1904 09/04/2015
[  133.023922]  0000000000000000 ffff880274967870 ffffffff81401d15 ffffffff825c52d0
[  133.023927]  ffffffff82586bd0 ffff8802749678b0 ffffffff810ca1b0 ffff880274967900
[  133.023932]  ffff880273845328 ffff880273844b00 ffff880273845440 0000000000000008
[  133.023935] Call Trace:
[  133.023941]  [<ffffffff81401d15>] dump_stack+0x67/0x92
[  133.023946]  [<ffffffff810ca1b0>] print_circular_bug+0x1e0/0x2e0
[  133.023949]  [<ffffffff810cd961>] __lock_acquire+0x1e81/0x1ef0
[  133.023953]  [<ffffffff810ce2a1>] lock_acquire+0xb1/0x200
[  133.023959]  [<ffffffff81252b10>] ? kernfs_remove_by_name_ns+0x40/0x90
[  133.023962]  [<ffffffff81251d81>] __kernfs_remove+0x241/0x320
[  133.023965]  [<ffffffff81252b10>] ? kernfs_remove_by_name_ns+0x40/0x90
[  133.023969]  [<ffffffff812518a7>] ? kernfs_find_ns+0x97/0x140
[  133.023972]  [<ffffffff81252b10>] kernfs_remove_by_name_ns+0x40/0x90
[  133.023975]  [<ffffffff812544a0>] sysfs_remove_file_ns+0x10/0x20
[  133.023979]  [<ffffffff8153b504>] device_del+0x124/0x240
[  133.023982]  [<ffffffff810cb82d>] ? trace_hardirqs_on+0xd/0x10
[  133.023986]  [<ffffffff8153b639>] device_unregister+0x19/0x60
[  133.023989]  [<ffffffff81545d42>] cpu_cache_sysfs_exit+0x52/0xb0
[  133.023992]  [<ffffffff81546318>] cacheinfo_cpu_callback+0x38/0x70
[  133.023995]  [<ffffffff8109bc59>] notifier_call_chain+0x39/0xa0
[  133.023999]  [<ffffffff8109bcc9>] __raw_notifier_call_chain+0x9/0x10
[  133.024002]  [<ffffffff8107900e>] cpu_notify_nofail+0x1e/0x30
[  133.024005]  [<ffffffff81079320>] _cpu_down+0x200/0x330
[  133.024011]  [<ffffffff810e5d70>] ? __call_rcu.constprop.58+0x2f0/0x2f0
[  133.024014]  [<ffffffff810e5dd0>] ? call_rcu_bh+0x20/0x20
[  133.024019]  [<ffffffff810e1830>] ? trace_raw_output_rcu_utilization+0x60/0x60
[  133.024023]  [<ffffffff810e1830>] ? trace_raw_output_rcu_utilization+0x60/0x60
[  133.024027]  [<ffffffff8107987a>] disable_nonboot_cpus+0xaa/0x3b0
[  133.024031]  [<ffffffff810d47f8>] suspend_devices_and_enter+0x478/0xc30
[  133.024035]  [<ffffffff810d54c5>] pm_suspend+0x515/0x9e0
[  133.024038]  [<ffffffff810d3617>] state_store+0x77/0xe0
[  133.024043]  [<ffffffff81403eaf>] kobj_attr_store+0xf/0x20
[  133.024046]  [<ffffffff81254200>] sysfs_kf_write+0x40/0x50
[  133.024049]  [<ffffffff812535bc>] kernfs_fop_write+0x13c/0x180
[  133.024054]  [<ffffffff811d5d03>] __vfs_write+0x23/0xe0
[  133.024059]  [<ffffffff810c7882>] ? percpu_down_read+0x52/0x90
[  133.024062]  [<ffffffff811da0f2>] ? __sb_start_write+0xb2/0xf0
[  133.024065]  [<ffffffff811da0f2>] ? __sb_start_write+0xb2/0xf0
[  133.024069]  [<ffffffff811d6ac2>] vfs_write+0xa2/0x190
[  133.024073]  [<ffffffff811f549a>] ? __fget_light+0x6a/0x90
[  133.024076]  [<ffffffff811d7914>] SyS_write+0x44/0xb0
[  133.024080]  [<ffffffff817c82db>] entry_SYSCALL_64_fastpath+0x16/0x6f
[  133.027055] ACPI: Low-level resume complete
[  133.027294] PM: Restoring platform NVS memory


On Tue, Mar 29, 2016 at 08:34:36AM +0000, Patchwork wrote:
> == Series Details ==
> 
> Series: series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3)
> URL   : https://patchwork.freedesktop.org/series/4569/
> State : failure
> 
> == Summary ==
> 
> Series 4569v3 Series without cover letter
> http://patchwork.freedesktop.org/api/1.0/series/4569/revisions/3/mbox/
> 
> Test kms_flip:
>         Subgroup basic-flip-vs-wf_vblank:
>                 pass       -> FAIL       (snb-x220t)
> Test pm_rpm:
>         Subgroup basic-pci-d3-state:
>                 pass       -> DMESG-WARN (bsw-nuc-2)
>         Subgroup basic-rte:
>                 dmesg-warn -> PASS       (byt-nuc) UNSTABLE
> 
> bdw-nuci7        total:192  pass:179  dwarn:0   dfail:0   fail:1   skip:12 
> bdw-ultra        total:192  pass:170  dwarn:0   dfail:0   fail:1   skip:21 
> bsw-nuc-2        total:192  pass:154  dwarn:1   dfail:0   fail:0   skip:37 
> byt-nuc          total:192  pass:157  dwarn:0   dfail:0   fail:0   skip:35 
> hsw-brixbox      total:192  pass:170  dwarn:0   dfail:0   fail:0   skip:22 
> hsw-gt2          total:192  pass:175  dwarn:0   dfail:0   fail:0   skip:17 
> ivb-t430s        total:192  pass:167  dwarn:0   dfail:0   fail:0   skip:25 
> skl-i7k-2        total:192  pass:169  dwarn:0   dfail:0   fail:0   skip:23 
> skl-nuci5        total:192  pass:181  dwarn:0   dfail:0   fail:0   skip:11 
> snb-dellxps      total:192  pass:158  dwarn:0   dfail:0   fail:0   skip:34 
> snb-x220t        total:192  pass:157  dwarn:0   dfail:0   fail:2   skip:33 
> 
> Results at /archive/results/CI_IGT_test/Patchwork_1725/
> 
> f5d413cccefa1f93d64c34f357151d42add63a84 drm-intel-nightly: 2016y-03m-24d-14h-34m-29s UTC integration manifest
> d9f9cda1b4e8a64ad1ac9bef0392e2c701b0d9f7 drm/i915/shrinker: Hook up vmap allocation failure notifier
> 93698e6141bccbabfc898a7d2cad5577f5af893b mm/vmap: Add a notifier for when we run out of vmap address space
> 
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: ✗ Fi.CI.BAT: failure for series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3)
  2016-03-29 12:51   ` Marius Vlad
@ 2016-03-29 12:56     ` Chris Wilson
  0 siblings, 0 replies; 27+ messages in thread
From: Chris Wilson @ 2016-03-29 12:56 UTC (permalink / raw)
  To: intel-gfx

On Tue, Mar 29, 2016 at 03:51:12PM +0300, Marius Vlad wrote:
> We're not catching it, but this gives a deadlock trace when running
> kms_pipe_crc_basic@suspend-read-crc-pipe-A (happens on BSW):

> [  133.022915] ======================================================
> [  133.022916] [ INFO: possible circular locking dependency detected ]
> [  133.022921] 4.5.0-gfxbench-Patchwork_315+ #1 Tainted: G     U         
> [  133.022922] -------------------------------------------------------

That's the warning (just a boring, impossible-to-hit warning) that is
resolved by the patch which moves the kernfs locking around to avoid
nesting mmap_sem inside it.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2016-03-29 12:56 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-17 11:59 [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space Chris Wilson
2016-03-17 11:59 ` Chris Wilson
2016-03-17 11:59 ` Chris Wilson
2016-03-17 11:59 ` [PATCH 2/2] drm/i915/shrinker: Hook up vmap allocation failure notifier Chris Wilson
2016-03-17 11:59   ` Chris Wilson
2016-03-17 12:37 ` [PATCH 1/2] mm/vmap: Add a notifier for when we run out of vmap address space Roman Peniaev
2016-03-17 12:37   ` Roman Peniaev
2016-03-17 12:57   ` Chris Wilson
2016-03-17 12:57     ` Chris Wilson
2016-03-17 13:21     ` Roman Peniaev
2016-03-17 13:21       ` Roman Peniaev
2016-03-17 13:30       ` Chris Wilson
2016-03-17 13:30         ` Chris Wilson
2016-03-17 13:34 ` [PATCH v2] " Chris Wilson
2016-03-17 13:34   ` Chris Wilson
2016-03-17 13:34   ` Chris Wilson
2016-03-17 13:41   ` Chris Wilson
2016-03-17 13:41     ` Chris Wilson
2016-03-28 23:15     ` Andrew Morton
2016-03-28 23:15       ` Andrew Morton
2016-03-29  8:16       ` [PATCH v3] " Chris Wilson
2016-03-29  8:16         ` Chris Wilson
2016-03-29  8:16         ` Chris Wilson
2016-03-18  7:03 ` ✗ Fi.CI.BAT: failure for series starting with [v2] mm/vmap: Add a notifier for when we run out of vmap address space (rev2) Patchwork
2016-03-29  8:34 ` ✗ Fi.CI.BAT: failure for series starting with [v3] mm/vmap: Add a notifier for when we run out of vmap address space (rev3) Patchwork
2016-03-29 12:51   ` Marius Vlad
2016-03-29 12:56     ` Chris Wilson

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.