* [RFC 0/9] backports: take us to next-20130703
@ 2013-07-28  1:16 Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 1/9] backports: simplify space regexp for src_line Luis R. Rodriguez
                   ` (8 more replies)
  0 siblings, 9 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

We're again a bit behind linux-next, but it's time to push things
forward. The delta between next-20130627 and next-20130703 introduced
one feature that was particularly hard to backport: ww_mutex. So far
I have only compile tested this series, so I'd appreciate some
testing and review.

Johannes Berg (1):
  backport: disable unused automatic backports

Luis R. Rodriguez (8):
  backports: simplify space regexp for src_line
  backports: fix wq_name_list initialization
  backports: fix DMI_EXACT_MATCH() backport
  backports: copy over mfd/max8998.h mfd/max8998-private.h
  backports: backport of_get_child_by_name() support
  backports: backport ww_mutex support
  backports: backport cross-device reservation support
  backports: refresh patches for next-20130703

 backport/backport-include/linux/mod_devicetable.h  |    2 +-
 backport/backport-include/linux/of.h               |   23 +-
 backport/backport-include/linux/reservation.h      |   70 ++
 backport/backport-include/linux/ww_mutex.h         |  333 ++++++++++
 backport/compat/Kconfig                            |   18 +
 backport/compat/Makefile                           |    2 +
 backport/compat/compat-3.3.c                       |    4 +-
 backport/compat/compat-3.7.c                       |   29 +
 backport/compat/drivers-base-reservation.c         |   39 ++
 backport/compat/kernel/ww_mutex.c                  |  667 ++++++++++++++++++++
 copy-list                                          |    2 +
 gentree.py                                         |    8 +-
 lib/kconfig.py                                     |    2 +-
 .../drivers_gpu_drm_i915_i915_gem.patch            |    2 +-
 .../drivers_gpu_drm_i915_i915_gem.patch            |    2 +-
 .../drivers_gpu_drm_i915_i915_gem.patch            |    2 +-
 .../14-shrinkers-api/drivers_gpu_drm_i915.patch    |   12 +-
 .../network/0001-netdev_ops/alx.patch              |    2 +-
 .../network/0007-pci_dev_dev_flags/alx.patch       |    2 +-
 .../drivers_net_wireless_b43_main.patch            |    6 +-
 .../11-dev-pm-ops/drivers_bcma_host_pci.patch      |    2 +-
 .../drivers_net_ethernet_atheros_alx_main.patch    |    4 +-
 .../drivers_net_ethernet_atheros_alx_main.patch    |   10 +-
 .../drivers_net_wireless_b43_main.patch            |    2 +-
 .../drivers_net_usb_cdc_ether.patch                |    2 +-
 25 files changed, 1215 insertions(+), 32 deletions(-)
 create mode 100644 backport/backport-include/linux/reservation.h
 create mode 100644 backport/backport-include/linux/ww_mutex.h
 create mode 100644 backport/compat/drivers-base-reservation.c
 create mode 100644 backport/compat/kernel/ww_mutex.c

-- 
1.7.10.4


* [RFC 1/9] backports: simplify space regexp for src_line
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 2/9] backports: fix wq_name_list initialization Luis R. Rodriguez
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

The regexp doesn't require the extra brackets: a bare \s matches the
same whitespace characters as [\s].

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 lib/kconfig.py |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/kconfig.py b/lib/kconfig.py
index 8a30ddc..179121a 100644
--- a/lib/kconfig.py
+++ b/lib/kconfig.py
@@ -4,7 +4,7 @@
 
 import os, re
 
-src_line = re.compile(r'^[\s]*source\s+"?(?P<src>[^\s"]*)"?\s*$')
+src_line = re.compile(r'^\s*source\s+"?(?P<src>[^\s"]*)"?\s*$')
 tri_line = re.compile(r'^(?P<spc>\s+)tristate')
 bool_line = re.compile(r'^(?P<spc>\s+)bool')
 cfg_line = re.compile(r'^(config|menuconfig)\s+(?P<sym>[^\s]*)')
-- 
1.7.10.4


* [RFC 2/9] backports: fix wq_name_list initialization
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 1/9] backports: simplify space regexp for src_line Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 3/9] backports: fix DMI_EXACT_MATCH() backport Luis R. Rodriguez
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

As noted by Johannes, wq_name_list was never actually initialized;
use LIST_HEAD() so the list head starts out as an empty list.
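
For reference, a minimal sketch (not part of the patch) of why the
plain static declaration is not enough: a static struct list_head is
zero-initialized, so the first list_add_tail() on it dereferences
NULL next/prev pointers, while LIST_HEAD() makes next/prev point back
at the head itself, which is the empty-list invariant the list
helpers expect. The example function and list names below are made up:

	#include <linux/list.h>

	static struct list_head broken_list;	/* .next = .prev = NULL */
	static LIST_HEAD(working_list);		/* .next = .prev = &working_list */

	static void example_add(struct list_head *entry)
	{
		/* list_add_tail(entry, &broken_list) would oops on NULL */
		list_add_tail(entry, &working_list);	/* fine */
	}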

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 backport/compat/compat-3.3.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/backport/compat/compat-3.3.c b/backport/compat/compat-3.3.c
index d86b40e..75bd5e5 100644
--- a/backport/compat/compat-3.3.c
+++ b/backport/compat/compat-3.3.c
@@ -174,7 +174,7 @@ out:
 EXPORT_SYMBOL_GPL(__pskb_copy);
 
 static DEFINE_SPINLOCK(wq_name_lock);
-static struct list_head wq_name_list;
+static LIST_HEAD(wq_name_list);
 
 struct wq_name {
 	struct list_head list;
@@ -204,7 +204,7 @@ backport_alloc_workqueue(const char *fmt, unsigned int flags,
 				    0,
 #endif
 				    key, lock_name);
-#else	                        
+#else
 	wq = __alloc_workqueue_key(n->name, flags, max_active, key, lock_name);
 #endif
 	if (!wq) {
-- 
1.7.10.4


* [RFC 3/9] backports: fix DMI_EXACT_MATCH() backport
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 1/9] backports: simplify space regexp for src_line Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 2/9] backports: fix wq_name_list initialization Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 4/9] backport: disable unused automatic backports Luis R. Rodriguez
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

DMI_EXACT_MATCH uses struct dmi_strmatch's new member exact_match:

	@@ -456,7 +456,8 @@ enum dmi_field {
	 };

	 struct dmi_strmatch {
	-       unsigned char slot;
	+       unsigned char slot:7;
	+       unsigned char exact_match:1;
		char substr[79];
	 };

Prior to 5017b285 we only had the slot member, so to honor the intent
of DMI_EXACT_MATCH() we would have to do something like slot |= 1
whenever it is used. That, however, assumes older kernels carry the
matching sanity check that 5017b285 added to
drivers/firmware/dmi_scan.c. dmi_scan.o is only built with CONFIG_DMI;
at least on x86 its objects end up sprinkled around arch/x86/, and
more importantly CONFIG_DMI is a bool. I've argued before how I
envision us being able to backport core components (see 0935deab for
the hint) but as it stands right now we can't: we only backport
things we can carry as modules.

Since we only backport modularly for now we can't backport
DMI_EXACT_MATCH(), so strictly speaking all entries defined with it
should be ifdef'd out and used only on kernels >= v3.11. To help
reduce code churn, though, we can instead force such entries to be
ignored for now. We therefore backport DMI_EXACT_MATCH() as a match
that will never be found.
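
To illustrate with a hedged sketch (not from the patch): with the
define below a driver quirk table using DMI_EXACT_MATCH() still
compiles on kernels < v3.11, its entry just never matches because no
system reports the placeholder product name. The table, ident string
and board name here are made up:

	#include <linux/dmi.h>

	static const struct dmi_system_id example_quirks[] = {
		{
			.ident = "Example vendor board",
			.matches = {
				/* on < v3.11 this expands to
				 * DMI_MATCH(DMI_PRODUCT_NAME, "BACKPORT_IGNORE")
				 */
				DMI_EXACT_MATCH(DMI_BOARD_NAME, "EXAMPLE-BOARD"),
			},
		},
		{ }
	};

	/* dmi_check_system(example_quirks) then returns 0 on older kernels. */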

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 backport/backport-include/linux/mod_devicetable.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/backport/backport-include/linux/mod_devicetable.h b/backport/backport-include/linux/mod_devicetable.h
index 5b63637..c09793b 100644
--- a/backport/backport-include/linux/mod_devicetable.h
+++ b/backport/backport-include/linux/mod_devicetable.h
@@ -3,7 +3,7 @@
 #include_next <linux/mod_devicetable.h>
 
 #if LINUX_VERSION_CODE < KERNEL_VERSION(3,11,0)
-#define DMI_EXACT_MATCH(a, b)  { a, b }
+#define DMI_EXACT_MATCH(a, b)  DMI_MATCH(DMI_PRODUCT_NAME, "BACKPORT_IGNORE")
 #endif
 
 #ifndef HID_BUS_ANY
-- 
1.7.10.4


* [RFC 4/9] backport: disable unused automatic backports
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
                   ` (2 preceding siblings ...)
  2013-07-28  1:16 ` [RFC 3/9] backports: fix DMI_EXACT_MATCH() backport Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 5/9] backports: copy over mfd/max8998.h mfd/max8998-private.h Luis R. Rodriguez
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Johannes Berg, Luis R. Rodriguez

From: Johannes Berg <johannes.berg@intel.com>

When an automatic backport isn't included due to not
being used (see commit 6e0475b599217eceb8e01a1e572c,
"gentree: add automatic backports only if needed"),
selecting BACKPORT_USERSEL_BUILD_ALL will make the
build fail. Avoid this by disabling such symbols in
the Kconfig.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 gentree.py |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/gentree.py b/gentree.py
index 356871d..b6c98e8 100755
--- a/gentree.py
+++ b/gentree.py
@@ -176,6 +176,7 @@ def automatic_backport_mangle_c_file(name):
 
 
 def add_automatic_backports(args):
+    disable_list = []
     export = re.compile(r'^EXPORT_SYMBOL(_GPL)?\((?P<sym>[^\)]*)\)')
     bpi = kconfig.get_backport_info(os.path.join(args.outdir, 'compat', 'Kconfig'))
     configtree = kconfig.ConfigTree(os.path.join(args.outdir, 'Kconfig'))
@@ -183,6 +184,7 @@ def add_automatic_backports(args):
     for sym, vals in bpi.items():
         if sym.startswith('BACKPORT_BUILD_'):
             if not sym[15:] in all_selects:
+                disable_list.append(sym)
                 continue
         symtype, module_name, c_files, h_files = vals
 
@@ -229,6 +231,7 @@ def add_automatic_backports(args):
                 outf.write('#define %s LINUX_BACKPORT(%s)\n' % (s, s))
             outf.write('#include <%s>\n' % (os.path.dirname(f) + '/backport-' + os.path.basename(f), ))
             outf.write('#endif /* CPTCFG_%s */\n' % sym)
+    return disable_list
 
 def git_debug_init(args):
     """
@@ -346,7 +349,10 @@ def process(kerneldir, outdir, copy_list_file, git_revision=None,
 
     git_debug_snapshot(args, 'Add driver sources')
 
-    add_automatic_backports(args)
+    disable_list = add_automatic_backports(args)
+    if disable_list:
+        bpcfg = kconfig.ConfigTree(os.path.join(args.outdir, 'compat', 'Kconfig'))
+        bpcfg.disable_symbols(disable_list)
     git_debug_snapshot(args, 'Add automatic backports')
 
     logwrite('Apply patches ...')
-- 
1.7.10.4


* [RFC 5/9] backports: copy over mfd/max8998.h mfd/max8998-private.h
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
                   ` (3 preceding siblings ...)
  2013-07-28  1:16 ` [RFC 4/9] backport: disable unused automatic backports Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 6/9] backports: backport of_get_child_by_name() support Luis R. Rodriguez
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

These headers are required by CONFIG_REGULATOR_MAX8998, which
builds drivers/regulator/max8998.c as of next-20130703.

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 copy-list |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/copy-list b/copy-list
index 9957811..ab8bb8b 100644
--- a/copy-list
+++ b/copy-list
@@ -141,6 +141,8 @@ include/linux/regulator/max8952.h
 include/linux/regulator/max8973-regulator.h
 include/linux/mfd/max77693.h
 include/linux/mfd/max77693-private.h
+include/linux/mfd/max8998.h
+include/linux/mfd/max8998-private.h
 include/linux/regulator/of_regulator.h
 include/linux/regulator/tps51632-regulator.h
 include/linux/regulator/tps62360.h
-- 
1.7.10.4


* [RFC 6/9] backports: backport of_get_child_by_name() support
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
                   ` (4 preceding siblings ...)
  2013-07-28  1:16 ` [RFC 5/9] backports: copy over mfd/max8998.h mfd/max8998-private.h Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16   ` Luis R. Rodriguez
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

of_get_child_by_name() was added upstream via 9c19761a. While at it,
clean up the backported of.h header a bit to make backporting further
OF functionality more manageable.

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 9c19761a
v3.7-rc1~123^2~4

commit 9c19761a7ecdc86abb2fba0feb81e8952eccc1f1
Author: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Date:   Tue Sep 18 08:10:28 2012 +0100

    dt: introduce of_get_child_by_name to get child node by name

    This patch introduces of_get_child_by_name function to get a child node
    by its name in a given parent node.

    Without this patch each driver code has to iterate the parent and do
    a string compare, However having of_get_child_by_name libary function would
    avoid code duplication, errors and is more convenient.

    Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
    Signed-off-by: Rob Herring <rob.herring@calxeda.com>
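
As a hedged usage sketch (not part of the patch), this is the kind of
caller the backport keeps working unchanged on kernels < v3.7; the
probe function and the "regulators" child name are made up:

	#include <linux/errno.h>
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int example_probe(struct platform_device *pdev)
	{
		struct device_node *child;

		child = of_get_child_by_name(pdev->dev.of_node, "regulators");
		if (!child)
			return -ENODEV;

		/* ... parse properties of the child node ... */

		of_node_put(child);	/* the lookup took a reference */
		return 0;
	}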

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 backport/backport-include/linux/of.h |   23 ++++++++++++++++++++---
 backport/compat/compat-3.7.c         |   29 +++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+), 3 deletions(-)

diff --git a/backport/backport-include/linux/of.h b/backport/backport-include/linux/of.h
index c5dc87c..93e91dd 100644
--- a/backport/backport-include/linux/of.h
+++ b/backport/backport-include/linux/of.h
@@ -4,13 +4,30 @@
 #include <linux/version.h>
 
 #if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,34))
-#include_next <linux/of.h>
-#else
+#define KERNEL_HAS_OF_SUPPORT 1
+#endif /* (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,34)) */
 
 #ifdef CONFIG_OF
+#define KERNEL_HAS_OF_SUPPORT 1
+#endif /* CONFIG_OF */
+
+#ifdef KERNEL_HAS_OF_SUPPORT
 #include_next <linux/of.h>
+
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3,7,0))
+#ifdef CONFIG_OF
+extern struct device_node *of_get_child_by_name(const struct device_node *node,
+						const char *name);
+#else
+static inline struct device_node *of_get_child_by_name(
+					const struct device_node *node,
+					const char *name)
+{
+	return NULL;
+}
 #endif /* CONFIG_OF */
+#endif /* (LINUX_VERSION_CODE < KERNEL_VERSION(3,7,0)) */
 
-#endif /* (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,34)) */
+#endif /* KERNEL_HAS_OF_SUPPORT */
 
 #endif	/* _COMPAT_LINUX_OF_H */
diff --git a/backport/compat/compat-3.7.c b/backport/compat/compat-3.7.c
index 0f2d332..8f5a56c 100644
--- a/backport/compat/compat-3.7.c
+++ b/backport/compat/compat-3.7.c
@@ -251,3 +251,32 @@ int pcie_capability_clear_and_set_dword(struct pci_dev *dev, int pos,
 	return ret;
 }
 EXPORT_SYMBOL_GPL(pcie_capability_clear_and_set_dword);
+
+#ifdef KERNEL_HAS_OF_SUPPORT
+#ifdef CONFIG_OF
+#if (LINUX_VERSION_CODE < KERNEL_VERSION(3,7,0))
+/**
+ *	of_get_child_by_name - Find the child node by name for a given parent
+ *	@node:	parent node
+ *	@name:	child name to look for.
+ *
+ *      This function looks for child node for given matching name
+ *
+ *	Returns a node pointer if found, with refcount incremented, use
+ *	of_node_put() on it when done.
+ *	Returns NULL if node is not found.
+ */
+struct device_node *of_get_child_by_name(const struct device_node *node,
+				const char *name)
+{
+	struct device_node *child;
+
+	for_each_child_of_node(node, child)
+		if (child->name && (of_node_cmp(child->name, name) == 0))
+			break;
+	return child;
+}
+EXPORT_SYMBOL_GPL(of_get_child_by_name);
+#endif /* (LINUX_VERSION_CODE < KERNEL_VERSION(3,7,0)) */
+#endif /* CONFIG_OF */
+#endif /* KERNEL_HAS_OF_SUPPORT */
-- 
1.7.10.4


* [RFC 7/9] backports: backport ww_mutex support
@ 2013-07-28  1:16   ` Luis R. Rodriguez
  0 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports
  Cc: Luis R. Rodriguez, maarten.lankhorst, Daniel Vetter, Rob Clark,
	Peter Zijlstra, dri-devel, linaro-mm-sig, rostedt, daniel,
	Linus Torvalds, Andrew Morton, Thomas Gleixner

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

This backports the kernel's wound/wait style locks from 040a0a371,
using linux-stable v3.11-rc2 as the base for development. Given the
complexity of supporting mutex debugging, this backport is simplified
by only making the feature available if you have DEBUG_MUTEXES and
DEBUG_LOCK_ALLOC disabled. Since ww_mutex is required for DRM, this
also means we must update the kconfig for DRM so that DRM cannot be
built if either of these options is enabled. Support for
DEBUG_MUTEXES and DEBUG_LOCK_ALLOC can be added later by anyone
daring.

Part of the ww mutex addition to the kernel required modifying the
fast path mutex locking scheme so that callers deal with the slow
path alternatives on their own (refer to a41b56ef). The reason for
this change was that the mutex fastpath implementation assumed the
slowpath alternative could only be passed one argument, while ww
mutexes need a context passed down to the slow path.

It'd be painful to backport all the asm for an optimized fastpath
implementation, so the backported ww mutex fast path is penalized by
using the generic atomic_dec_return().

To backport a clean mutex_lock_common() of our own with the least
amount of changes against upstream, commits 2bd2c92c and 41fcb9f2
also needed to be backported. Commit 2bd2c92c added support for
queueing mutex spinners with an MCS lock; since this cannot be
backported for older kernels we provide empty inlines. Commit
41fcb9f2 just removed SCHED_FEAT_OWNER_SPIN as it was an early hack;
the only thing required to backport it was an alternative declaration
for mutex_spin_on_owner(), as it is declared non-inline on older
kernels.

Finally, c5491ea7 required backporting schedule_preempt_disabled() as
well, but that just consisted of carrying over the original
implementation. Since it's not exported we reimplement it to make
it to make it available to our internal core ww mutex port.
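
For reference, here is a hedged sketch of the wait/wound usage
pattern this backport is meant to enable, following the style of
Documentation/ww-mutex-design.txt; the class, the helper and the
assumption that the two locks are distinct are made up for
illustration:

	#include <linux/kernel.h>
	#include <linux/ww_mutex.h>

	static DEFINE_WW_CLASS(example_ww_class);

	/* Acquire two distinct ww mutexes in any order without deadlocking. */
	static void example_lock_pair(struct ww_mutex *a, struct ww_mutex *b)
	{
		struct ww_acquire_ctx ctx;

		ww_acquire_init(&ctx, &example_ww_class);

		if (ww_mutex_lock(a, &ctx))	/* -EDEADLK, nothing held yet */
			ww_mutex_lock_slow(a, &ctx);

		while (ww_mutex_lock(b, &ctx) == -EDEADLK) {
			/* wounded: back off, sleep on the contended lock, retry */
			ww_mutex_unlock(a);
			ww_mutex_lock_slow(b, &ctx);
			swap(a, b);
		}
		ww_acquire_done(&ctx);

		/* ... access the data protected by both locks ... */

		ww_mutex_unlock(a);
		ww_mutex_unlock(b);
		ww_acquire_fini(&ctx);
	}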

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 040a0a371
v3.11-rc1~147^2~5

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains a41b56ef
v3.11-rc1~147^2~6

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 2bd2c92c
v3.10-rc1~200^2~3

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 41fcb9f2
v3.10-rc1~200^2~5

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains c5491ea7
v3.4-rc1~3^2~27

commit 040a0a37100563754bb1fee6ff6427420bcfa609
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date:   Mon Jun 24 10:30:04 2013 +0200

    mutex: Add support for wound/wait style locks

    Wound/wait mutexes are used when other multiple lock
    acquisitions of a similar type can be done in an arbitrary
    order. The deadlock handling used here is called wait/wound in
    the RDBMS literature: The older tasks waits until it can acquire
    the contended lock. The younger tasks needs to back off and drop
    all the locks it is currently holding, i.e. the younger task is
    wounded.

    For full documentation please read Documentation/ww-mutex-design.txt.

    References: https://lwn.net/Articles/548909/
    Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
    Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
    Acked-by: Rob Clark <robdclark@gmail.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: dri-devel@lists.freedesktop.org
    Cc: linaro-mm-sig@lists.linaro.org
    Cc: rostedt@goodmis.org
    Cc: daniel@ffwll.ch
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/51C8038C.9000106@canonical.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

commit a41b56efa70e060f650aeb54740aaf52044a1ead
Author: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Date:   Thu Jun 20 13:31:05 2013 +0200

    arch: Make __mutex_fastpath_lock_retval return whether fastpath succeeded or not

    This will allow me to call functions that have multiple
    arguments if fastpath fails. This is required to support ticket
    mutexes, because they need to be able to pass an extra argument
    to the fail function.

    Originally I duplicated the functions, by adding
    __mutex_fastpath_lock_retval_arg. This ended up being just a
    duplication of the existing function, so a way to test if
    fastpath was called ended up being better.

    This also cleaned up the reservation mutex patch some by being
    able to call an atomic_set instead of atomic_xchg, and making it
    easier to detect if the wrong unlock function was previously
    used.

    Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: dri-devel@lists.freedesktop.org
    Cc: linaro-mm-sig@lists.linaro.org
    Cc: robclark@gmail.com
    Cc: rostedt@goodmis.org
    Cc: daniel@ffwll.ch
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Link: http://lkml.kernel.org/r/20130620113105.4001.83929.stgit@patser
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

commit 2bd2c92cf07cc4a373bf316c75b78ac465fefd35
Author: Waiman Long <Waiman.Long@hp.com>
Date:   Wed Apr 17 15:23:13 2013 -0400

    mutex: Queue mutex spinners with MCS lock to reduce cacheline contention

    <-- snip -->

commit 41fcb9f230bf773656d1768b73000ef720bf00c3
Author: Waiman Long <Waiman.Long@hp.com>
Date:   Wed Apr 17 15:23:11 2013 -0400

    mutex: Move mutex spinning code from sched/core.c back to mutex.c

    <-- snip -->

commit c5491ea779793f977d282754db478157cc409d82
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Mon Mar 21 12:09:35 2011 +0100

    sched/rt: Add schedule_preempt_disabled()

    <-- snip -->

Cc: maarten.lankhorst@canonical.com
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: rostedt@goodmis.org
Cc: daniel@ffwll.ch
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 backport/backport-include/linux/ww_mutex.h |  333 ++++++++++++++
 backport/compat/Kconfig                    |   11 +
 backport/compat/Makefile                   |    1 +
 backport/compat/kernel/ww_mutex.c          |  667 ++++++++++++++++++++++++++++
 4 files changed, 1012 insertions(+)
 create mode 100644 backport/backport-include/linux/ww_mutex.h
 create mode 100644 backport/compat/kernel/ww_mutex.c

diff --git a/backport/backport-include/linux/ww_mutex.h b/backport/backport-include/linux/ww_mutex.h
new file mode 100644
index 0000000..0953939
--- /dev/null
+++ b/backport/backport-include/linux/ww_mutex.h
@@ -0,0 +1,333 @@
+#ifndef __BACKPORT_LINUX_WW_MUTEX_H
+#define __BACKPORT_LINUX_WW_MUTEX_H
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0)
+#include_next <linux/ww_mutex.h>
+#else
+#ifdef CPTCFG_BACKPORT_BUILD_WW_MUTEX
+/*
+ * Wound/Wait Mutexes: blocking mutual exclusion locks with deadlock avoidance
+ *
+ * Original mutex implementation started by Ingo Molnar:
+ *
+ *  Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *
+ * Wound/wait implementation:
+ *  Copyright (C) 2013 Canonical Ltd.
+ *
+ * This file contains the main data structure and API definitions.
+ */
+
+#include <linux/mutex.h>
+
+struct ww_class {
+	atomic_long_t stamp;
+	struct lock_class_key acquire_key;
+	struct lock_class_key mutex_key;
+	const char *acquire_name;
+	const char *mutex_name;
+};
+
+struct ww_acquire_ctx {
+	struct task_struct *task;
+	unsigned long stamp;
+	unsigned acquired;
+};
+
+struct ww_mutex {
+	struct mutex base;
+	struct ww_acquire_ctx *ctx;
+};
+
+# define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class)
+
+#define __WW_CLASS_INITIALIZER(ww_class) \
+		{ .stamp = ATOMIC_LONG_INIT(0) \
+		, .acquire_name = #ww_class "_acquire" \
+		, .mutex_name = #ww_class "_mutex" }
+
+#define __WW_MUTEX_INITIALIZER(lockname, class) \
+		{ .base = { \__MUTEX_INITIALIZER(lockname) } \
+		__WW_CLASS_MUTEX_INITIALIZER(lockname, class) }
+
+#define DEFINE_WW_CLASS(classname) \
+	struct ww_class classname = __WW_CLASS_INITIALIZER(classname)
+
+#define DEFINE_WW_MUTEX(mutexname, ww_class) \
+	struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class)
+
+/**
+ * ww_mutex_init - initialize the w/w mutex
+ * @lock: the mutex to be initialized
+ * @ww_class: the w/w class the mutex should belong to
+ *
+ * Initialize the w/w mutex to unlocked state and associate it with the given
+ * class.
+ *
+ * It is not allowed to initialize an already locked mutex.
+ */
+#define ww_mutex_init LINUX_BACKPORT(ww_mutex_init)
+static inline void ww_mutex_init(struct ww_mutex *lock,
+				 struct ww_class *ww_class)
+{
+	__mutex_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
+	lock->ctx = NULL;
+}
+
+/**
+ * ww_acquire_init - initialize a w/w acquire context
+ * @ctx: w/w acquire context to initialize
+ * @ww_class: w/w class of the context
+ *
+ * Initializes an context to acquire multiple mutexes of the given w/w class.
+ *
+ * Context-based w/w mutex acquiring can be done in any order whatsoever within
+ * a given lock class. Deadlocks will be detected and handled with the
+ * wait/wound logic.
+ *
+ * Mixing of context-based w/w mutex acquiring and single w/w mutex locking can
+ * result in undetected deadlocks and is so forbidden. Mixing different contexts
+ * for the same w/w class when acquiring mutexes can also result in undetected
+ * deadlocks, and is hence also forbidden. Both types of abuse will be caught by
+ * enabling CONFIG_PROVE_LOCKING.
+ *
+ * Nesting of acquire contexts for _different_ w/w classes is possible, subject
+ * to the usual locking rules between different lock classes.
+ *
+ * An acquire context must be released with ww_acquire_fini by the same task
+ * before the memory is freed. It is recommended to allocate the context itself
+ * on the stack.
+ */
+#define ww_acquire_init LINUX_BACKPORT(ww_acquire_init)
+static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
+				   struct ww_class *ww_class)
+{
+	ctx->task = current;
+	ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
+	ctx->acquired = 0;
+}
+
+/**
+ * ww_acquire_done - marks the end of the acquire phase
+ * @ctx: the acquire context
+ *
+ * Marks the end of the acquire phase, any further w/w mutex lock calls using
+ * this context are forbidden.
+ *
+ * Calling this function is optional, it is just useful to document w/w mutex
+ * code and clearly designated the acquire phase from actually using the locked
+ * data structures.
+ */
+#define ww_acquire_done LINUX_BACKPORT(ww_acquire_done)
+static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
+{
+}
+
+/**
+ * ww_acquire_fini - releases a w/w acquire context
+ * @ctx: the acquire context to free
+ *
+ * Releases a w/w acquire context. This must be called _after_ all acquired w/w
+ * mutexes have been released with ww_mutex_unlock.
+ */
+#define ww_acquire_fini LINUX_BACKPORT(ww_acquire_fini)
+static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
+{
+}
+
+#define __ww_mutex_lock LINUX_BACKPORT(__ww_mutex_lock)
+extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
+					struct ww_acquire_ctx *ctx);
+#define __ww_mutex_lock_interruptible LINUX_BACKPORT(__ww_mutex_lock_interruptible)
+extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
+						      struct ww_acquire_ctx *ctx);
+
+/**
+ * ww_mutex_lock - acquire the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context, or NULL to acquire only a single lock.
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * will either sleep until it is (wait case). Or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this
+ * lock and proceed with trying to acquire further w/w mutexes (e.g. when
+ * scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+#define ww_mutex_lock LINUX_BACKPORT(ww_mutex_lock)
+static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock(lock, ctx);
+
+	mutex_lock(&lock->base);
+	return 0;
+}
+
+/**
+ * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately avaiable this function
+ * will either sleep until it is (wait case). Or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a
+ * signal arrives while waiting for the lock then this function returns -EINTR.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow_interruptible. Alternatively callers can opt to
+ * not acquire this lock and proceed with trying to acquire further w/w mutexes
+ * (e.g. when scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+#define ww_mutex_lock_interruptible LINUX_BACKPORT(ww_mutex_lock_interruptible)
+static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
+							   struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock_interruptible(lock, ctx);
+	else
+		return mutex_lock_interruptible(&lock->base);
+}
+
+/**
+ * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the context held. It is forbidden to call this on anything else than the
+ * contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock directly. This function here is simply to help w/w mutex
+ * locking code readability by clearly denoting the slowpath.
+ */
+#define ww_mutex_lock_slow LINUX_BACKPORT(ww_mutex_lock_slow)
+static inline void
+ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+	ret = ww_mutex_lock(lock, ctx);
+	(void)ret;
+}
+
+/**
+ * ww_mutex_lock_slow_interruptible - slowpath acquiring of the w/w mutex, interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available and returns 0 when the lock has
+ * been acquired. If a signal arrives while waiting for the lock then this
+ * function returns -EINTR.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the given context held. It is forbidden to call this on anything else
+ * than the contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock_interruptible directly. This function here is simply to help
+ * w/w mutex locking code readability by clearly denoting the slowpath.
+ */
+#define ww_mutex_lock_slow_interruptible LINUX_BACKPORT(ww_mutex_lock_slow_interruptible)
+static inline int __must_check
+ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
+				 struct ww_acquire_ctx *ctx)
+{
+	return ww_mutex_lock_interruptible(lock, ctx);
+}
+
+#define ww_mutex_unlock LINUX_BACKPORT(ww_mutex_unlock)
+extern void ww_mutex_unlock(struct ww_mutex *lock);
+
+/**
+ * ww_mutex_trylock - tries to acquire the w/w mutex without acquire context
+ * @lock: mutex to lock
+ *
+ * Trylocks a mutex without acquire context, so no deadlock detection is
+ * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise.
+ */
+#define ww_mutex_trylock LINUX_BACKPORT(ww_mutex_trylock)
+static inline int __must_check ww_mutex_trylock(struct ww_mutex *lock)
+{
+	return mutex_trylock(&lock->base);
+}
+
+/***
+ * ww_mutex_destroy - mark a w/w mutex unusable
+ * @lock: the mutex to be destroyed
+ *
+ * This function marks the mutex uninitialized, and any subsequent
+ * use of the mutex is forbidden. The mutex must not be locked when
+ * this function is called.
+ */
+#define ww_mutex_destroy LINUX_BACKPORT(ww_mutex_destroy)
+static inline void ww_mutex_destroy(struct ww_mutex *lock)
+{
+	mutex_destroy(&lock->base);
+}
+
+/**
+ * ww_mutex_is_locked - is the w/w mutex locked
+ * @lock: the mutex to be queried
+ *
+ * Returns 1 if the mutex is locked, 0 if unlocked.
+ */
+#define ww_mutex_is_locked LINUX_BACKPORT(ww_mutex_is_locked)
+static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
+{
+	return mutex_is_locked(&lock->base);
+}
+
+#endif /* CPTCFG_BACKPORT_BUILD_WW_MUTEX */
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0) */
+#endif /* __BACKPORT_LINUX_WW_MUTEX_H */
diff --git a/backport/compat/Kconfig b/backport/compat/Kconfig
index e2f0cdd..f3c1ab3 100644
--- a/backport/compat/Kconfig
+++ b/backport/compat/Kconfig
@@ -185,6 +185,17 @@ config BACKPORT_LEDS_CLASS
 config BACKPORT_LEDS_TRIGGERS
 	bool
 
+config BACKPORT_BUILD_WW_MUTEX
+	bool
+	# Build only if on kernels < 3.11
+	# For now only DRM drivers use ww mutexes.
+	depends on DRM && BACKPORT_KERNEL_3_11
+	default y if BACKPORT_USERSEL_BUILD_ALL
+	# probably a bad idea if you have these options given we
+	# ripped those options out.
+	depends on !DEBUG_MUTEXES
+	depends on !DEBUG_LOCK_ALLOC
+
 config BACKPORT_BUILD_RADIX_HELPERS
 	bool
 	# You have selected to build backported DRM drivers
diff --git a/backport/compat/Makefile b/backport/compat/Makefile
index 252290e..fec01c4 100644
--- a/backport/compat/Makefile
+++ b/backport/compat/Makefile
@@ -41,3 +41,4 @@ compat-$(CPTCFG_BACKPORT_BUILD_KFIFO) += kfifo.o
 compat-$(CPTCFG_BACKPORT_BUILD_GENERIC_ATOMIC64) += compat_atomic.o
 compat-$(CPTCFG_BACKPORT_BUILD_DMA_SHARED_HELPERS) += dma-shared-helpers.o
 compat-$(CPTCFG_BACKPORT_BUILD_RADIX_HELPERS) += lib-radix-tree-helpers.o
+compat-$(CPTCFG_BACKPORT_BUILD_WW_MUTEX) += kernel/ww_mutex.o
diff --git a/backport/compat/kernel/ww_mutex.c b/backport/compat/kernel/ww_mutex.c
new file mode 100644
index 0000000..257c2a4
--- /dev/null
+++ b/backport/compat/kernel/ww_mutex.c
@@ -0,0 +1,667 @@
+/*
+ * Copyright (c) 2013  Luis R. Rodriguez <mcgrof@do-not-panic.com>
+ *
+ * Backport ww mutex for older kernels. This is not supported when
+ * DEBUG_MUTEXES or DEBUG_LOCK_ALLOC is enabled.
+ *
+ * Taken from: kernel/mutex.c - via linux-stable v3.11-rc2
+ *
+ * Mutexes: blocking mutual exclusion locks
+ *
+ * Started by Ingo Molnar:
+ *
+ *  Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
+ *
+ * Many thanks to Arjan van de Ven, Thomas Gleixner, Steven Rostedt and
+ * David Howells for suggestions and improvements.
+ *
+ *  - Adaptive spinning for mutexes by Peter Zijlstra. (Ported to mainline
+ *    from the -rt tree, where it was originally implemented for rtmutexes
+ *    by Steven Rostedt, based on work by Gregory Haskins, Peter Morreale
+ *    and Sven Dietrich.
+ *
+ * Also see Documentation/mutex-design.txt.
+ */
+
+#include <linux/mutex.h>
+#include <linux/ww_mutex.h>
+#include <asm/mutex.h>
+#include <linux/sched.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,9,0)
+#include <linux/sched/rt.h>
+#endif
+#include <linux/export.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
+#include <linux/version.h>
+
+/*
+ * A negative mutex count indicates that waiters are sleeping waiting for the
+ * mutex.
+ */
+#define	MUTEX_SHOW_NO_WAITER(mutex)	(atomic_read(&(mutex)->count) >= 0)
+
+#define spin_lock_mutex(lock, flags) \
+	do { spin_lock(lock); (void)(flags); } while (0)
+#define spin_unlock_mutex(lock, flags) \
+	do { spin_unlock(lock); (void)(flags); } while (0)
+#define mutex_remove_waiter(lock, waiter, ti) \
+	__list_del((waiter)->list.prev, (waiter)->list.next)
+
+#ifdef CONFIG_SMP
+static inline void mutex_set_owner(struct mutex *lock)
+{
+	lock->owner = current;
+}
+
+static inline void mutex_clear_owner(struct mutex *lock)
+{
+	lock->owner = NULL;
+}
+#else
+static inline void mutex_set_owner(struct mutex *lock)
+{
+}
+
+static inline void mutex_clear_owner(struct mutex *lock)
+{
+}
+#endif
+
+
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0) /* 2bd2c92c and 41fcb9f2 */
+/*
+ * In order to avoid a stampede of mutex spinners from acquiring the mutex
+ * more or less simultaneously, the spinners need to acquire a MCS lock
+ * first before spinning on the owner field.
+ *
+ * We don't inline mspin_lock() so that perf can correctly account for the
+ * time spent in this lock function.
+ */
+struct mspin_node {
+	struct mspin_node *next ;
+	int		  locked;	/* 1 if lock acquired */
+};
+#define	MLOCK(mutex)	((struct mspin_node **)&((mutex)->spin_mlock))
+
+static noinline
+void mspin_lock(struct mspin_node **lock, struct mspin_node *node)
+{
+	struct mspin_node *prev;
+
+	/* Init node */
+	node->locked = 0;
+	node->next   = NULL;
+
+	prev = xchg(lock, node);
+	if (likely(prev == NULL)) {
+		/* Lock acquired */
+		node->locked = 1;
+		return;
+	}
+	ACCESS_ONCE(prev->next) = node;
+	smp_wmb();
+	/* Wait until the lock holder passes the lock down */
+	while (!ACCESS_ONCE(node->locked))
+		arch_mutex_cpu_relax();
+}
+
+static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node)
+{
+	struct mspin_node *next = ACCESS_ONCE(node->next);
+
+	if (likely(!next)) {
+		/*
+		 * Release the lock by setting it to NULL
+		 */
+		if (cmpxchg(lock, node, NULL) == node)
+			return;
+		/* Wait until the next pointer is set */
+		while (!(next = ACCESS_ONCE(node->next)))
+			arch_mutex_cpu_relax();
+	}
+	ACCESS_ONCE(next->locked) = 1;
+	smp_wmb();
+}
+
+/*
+ * Mutex spinning code migrated from kernel/sched/core.c
+ */
+
+static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
+{
+	if (lock->owner != owner)
+		return false;
+
+	/*
+	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
+	 * lock->owner still matches owner, if that fails, owner might
+	 * point to free()d memory, if it still matches, the rcu_read_lock()
+	 * ensures the memory stays valid.
+	 */
+	barrier();
+
+	return owner->on_cpu;
+}
+
+/*
+ * Look out! "owner" is an entirely speculative pointer
+ * access and not reliable.
+ */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0)
+static noinline
+#endif
+int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+	rcu_read_lock();
+	while (owner_running(lock, owner)) {
+		if (need_resched())
+			break;
+
+		arch_mutex_cpu_relax();
+	}
+	rcu_read_unlock();
+
+	/*
+	 * We break out the loop above on need_resched() and when the
+	 * owner changed, which is a sign for heavy contention. Return
+	 * success only when lock->owner is NULL.
+	 */
+	return lock->owner == NULL;
+}
+
+/*
+ * Initial check for entering the mutex spinning loop
+ */
+static inline int mutex_can_spin_on_owner(struct mutex *lock)
+{
+	int retval = 1;
+
+	rcu_read_lock();
+	if (lock->owner)
+		retval = lock->owner->on_cpu;
+	rcu_read_unlock();
+	/*
+	 * if lock->owner is not set, the mutex owner may have just acquired
+	 * it and not set the owner yet or the mutex has been released.
+	 */
+	return retval;
+}
+#else /* Backport 2bd2c92c: help keep backport_mutex_lock_common() clean */
+
+struct mspin_node {
+};
+#define	MLOCK(mutex) NULL
+
+static noinline
+void mspin_lock(struct mspin_node **lock, struct mspin_node *node)
+{
+}
+
+static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node)
+{
+}
+
+static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
+{
+}
+
+int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+}
+
+static inline int mutex_can_spin_on_owner(struct mutex *lock)
+{
+	return 1;
+}
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0) */
+#endif /* CONFIG_MUTEX_SPIN_ON_OWNER */
+
+/*
+ * Release the lock, slowpath:
+ */
+static inline void
+__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
+{
+	struct mutex *lock = container_of(lock_count, struct mutex, count);
+	unsigned long flags;
+
+	spin_lock_mutex(&lock->wait_lock, flags);
+	mutex_release(&lock->dep_map, nested, _RET_IP_);
+	/* debug_mutex_unlock(lock); */
+
+	/*
+	 * some architectures leave the lock unlocked in the fastpath failure
+	 * case, others need to leave it locked. In the later case we have to
+	 * unlock it here
+	 */
+	if (__mutex_slowpath_needs_to_unlock())
+		atomic_set(&lock->count, 1);
+
+	if (!list_empty(&lock->wait_list)) {
+		/* get the first entry from the wait-list: */
+		struct mutex_waiter *waiter =
+				list_entry(lock->wait_list.next,
+					   struct mutex_waiter, list);
+
+		/* debug_mutex_wake_waiter(lock, waiter); */
+
+		wake_up_process(waiter->task);
+	}
+
+	spin_unlock_mutex(&lock->wait_lock, flags);
+}
+
+/*
+ * Release the lock, slowpath:
+ */
+static __used noinline void
+__mutex_unlock_slowpath(atomic_t *lock_count)
+{
+	__mutex_unlock_common_slowpath(lock_count, 1);
+}
+
+/**
+ * ww_mutex_unlock - release the w/w mutex
+ * @lock: the mutex to be released
+ *
+ * Unlock a mutex that has been locked by this task previously with any of the
+ * ww_mutex_lock* functions (with or without an acquire context). It is
+ * forbidden to release the locks after releasing the acquire context.
+ *
+ * This function must not be used in interrupt context. Unlocking
+ * of a unlocked mutex is not allowed.
+ */
+void __sched ww_mutex_unlock(struct ww_mutex *lock)
+{
+	/*
+	 * The unlocking fastpath is the 0->1 transition from 'locked'
+	 * into 'unlocked' state:
+	 */
+	if (lock->ctx) {
+		if (lock->ctx->acquired > 0)
+			lock->ctx->acquired--;
+		lock->ctx = NULL;
+	}
+
+	__mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath);
+}
+EXPORT_SYMBOL_GPL(ww_mutex_unlock);
+
+static inline int __sched
+__mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+	struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx);
+
+	if (!hold_ctx)
+		return 0;
+
+	if (unlikely(ctx == hold_ctx))
+		return -EALREADY;
+
+	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
+	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
+		return -EDEADLK;
+	}
+
+	return 0;
+}
+
+static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
+						   struct ww_acquire_ctx *ww_ctx)
+{
+	ww_ctx->acquired++;
+}
+
+/*
+ * after acquiring lock with fastpath or when we lost out in contested
+ * slowpath, set ctx and wake up any waiters so they can recheck.
+ *
+ * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
+ * as the fastpath and opportunistic spinning are disabled in that case.
+ */
+static __always_inline void
+ww_mutex_set_context_fastpath(struct ww_mutex *lock,
+			       struct ww_acquire_ctx *ctx)
+{
+	unsigned long flags;
+	struct mutex_waiter *cur;
+
+	ww_mutex_lock_acquired(lock, ctx);
+
+	lock->ctx = ctx;
+
+	/*
+	 * The lock->ctx update should be visible on all cores before
+	 * the atomic read is done, otherwise contended waiters might be
+	 * missed. The contended waiters will either see ww_ctx == NULL
+	 * and keep spinning, or it will acquire wait_lock, add itself
+	 * to waiter list and sleep.
+	 */
+	smp_mb(); /* ^^^ */
+
+	/*
+	 * Check if lock is contended, if not there is nobody to wake up
+	 */
+	if (likely(atomic_read(&lock->base.count) == 0))
+		return;
+
+	/*
+	 * Uh oh, we raced in fastpath, wake up everyone in this case,
+	 * so they can see the new lock->ctx.
+	 */
+	spin_lock_mutex(&lock->base.wait_lock, flags);
+	list_for_each_entry(cur, &lock->base.wait_list, list) {
+		/* debug_mutex_wake_waiter(&lock->base, cur); */
+		wake_up_process(cur->task);
+	}
+	spin_unlock_mutex(&lock->base.wait_lock, flags);
+}
+
+/**
+ * backport_schedule_preempt_disabled - called with preemption disabled
+ *
+ * Backports c5491ea7. This is not exported so we leave it
+ * here as this is the only current core user on backports.
+ * Although available on >= 3.4 its only for in-kernel code so
+ * we provide our own.
+ *
+ * Returns with preemption disabled. Note: preempt_count must be 1
+ */
+static void __sched backport_schedule_preempt_disabled(void)
+{
+	preempt_enable_no_resched();
+	schedule();
+	preempt_disable();
+}
+
+/*
+ * Lock a mutex (possibly interruptible), slowpath:
+ */
+static __always_inline int __sched
+__backport_mutex_lock_common(struct mutex *lock, long state,
+			     unsigned int subclass,
+			     struct lockdep_map *nest_lock, unsigned long ip,
+			     struct ww_acquire_ctx *ww_ctx)
+{
+	struct task_struct *task = current;
+	struct mutex_waiter waiter;
+	unsigned long flags;
+	int ret;
+
+	preempt_disable();
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
+
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+	/*
+	 * Optimistic spinning.
+	 *
+	 * We try to spin for acquisition when we find that there are no
+	 * pending waiters and the lock owner is currently running on a
+	 * (different) CPU.
+	 *
+	 * The rationale is that if the lock owner is running, it is likely to
+	 * release the lock soon.
+	 *
+	 * Since this needs the lock owner, and this mutex implementation
+	 * doesn't track the owner atomically in the lock field, we need to
+	 * track it non-atomically.
+	 *
+	 * We can't do this for DEBUG_MUTEXES because that relies on wait_lock
+	 * to serialize everything.
+	 *
+	 * The mutex spinners are queued up using MCS lock so that only one
+	 * spinner can compete for the mutex. However, if mutex spinning isn't
+	 * going to happen, there is no point in going through the lock/unlock
+	 * overhead.
+	 */
+	if (!mutex_can_spin_on_owner(lock))
+		goto slowpath;
+
+	for (;;) {
+		struct task_struct *owner;
+		struct mspin_node  node;
+
+		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 */
+			if (ACCESS_ONCE(ww->ctx))
+				break;
+		}
+
+		/*
+		 * If there's an owner, wait for it to either
+		 * release the lock or go to sleep.
+		 */
+		mspin_lock(MLOCK(lock), &node);
+		owner = ACCESS_ONCE(lock->owner);
+		if (owner && !mutex_spin_on_owner(lock, owner)) {
+			mspin_unlock(MLOCK(lock), &node);
+			break;
+		}
+
+		if ((atomic_read(&lock->count) == 1) &&
+		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
+			lock_acquired(&lock->dep_map, ip);
+			if (!__builtin_constant_p(ww_ctx == NULL)) {
+				struct ww_mutex *ww;
+				ww = container_of(lock, struct ww_mutex, base);
+
+				ww_mutex_set_context_fastpath(ww, ww_ctx);
+			}
+
+			mutex_set_owner(lock);
+			mspin_unlock(MLOCK(lock), &node);
+			preempt_enable();
+			return 0;
+		}
+		mspin_unlock(MLOCK(lock), &node);
+
+		/*
+		 * When there's no owner, we might have preempted between the
+		 * owner acquiring the lock and setting the owner field. If
+		 * we're an RT task that will live-lock because we won't let
+		 * the owner complete.
+		 */
+		if (!owner && (need_resched() || rt_task(task)))
+			break;
+
+		/*
+		 * The cpu_relax() call is a compiler barrier which forces
+		 * everything in this loop to be re-loaded. We don't need
+		 * memory barriers as we'll eventually observe the right
+		 * values at the cost of a few extra spins.
+		 */
+		arch_mutex_cpu_relax();
+	}
+slowpath:
+#endif
+	spin_lock_mutex(&lock->wait_lock, flags);
+
+	/* We don't support DEBUG_MUTEXES on the backport */
+	/* debug_mutex_lock_common(lock, &waiter); */
+	/* debug_mutex_add_waiter(lock, &waiter, task_thread_info(task)); */
+
+	/* add waiting tasks to the end of the waitqueue (FIFO): */
+	list_add_tail(&waiter.list, &lock->wait_list);
+	waiter.task = task;
+
+	if (MUTEX_SHOW_NO_WAITER(lock) && (atomic_xchg(&lock->count, -1) == 1))
+		goto done;
+
+	lock_contended(&lock->dep_map, ip);
+
+	for (;;) {
+		/*
+		 * Lets try to take the lock again - this is needed even if
+		 * we get here for the first time (shortly after failing to
+		 * acquire the lock), to make sure that we get a wakeup once
+		 * it's unlocked. Later on, if we sleep, this is the
+		 * operation that gives us the lock. We xchg it to -1, so
+		 * that when we release the lock, we properly wake up the
+		 * other waiters:
+		 */
+		if (MUTEX_SHOW_NO_WAITER(lock) &&
+		   (atomic_xchg(&lock->count, -1) == 1))
+			break;
+
+		/*
+		 * got a signal? (This code gets eliminated in the
+		 * TASK_UNINTERRUPTIBLE case.)
+		 */
+		if (unlikely(signal_pending_state(state, task))) {
+			ret = -EINTR;
+			goto err;
+		}
+
+		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+			ret = __mutex_lock_check_stamp(lock, ww_ctx);
+			if (ret)
+				goto err;
+		}
+
+		__set_task_state(task, state);
+
+		/* didn't get the lock, go to sleep: */
+		spin_unlock_mutex(&lock->wait_lock, flags);
+		backport_schedule_preempt_disabled();
+		spin_lock_mutex(&lock->wait_lock, flags);
+	}
+
+done:
+	lock_acquired(&lock->dep_map, ip);
+	/* got the lock - rejoice! */
+	mutex_remove_waiter(lock, &waiter, current_thread_info());
+	mutex_set_owner(lock);
+
+	if (!__builtin_constant_p(ww_ctx == NULL)) {
+		struct ww_mutex *ww = container_of(lock,
+						      struct ww_mutex,
+						      base);
+		struct mutex_waiter *cur;
+
+		/*
+		 * This branch gets optimized out for the common case,
+		 * and is only important for ww_mutex_lock.
+		 */
+
+		ww_mutex_lock_acquired(ww, ww_ctx);
+		ww->ctx = ww_ctx;
+
+		/*
+		 * Give any possible sleeping processes the chance to wake up,
+		 * so they can recheck if they have to back off.
+		 */
+		list_for_each_entry(cur, &lock->wait_list, list) {
+			/* debug_mutex_wake_waiter(lock, cur); */
+			wake_up_process(cur->task);
+		}
+	}
+
+	/* set it to 0 if there are no waiters left: */
+	if (likely(list_empty(&lock->wait_list)))
+		atomic_set(&lock->count, 0);
+
+	spin_unlock_mutex(&lock->wait_lock, flags);
+
+	/* debug_mutex_free_waiter(&waiter); */
+	preempt_enable();
+
+	return 0;
+
+err:
+	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	spin_unlock_mutex(&lock->wait_lock, flags);
+	/* debug_mutex_free_waiter(&waiter); */
+	mutex_release(&lock->dep_map, 1, ip);
+	preempt_enable();
+	return ret;
+}
+
+static noinline int __sched
+__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	return __backport_mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
+					    NULL, _RET_IP_, ctx);
+}
+
+static noinline int __sched
+__ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
+					    struct ww_acquire_ctx *ctx)
+{
+	return __backport_mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
+					    NULL, _RET_IP_, ctx);
+}
+
+/**
+ * __mutex_fastpath_lock_retval - try to take the lock by moving the count
+ *				  from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ *
+ * For backporting purposes we can't use the older kernel's
+ * __mutex_fastpath_lock_retval() since upon failure of a fastpath
+ * lock we want to call our a failure routine with more than one argument, in
+ * this case the context for ww mutexes. Refer to commit a41b56ef the
+ * argument increase. It'd be painful to backport all asm code for the
+ * supported architectures so instead lets penalize the backport ww mutex
+ * fastpath lock with the not so efficient generic atomic_dec_return()
+ * implementation.
+ *
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
+ */
+static inline int
+__backport_mutex_fastpath_lock_retval(atomic_t *count)
+{
+	if (unlikely(atomic_dec_return(count) < 0))
+		return -1;
+	return 0;
+}
+
+int __sched
+__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __backport_mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__ww_mutex_lock);
+
+int __sched
+__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __backport_mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_interruptible_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC 7/9] backports: backport ww_mutex support
@ 2013-07-28  1:16   ` Luis R. Rodriguez
  0 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports-u79uwXL29TY76Z2rM5mHXA
  Cc: Luis R. Rodriguez, maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw,
	Daniel Vetter, Rob Clark, Peter Zijlstra,
	dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW,
	linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw,
	rostedt-nx8X9YLhiw1AfugRpC6u6w, daniel-/w4YWyX8dFk,
	Linus Torvalds, Andrew Morton, Thomas Gleixner

From: "Luis R. Rodriguez" <mcgrof-3uybbJdB1yH774rrrx3eTA@public.gmane.org>

This backports the kernel's wound/wait style locks from commit
040a0a371, using linux-stable v3.11-rc2 as the base for development.
Given the complexity of supporting debugging mutexes, this backport
implementation is simplified by only making the feature available
when DEBUG_MUTEXES and DEBUG_LOCK_ALLOC are both disabled. Given
that ww mutexes are required for DRM, this also means we must update
the kconfig for DRM so that DRM cannot be built when either of these
options is enabled. Support for DEBUG_MUTEXES and DEBUG_LOCK_ALLOC
can be added later by anyone daring.
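
To illustrate what this provides, below is a minimal sketch of the
wait/wound pattern a driver would use on top of the backported API;
demo_ww_class and demo_lock_pair are made-up names used only for
this example and are not part of the patch:

static DEFINE_WW_CLASS(demo_ww_class);

/* Lock two ww mutexes of the same class in arbitrary order. */
static void demo_lock_pair(struct ww_mutex *a, struct ww_mutex *b)
{
	struct ww_acquire_ctx ctx;

	ww_acquire_init(&ctx, &demo_ww_class);
	ww_mutex_lock(a, &ctx);		/* the first lock can never wound us */

	while (ww_mutex_lock(b, &ctx) == -EDEADLK) {
		/* wounded: back off, then wait on the contended lock */
		ww_mutex_unlock(a);
		ww_mutex_lock_slow(b, &ctx);
		swap(a, b);		/* b is now held, a is still pending */
	}
	ww_acquire_done(&ctx);

	/* ... touch the data protected by a and b ... */

	ww_mutex_unlock(a);
	ww_mutex_unlock(b);
	ww_acquire_fini(&ctx);
}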

Part of the ww mutex addition to the kernel required modifying
the fast path mutex locking scheme so that callers deal with the
slow path alternatives on their own (refer to a41b56ef). The
reason for this change was that the mutex fastpath implementation
assumed the slowpath alternative could only be passed one argument,
while ww mutexes need the slow path to be called with an acquire
context as well.

It'd be painful to backport all the asm for an optimized fastpath
implementation, so we penalize the backport ww mutex fast path
by using the generic atomic_dec_return() instead.
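
In other words, rather than having the asm fastpath call a
one-argument fail function behind our back, the backport checks the
fastpath result itself and, on failure, calls the slowpath with the
context. A rough sketch of the resulting shape, with made-up demo_*
names standing in for the real helpers added further down in this
patch:

/* stands in for __ww_mutex_lock_slowpath() added by this patch */
extern int demo_ww_lock_slowpath(struct ww_mutex *lock,
				 struct ww_acquire_ctx *ctx);

/* unoptimized generic fastpath: a 1 -> 0 transition means we own the lock */
static inline int demo_fastpath_lock_retval(atomic_t *count)
{
	return unlikely(atomic_dec_return(count) < 0) ? -1 : 0;
}

static int demo_ww_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
	if (likely(!demo_fastpath_lock_retval(&lock->base.count)))
		return 0;			/* fastpath won */
	/* on failure we are free to pass more than one argument */
	return demo_ww_lock_slowpath(lock, ctx);
}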

To backport our own clean mutex_lock_common() with the least
amount of changes against upstream, commits 2bd2c92c and 41fcb9f2
also needed to be backported. Commit 2bd2c92c added support for
queueing mutex spinners with an MCS lock; since this cannot be
backported for older kernels we provide empty inlines. Commit
41fcb9f2 just removed SCHED_FEAT_OWNER_SPIN as it was an early
hack; the only thing required to backport it was to provide an
alternative declaration for mutex_spin_on_owner(), as it was
declared non-inline for older kernels.

Finally, c5491ea7 required backporting schedule_preempt_disabled()
as well, but that just consisted of carrying over the original
implementation. Since it's not exported we need to reimplement
it to make it available to our internal core ww mutex port.

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 040a0a371
v3.11-rc1~147^2~5

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains a41b56ef
v3.11-rc1~147^2~6

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 2bd2c92c
v3.10-rc1~200^2~3

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains 41fcb9f2
v3.10-rc1~200^2~5

mcgrof@frijol ~/linux-stable (git::master)$ git describe --contains c5491ea7
v3.4-rc1~3^2~27

commit 040a0a37100563754bb1fee6ff6427420bcfa609
Author: Maarten Lankhorst <maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org>
Date:   Mon Jun 24 10:30:04 2013 +0200

    mutex: Add support for wound/wait style locks

    Wound/wait mutexes are used when other multiple lock
    acquisitions of a similar type can be done in an arbitrary
    order. The deadlock handling used here is called wait/wound in
    the RDBMS literature: The older tasks waits until it can acquire
    the contended lock. The younger tasks needs to back off and drop
    all the locks it is currently holding, i.e. the younger task is
    wounded.

    For full documentation please read Documentation/ww-mutex-design.txt.

    References: https://lwn.net/Articles/548909/
    Signed-off-by: Maarten Lankhorst <maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org>
    Acked-by: Daniel Vetter <daniel.vetter-/w4YWyX8dFk@public.gmane.org>
    Acked-by: Rob Clark <robdclark-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra-/NLkJaSkS4VmR6Xm/wNWPw@public.gmane.org>
    Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
    Cc: linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw@public.gmane.org
    Cc: rostedt-nx8X9YLhiw1AfugRpC6u6w@public.gmane.org
    Cc: daniel-/w4YWyX8dFk@public.gmane.org
    Cc: Linus Torvalds <torvalds-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
    Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
    Cc: Thomas Gleixner <tglx-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
    Link: http://lkml.kernel.org/r/51C8038C.9000106-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org
    Signed-off-by: Ingo Molnar <mingo-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>

commit a41b56efa70e060f650aeb54740aaf52044a1ead
Author: Maarten Lankhorst <maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org>
Date:   Thu Jun 20 13:31:05 2013 +0200

    arch: Make __mutex_fastpath_lock_retval return whether fastpath succeeded or not

    This will allow me to call functions that have multiple
    arguments if fastpath fails. This is required to support ticket
    mutexes, because they need to be able to pass an extra argument
    to the fail function.

    Originally I duplicated the functions, by adding
    __mutex_fastpath_lock_retval_arg. This ended up being just a
    duplication of the existing function, so a way to test if
    fastpath was called ended up being better.

    This also cleaned up the reservation mutex patch some by being
    able to call an atomic_set instead of atomic_xchg, and making it
    easier to detect if the wrong unlock function was previously
    used.

    Signed-off-by: Maarten Lankhorst <maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra-/NLkJaSkS4VmR6Xm/wNWPw@public.gmane.org>
    Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
    Cc: linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw@public.gmane.org
    Cc: robclark-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
    Cc: rostedt-nx8X9YLhiw1AfugRpC6u6w@public.gmane.org
    Cc: daniel-/w4YWyX8dFk@public.gmane.org
    Cc: Linus Torvalds <torvalds-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
    Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
    Cc: Thomas Gleixner <tglx-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
    Link: http://lkml.kernel.org/r/20130620113105.4001.83929.stgit@patser
    Signed-off-by: Ingo Molnar <mingo-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>

commit 2bd2c92cf07cc4a373bf316c75b78ac465fefd35
Author: Waiman Long <Waiman.Long-VXdhtT5mjnY@public.gmane.org>
Date:   Wed Apr 17 15:23:13 2013 -0400

    mutex: Queue mutex spinners with MCS lock to reduce cacheline contention

    <-- snip -->

commit 41fcb9f230bf773656d1768b73000ef720bf00c3
Author: Waiman Long <Waiman.Long-VXdhtT5mjnY@public.gmane.org>
Date:   Wed Apr 17 15:23:11 2013 -0400

    mutex: Move mutex spinning code from sched/core.c back to mutex.c

    <-- snip -->

commit c5491ea779793f977d282754db478157cc409d82
Author: Thomas Gleixner <tglx-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
Date:   Mon Mar 21 12:09:35 2011 +0100

    sched/rt: Add schedule_preempt_disabled()

    <-- snip -->

Cc: maarten.lankhorst-Z7WLFzj8eWMS+FvcfC7Uqw@public.gmane.org
Cc: Daniel Vetter <daniel.vetter-/w4YWyX8dFk@public.gmane.org>
Cc: Rob Clark <robdclark-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Peter Zijlstra <a.p.zijlstra-/NLkJaSkS4VmR6Xm/wNWPw@public.gmane.org>
Cc: dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
Cc: linaro-mm-sig-cunTk1MwBs8s++Sfvej+rw@public.gmane.org
Cc: rostedt-nx8X9YLhiw1AfugRpC6u6w@public.gmane.org
Cc: daniel-/w4YWyX8dFk@public.gmane.org
Cc: Linus Torvalds <torvalds-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
Cc: Andrew Morton <akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>
Cc: Thomas Gleixner <tglx-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
Signed-off-by: Luis R. Rodriguez <mcgrof-3uybbJdB1yH774rrrx3eTA@public.gmane.org>
---
 backport/backport-include/linux/ww_mutex.h |  333 ++++++++++++++
 backport/compat/Kconfig                    |   11 +
 backport/compat/Makefile                   |    1 +
 backport/compat/kernel/ww_mutex.c          |  667 ++++++++++++++++++++++++++++
 4 files changed, 1012 insertions(+)
 create mode 100644 backport/backport-include/linux/ww_mutex.h
 create mode 100644 backport/compat/kernel/ww_mutex.c

diff --git a/backport/backport-include/linux/ww_mutex.h b/backport/backport-include/linux/ww_mutex.h
new file mode 100644
index 0000000..0953939
--- /dev/null
+++ b/backport/backport-include/linux/ww_mutex.h
@@ -0,0 +1,333 @@
+#ifndef __BACKPORT_LINUX_WW_MUTEX_H
+#define __BACKPORT_LINUX_WW_MUTEX_H
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0)
+#include_next <linux/ww_mutex.h>
+#else
+#ifdef CPTCFG_BACKPORT_BUILD_WW_MUTEX
+/*
+ * Wound/Wait Mutexes: blocking mutual exclusion locks with deadlock avoidance
+ *
+ * Original mutex implementation started by Ingo Molnar:
+ *
+ *  Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
+ *
+ * Wound/wait implementation:
+ *  Copyright (C) 2013 Canonical Ltd.
+ *
+ * This file contains the main data structure and API definitions.
+ */
+
+#include <linux/mutex.h>
+
+struct ww_class {
+	atomic_long_t stamp;
+	struct lock_class_key acquire_key;
+	struct lock_class_key mutex_key;
+	const char *acquire_name;
+	const char *mutex_name;
+};
+
+struct ww_acquire_ctx {
+	struct task_struct *task;
+	unsigned long stamp;
+	unsigned acquired;
+};
+
+struct ww_mutex {
+	struct mutex base;
+	struct ww_acquire_ctx *ctx;
+};
+
+# define __WW_CLASS_MUTEX_INITIALIZER(lockname, ww_class)
+
+#define __WW_CLASS_INITIALIZER(ww_class) \
+		{ .stamp = ATOMIC_LONG_INIT(0) \
+		, .acquire_name = #ww_class "_acquire" \
+		, .mutex_name = #ww_class "_mutex" }
+
+#define __WW_MUTEX_INITIALIZER(lockname, class) \
+		{ .base = { \__MUTEX_INITIALIZER(lockname) } \
+		__WW_CLASS_MUTEX_INITIALIZER(lockname, class) }
+
+#define DEFINE_WW_CLASS(classname) \
+	struct ww_class classname = __WW_CLASS_INITIALIZER(classname)
+
+#define DEFINE_WW_MUTEX(mutexname, ww_class) \
+	struct ww_mutex mutexname = __WW_MUTEX_INITIALIZER(mutexname, ww_class)
+
+/**
+ * ww_mutex_init - initialize the w/w mutex
+ * @lock: the mutex to be initialized
+ * @ww_class: the w/w class the mutex should belong to
+ *
+ * Initialize the w/w mutex to unlocked state and associate it with the given
+ * class.
+ *
+ * It is not allowed to initialize an already locked mutex.
+ */
+#define ww_mutex_init LINUX_BACKPORT(ww_mutex_init)
+static inline void ww_mutex_init(struct ww_mutex *lock,
+				 struct ww_class *ww_class)
+{
+	__mutex_init(&lock->base, ww_class->mutex_name, &ww_class->mutex_key);
+	lock->ctx = NULL;
+}
+
+/**
+ * ww_acquire_init - initialize a w/w acquire context
+ * @ctx: w/w acquire context to initialize
+ * @ww_class: w/w class of the context
+ *
+ * Initializes a context to acquire multiple mutexes of the given w/w class.
+ *
+ * Context-based w/w mutex acquiring can be done in any order whatsoever within
+ * a given lock class. Deadlocks will be detected and handled with the
+ * wait/wound logic.
+ *
+ * Mixing of context-based w/w mutex acquiring and single w/w mutex locking can
+ * result in undetected deadlocks and is so forbidden. Mixing different contexts
+ * for the same w/w class when acquiring mutexes can also result in undetected
+ * deadlocks, and is hence also forbidden. Both types of abuse will be caught by
+ * enabling CONFIG_PROVE_LOCKING.
+ *
+ * Nesting of acquire contexts for _different_ w/w classes is possible, subject
+ * to the usual locking rules between different lock classes.
+ *
+ * An acquire context must be released with ww_acquire_fini by the same task
+ * before the memory is freed. It is recommended to allocate the context itself
+ * on the stack.
+ */
+#define ww_acquire_init LINUX_BACKPORT(ww_acquire_init)
+static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
+				   struct ww_class *ww_class)
+{
+	ctx->task = current;
+	ctx->stamp = atomic_long_inc_return(&ww_class->stamp);
+	ctx->acquired = 0;
+}
+
+/**
+ * ww_acquire_done - marks the end of the acquire phase
+ * @ctx: the acquire context
+ *
+ * Marks the end of the acquire phase, any further w/w mutex lock calls using
+ * this context are forbidden.
+ *
+ * Calling this function is optional, it is just useful to document w/w mutex
+ * code and clearly designated the acquire phase from actually using the locked
+ * data structures.
+ */
+#define ww_acquire_done LINUX_BACKPORT(ww_acquire_done)
+static inline void ww_acquire_done(struct ww_acquire_ctx *ctx)
+{
+}
+
+/**
+ * ww_acquire_fini - releases a w/w acquire context
+ * @ctx: the acquire context to free
+ *
+ * Releases a w/w acquire context. This must be called _after_ all acquired w/w
+ * mutexes have been released with ww_mutex_unlock.
+ */
+#define ww_acquire_fini LINUX_BACKPORT(ww_acquire_fini)
+static inline void ww_acquire_fini(struct ww_acquire_ctx *ctx)
+{
+}
+
+#define __ww_mutex_lock LINUX_BACKPORT(__ww_mutex_lock)
+extern int __must_check __ww_mutex_lock(struct ww_mutex *lock,
+					struct ww_acquire_ctx *ctx);
+#define __ww_mutex_lock_interruptible LINUX_BACKPORT(__ww_mutex_lock_interruptible)
+extern int __must_check __ww_mutex_lock_interruptible(struct ww_mutex *lock,
+						      struct ww_acquire_ctx *ctx);
+
+/**
+ * ww_mutex_lock - acquire the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context, or NULL to acquire only a single lock.
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately available this function
+ * will either sleep until it is (wait case). Or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow. Alternatively callers can opt to not acquire this
+ * lock and proceed with trying to acquire further w/w mutexes (e.g. when
+ * scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+#define ww_mutex_lock LINUX_BACKPORT(ww_mutex_lock)
+static inline int ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock(lock, ctx);
+
+	mutex_lock(&lock->base);
+	return 0;
+}
+
+/**
+ * ww_mutex_lock_interruptible - acquire the w/w mutex, interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Lock the w/w mutex exclusively for this task.
+ *
+ * Deadlocks within a given w/w class of locks are detected and handled with the
+ * wait/wound algorithm. If the lock isn't immediately available this function
+ * will either sleep until it is (wait case). Or it selects the current context
+ * for backing off by returning -EDEADLK (wound case). Trying to acquire the
+ * same lock with the same context twice is also detected and signalled by
+ * returning -EALREADY. Returns 0 if the mutex was successfully acquired. If a
+ * signal arrives while waiting for the lock then this function returns -EINTR.
+ *
+ * In the wound case the caller must release all currently held w/w mutexes for
+ * the given context and then wait for this contending lock to be available by
+ * calling ww_mutex_lock_slow_interruptible. Alternatively callers can opt to
+ * not acquire this lock and proceed with trying to acquire further w/w mutexes
+ * (e.g. when scanning through lru lists trying to free resources).
+ *
+ * The mutex must later on be released by the same task that
+ * acquired it. The task may not exit without first unlocking the mutex. Also,
+ * kernel memory where the mutex resides must not be freed with the mutex still
+ * locked. The mutex must first be initialized (or statically defined) before it
+ * can be locked. memset()-ing the mutex to 0 is not allowed. The mutex must be
+ * of the same w/w lock class as was used to initialize the acquire context.
+ *
+ * A mutex acquired with this function must be released with ww_mutex_unlock.
+ */
+#define ww_mutex_lock_interruptible LINUX_BACKPORT(ww_mutex_lock_interruptible)
+static inline int __must_check ww_mutex_lock_interruptible(struct ww_mutex *lock,
+							   struct ww_acquire_ctx *ctx)
+{
+	if (ctx)
+		return __ww_mutex_lock_interruptible(lock, ctx);
+	else
+		return mutex_lock_interruptible(&lock->base);
+}
+
+/**
+ * ww_mutex_lock_slow - slowpath acquiring of the w/w mutex
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the context held. It is forbidden to call this on anything else than the
+ * contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock directly. This function here is simply to help w/w mutex
+ * locking code readability by clearly denoting the slowpath.
+ */
+#define ww_mutex_lock_slow LINUX_BACKPORT(ww_mutex_lock_slow)
+static inline void
+ww_mutex_lock_slow(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+	ret = ww_mutex_lock(lock, ctx);
+	(void)ret;
+}
+
+/**
+ * ww_mutex_lock_slow_interruptible - slowpath acquiring of the w/w mutex, interruptible
+ * @lock: the mutex to be acquired
+ * @ctx: w/w acquire context
+ *
+ * Acquires a w/w mutex with the given context after a wound case. This function
+ * will sleep until the lock becomes available and returns 0 when the lock has
+ * been acquired. If a signal arrives while waiting for the lock then this
+ * function returns -EINTR.
+ *
+ * The caller must have released all w/w mutexes already acquired with the
+ * context and then call this function on the contended lock.
+ *
+ * Afterwards the caller may continue to (re)acquire the other w/w mutexes it
+ * needs with ww_mutex_lock. Note that the -EALREADY return code from
+ * ww_mutex_lock can be used to avoid locking this contended mutex twice.
+ *
+ * It is forbidden to call this function with any other w/w mutexes associated
+ * with the given context held. It is forbidden to call this on anything else
+ * than the contending mutex.
+ *
+ * Note that the slowpath lock acquiring can also be done by calling
+ * ww_mutex_lock_interruptible directly. This function here is simply to help
+ * w/w mutex locking code readability by clearly denoting the slowpath.
+ */
+#define ww_mutex_lock_slow_interruptible LINUX_BACKPORT(ww_mutex_lock_slow_interruptible)
+static inline int __must_check
+ww_mutex_lock_slow_interruptible(struct ww_mutex *lock,
+				 struct ww_acquire_ctx *ctx)
+{
+	return ww_mutex_lock_interruptible(lock, ctx);
+}
+
+#define ww_mutex_unlock LINUX_BACKPORT(ww_mutex_unlock)
+extern void ww_mutex_unlock(struct ww_mutex *lock);
+
+/**
+ * ww_mutex_trylock - tries to acquire the w/w mutex without acquire context
+ * @lock: mutex to lock
+ *
+ * Trylocks a mutex without acquire context, so no deadlock detection is
+ * possible. Returns 1 if the mutex has been acquired successfully, 0 otherwise.
+ */
+#define ww_mutex_trylock LINUX_BACKPORT(ww_mutex_trylock)
+static inline int __must_check ww_mutex_trylock(struct ww_mutex *lock)
+{
+	return mutex_trylock(&lock->base);
+}
+
+/***
+ * ww_mutex_destroy - mark a w/w mutex unusable
+ * @lock: the mutex to be destroyed
+ *
+ * This function marks the mutex uninitialized, and any subsequent
+ * use of the mutex is forbidden. The mutex must not be locked when
+ * this function is called.
+ */
+#define ww_mutex_destroy LINUX_BACKPORT(ww_mutex_destroy)
+static inline void ww_mutex_destroy(struct ww_mutex *lock)
+{
+	mutex_destroy(&lock->base);
+}
+
+/**
+ * ww_mutex_is_locked - is the w/w mutex locked
+ * @lock: the mutex to be queried
+ *
+ * Returns 1 if the mutex is locked, 0 if unlocked.
+ */
+#define ww_mutex_is_locked LINUX_BACKPORT(ww_mutex_is_locked)
+static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
+{
+	return mutex_is_locked(&lock->base);
+}
+
+#endif /* CPTCFG_BACKPORT_BUILD_WW_MUTEX */
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0) */
+#endif /* __BACKPORT_LINUX_WW_MUTEX_H */
diff --git a/backport/compat/Kconfig b/backport/compat/Kconfig
index e2f0cdd..f3c1ab3 100644
--- a/backport/compat/Kconfig
+++ b/backport/compat/Kconfig
@@ -185,6 +185,17 @@ config BACKPORT_LEDS_CLASS
 config BACKPORT_LEDS_TRIGGERS
 	bool
 
+config BACKPORT_BUILD_WW_MUTEX
+	bool
+	# Build only if on kernels < 3.11
+	# For now only DRM drivers use ww mutexes.
+	depends on DRM && BACKPORT_KERNEL_3_11
+	default y if BACKPORT_USERSEL_BUILD_ALL
+	# Building this with DEBUG_MUTEXES or DEBUG_LOCK_ALLOC enabled
+	# is probably a bad idea given we ripped those debug paths out.
+	depends on !DEBUG_MUTEXES
+	depends on !DEBUG_LOCK_ALLOC
+
 config BACKPORT_BUILD_RADIX_HELPERS
 	bool
 	# You have selected to build backported DRM drivers
diff --git a/backport/compat/Makefile b/backport/compat/Makefile
index 252290e..fec01c4 100644
--- a/backport/compat/Makefile
+++ b/backport/compat/Makefile
@@ -41,3 +41,4 @@ compat-$(CPTCFG_BACKPORT_BUILD_KFIFO) += kfifo.o
 compat-$(CPTCFG_BACKPORT_BUILD_GENERIC_ATOMIC64) += compat_atomic.o
 compat-$(CPTCFG_BACKPORT_BUILD_DMA_SHARED_HELPERS) += dma-shared-helpers.o
 compat-$(CPTCFG_BACKPORT_BUILD_RADIX_HELPERS) += lib-radix-tree-helpers.o
+compat-$(CPTCFG_BACKPORT_BUILD_WW_MUTEX) += kernel/ww_mutex.o
diff --git a/backport/compat/kernel/ww_mutex.c b/backport/compat/kernel/ww_mutex.c
new file mode 100644
index 0000000..257c2a4
--- /dev/null
+++ b/backport/compat/kernel/ww_mutex.c
@@ -0,0 +1,667 @@
+/*
+ * Copyright (c) 2013  Luis R. Rodriguez <mcgrof-3uybbJdB1yH774rrrx3eTA@public.gmane.org>
+ *
+ * Backport ww mutex for older kernels. This is not supported when
+ * DEBUG_MUTEXES or DEBUG_LOCK_ALLOC is enabled.
+ *
+ * Taken from: kernel/mutex.c - via linux-stable v3.11-rc2
+ *
+ * Mutexes: blocking mutual exclusion locks
+ *
+ * Started by Ingo Molnar:
+ *
+ *  Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
+ *
+ * Many thanks to Arjan van de Ven, Thomas Gleixner, Steven Rostedt and
+ * David Howells for suggestions and improvements.
+ *
+ *  - Adaptive spinning for mutexes by Peter Zijlstra. (Ported to mainline
+ *    from the -rt tree, where it was originally implemented for rtmutexes
+ *    by Steven Rostedt, based on work by Gregory Haskins, Peter Morreale
+ *    and Sven Dietrich.
+ *
+ * Also see Documentation/mutex-design.txt.
+ */
+
+#include <linux/mutex.h>
+#include <linux/ww_mutex.h>
+#include <asm/mutex.h>
+#include <linux/sched.h>
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,9,0)
+#include <linux/sched/rt.h>
+#endif
+#include <linux/export.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/debug_locks.h>
+#include <linux/version.h>
+
+/*
+ * A negative mutex count indicates that waiters are sleeping waiting for the
+ * mutex.
+ */
+#define	MUTEX_SHOW_NO_WAITER(mutex)	(atomic_read(&(mutex)->count) >= 0)
+
+#define spin_lock_mutex(lock, flags) \
+	do { spin_lock(lock); (void)(flags); } while (0)
+#define spin_unlock_mutex(lock, flags) \
+	do { spin_unlock(lock); (void)(flags); } while (0)
+#define mutex_remove_waiter(lock, waiter, ti) \
+	__list_del((waiter)->list.prev, (waiter)->list.next)
+
+#ifdef CONFIG_SMP
+static inline void mutex_set_owner(struct mutex *lock)
+{
+	lock->owner = current;
+}
+
+static inline void mutex_clear_owner(struct mutex *lock)
+{
+	lock->owner = NULL;
+}
+#else
+static inline void mutex_set_owner(struct mutex *lock)
+{
+}
+
+static inline void mutex_clear_owner(struct mutex *lock)
+{
+}
+#endif
+
+
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0) /* 2bd2c92c and 41fcb9f2 */
+/*
+ * In order to avoid a stampede of mutex spinners from acquiring the mutex
+ * more or less simultaneously, the spinners need to acquire a MCS lock
+ * first before spinning on the owner field.
+ *
+ * We don't inline mspin_lock() so that perf can correctly account for the
+ * time spent in this lock function.
+ */
+struct mspin_node {
+	struct mspin_node *next;
+	int		  locked;	/* 1 if lock acquired */
+};
+#define	MLOCK(mutex)	((struct mspin_node **)&((mutex)->spin_mlock))
+
+static noinline
+void mspin_lock(struct mspin_node **lock, struct mspin_node *node)
+{
+	struct mspin_node *prev;
+
+	/* Init node */
+	node->locked = 0;
+	node->next   = NULL;
+
+	prev = xchg(lock, node);
+	if (likely(prev == NULL)) {
+		/* Lock acquired */
+		node->locked = 1;
+		return;
+	}
+	ACCESS_ONCE(prev->next) = node;
+	smp_wmb();
+	/* Wait until the lock holder passes the lock down */
+	while (!ACCESS_ONCE(node->locked))
+		arch_mutex_cpu_relax();
+}
+
+static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node)
+{
+	struct mspin_node *next = ACCESS_ONCE(node->next);
+
+	if (likely(!next)) {
+		/*
+		 * Release the lock by setting it to NULL
+		 */
+		if (cmpxchg(lock, node, NULL) == node)
+			return;
+		/* Wait until the next pointer is set */
+		while (!(next = ACCESS_ONCE(node->next)))
+			arch_mutex_cpu_relax();
+	}
+	ACCESS_ONCE(next->locked) = 1;
+	smp_wmb();
+}
+
+/*
+ * Mutex spinning code migrated from kernel/sched/core.c
+ */
+
+static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
+{
+	if (lock->owner != owner)
+		return false;
+
+	/*
+	 * Ensure we emit the owner->on_cpu, dereference _after_ checking
+	 * lock->owner still matches owner, if that fails, owner might
+	 * point to free()d memory, if it still matches, the rcu_read_lock()
+	 * ensures the memory stays valid.
+	 */
+	barrier();
+
+	return owner->on_cpu;
+}
+
+/*
+ * Look out! "owner" is an entirely speculative pointer
+ * access and not reliable.
+ */
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0)
+static noinline
+#endif
+int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+	rcu_read_lock();
+	while (owner_running(lock, owner)) {
+		if (need_resched())
+			break;
+
+		arch_mutex_cpu_relax();
+	}
+	rcu_read_unlock();
+
+	/*
+	 * We break out the loop above on need_resched() and when the
+	 * owner changed, which is a sign for heavy contention. Return
+	 * success only when lock->owner is NULL.
+	 */
+	return lock->owner == NULL;
+}
+
+/*
+ * Initial check for entering the mutex spinning loop
+ */
+static inline int mutex_can_spin_on_owner(struct mutex *lock)
+{
+	int retval = 1;
+
+	rcu_read_lock();
+	if (lock->owner)
+		retval = lock->owner->on_cpu;
+	rcu_read_unlock();
+	/*
+	 * if lock->owner is not set, the mutex owner may have just acquired
+	 * it and not set the owner yet or the mutex has been released.
+	 */
+	return retval;
+}
+#else /* Backport 2bd2c92c: help keep backport_mutex_lock_common() clean */
+
+struct mspin_node {
+};
+#define	MLOCK(mutex) NULL
+
+static noinline
+void mspin_lock(struct mspin_node **lock, struct mspin_node *node)
+{
+}
+
+static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node)
+{
+}
+
+static inline bool owner_running(struct mutex *lock, struct task_struct *owner)
+{
+	/* no owner spinning on older kernels */
+	return false;
+}
+
+int mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+	/* tell the caller to give up spinning and take the slowpath */
+	return 0;
+}
+
+static inline int mutex_can_spin_on_owner(struct mutex *lock)
+{
+	return 1;
+}
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3,10,0) */
+#endif /* CONFIG_MUTEX_SPIN_ON_OWNER */
+
+/*
+ * Release the lock, slowpath:
+ */
+static inline void
+__mutex_unlock_common_slowpath(atomic_t *lock_count, int nested)
+{
+	struct mutex *lock = container_of(lock_count, struct mutex, count);
+	unsigned long flags;
+
+	spin_lock_mutex(&lock->wait_lock, flags);
+	mutex_release(&lock->dep_map, nested, _RET_IP_);
+	/* debug_mutex_unlock(lock); */
+
+	/*
+	 * some architectures leave the lock unlocked in the fastpath failure
+	 * case, others need to leave it locked. In the later case we have to
+	 * unlock it here
+	 */
+	if (__mutex_slowpath_needs_to_unlock())
+		atomic_set(&lock->count, 1);
+
+	if (!list_empty(&lock->wait_list)) {
+		/* get the first entry from the wait-list: */
+		struct mutex_waiter *waiter =
+				list_entry(lock->wait_list.next,
+					   struct mutex_waiter, list);
+
+		/* debug_mutex_wake_waiter(lock, waiter); */
+
+		wake_up_process(waiter->task);
+	}
+
+	spin_unlock_mutex(&lock->wait_lock, flags);
+}
+
+/*
+ * Release the lock, slowpath:
+ */
+static __used noinline void
+__mutex_unlock_slowpath(atomic_t *lock_count)
+{
+	__mutex_unlock_common_slowpath(lock_count, 1);
+}
+
+/**
+ * ww_mutex_unlock - release the w/w mutex
+ * @lock: the mutex to be released
+ *
+ * Unlock a mutex that has been locked by this task previously with any of the
+ * ww_mutex_lock* functions (with or without an acquire context). It is
+ * forbidden to release the locks after releasing the acquire context.
+ *
+ * This function must not be used in interrupt context. Unlocking
+ * of an unlocked mutex is not allowed.
+ */
+void __sched ww_mutex_unlock(struct ww_mutex *lock)
+{
+	/*
+	 * The unlocking fastpath is the 0->1 transition from 'locked'
+	 * into 'unlocked' state:
+	 */
+	if (lock->ctx) {
+		if (lock->ctx->acquired > 0)
+			lock->ctx->acquired--;
+		lock->ctx = NULL;
+	}
+
+	__mutex_fastpath_unlock(&lock->base.count, __mutex_unlock_slowpath);
+}
+EXPORT_SYMBOL_GPL(ww_mutex_unlock);
+
+static inline int __sched
+__mutex_lock_check_stamp(struct mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
+	struct ww_acquire_ctx *hold_ctx = ACCESS_ONCE(ww->ctx);
+
+	if (!hold_ctx)
+		return 0;
+
+	if (unlikely(ctx == hold_ctx))
+		return -EALREADY;
+
+	if (ctx->stamp - hold_ctx->stamp <= LONG_MAX &&
+	    (ctx->stamp != hold_ctx->stamp || ctx > hold_ctx)) {
+		return -EDEADLK;
+	}
+
+	return 0;
+}
+
+static __always_inline void ww_mutex_lock_acquired(struct ww_mutex *ww,
+						   struct ww_acquire_ctx *ww_ctx)
+{
+	ww_ctx->acquired++;
+}
+
+/*
+ * after acquiring lock with fastpath or when we lost out in contested
+ * slowpath, set ctx and wake up any waiters so they can recheck.
+ *
+ * This function is never called when CONFIG_DEBUG_LOCK_ALLOC is set,
+ * as the fastpath and opportunistic spinning are disabled in that case.
+ */
+static __always_inline void
+ww_mutex_set_context_fastpath(struct ww_mutex *lock,
+			       struct ww_acquire_ctx *ctx)
+{
+	unsigned long flags;
+	struct mutex_waiter *cur;
+
+	ww_mutex_lock_acquired(lock, ctx);
+
+	lock->ctx = ctx;
+
+	/*
+	 * The lock->ctx update should be visible on all cores before
+	 * the atomic read is done, otherwise contended waiters might be
+	 * missed. The contended waiters will either see ww_ctx == NULL
+	 * and keep spinning, or it will acquire wait_lock, add itself
+	 * to waiter list and sleep.
+	 */
+	smp_mb(); /* ^^^ */
+
+	/*
+	 * Check if lock is contended, if not there is nobody to wake up
+	 */
+	if (likely(atomic_read(&lock->base.count) == 0))
+		return;
+
+	/*
+	 * Uh oh, we raced in fastpath, wake up everyone in this case,
+	 * so they can see the new lock->ctx.
+	 */
+	spin_lock_mutex(&lock->base.wait_lock, flags);
+	list_for_each_entry(cur, &lock->base.wait_list, list) {
+		/* debug_mutex_wake_waiter(&lock->base, cur); */
+		wake_up_process(cur->task);
+	}
+	spin_unlock_mutex(&lock->base.wait_lock, flags);
+}
+
+/**
+ * backport_schedule_preempt_disabled - called with preemption disabled
+ *
+ * Backports c5491ea7. This is not exported so we leave it
+ * here as this is the only current core user on backports.
+ * Although available on >= 3.4 it's only for in-kernel code so
+ * we provide our own.
+ *
+ * Returns with preemption disabled. Note: preempt_count must be 1
+ */
+static void __sched backport_schedule_preempt_disabled(void)
+{
+	preempt_enable_no_resched();
+	schedule();
+	preempt_disable();
+}
+
+/*
+ * Lock a mutex (possibly interruptible), slowpath:
+ */
+static __always_inline int __sched
+__backport_mutex_lock_common(struct mutex *lock, long state,
+			     unsigned int subclass,
+			     struct lockdep_map *nest_lock, unsigned long ip,
+			     struct ww_acquire_ctx *ww_ctx)
+{
+	struct task_struct *task = current;
+	struct mutex_waiter waiter;
+	unsigned long flags;
+	int ret;
+
+	preempt_disable();
+	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
+
+#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
+	/*
+	 * Optimistic spinning.
+	 *
+	 * We try to spin for acquisition when we find that there are no
+	 * pending waiters and the lock owner is currently running on a
+	 * (different) CPU.
+	 *
+	 * The rationale is that if the lock owner is running, it is likely to
+	 * release the lock soon.
+	 *
+	 * Since this needs the lock owner, and this mutex implementation
+	 * doesn't track the owner atomically in the lock field, we need to
+	 * track it non-atomically.
+	 *
+	 * We can't do this for DEBUG_MUTEXES because that relies on wait_lock
+	 * to serialize everything.
+	 *
+	 * The mutex spinners are queued up using MCS lock so that only one
+	 * spinner can compete for the mutex. However, if mutex spinning isn't
+	 * going to happen, there is no point in going through the lock/unlock
+	 * overhead.
+	 */
+	if (!mutex_can_spin_on_owner(lock))
+		goto slowpath;
+
+	for (;;) {
+		struct task_struct *owner;
+		struct mspin_node  node;
+
+		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+			struct ww_mutex *ww;
+
+			ww = container_of(lock, struct ww_mutex, base);
+			/*
+			 * If ww->ctx is set the contents are undefined, only
+			 * by acquiring wait_lock there is a guarantee that
+			 * they are not invalid when reading.
+			 *
+			 * As such, when deadlock detection needs to be
+			 * performed the optimistic spinning cannot be done.
+			 */
+			if (ACCESS_ONCE(ww->ctx))
+				break;
+		}
+
+		/*
+		 * If there's an owner, wait for it to either
+		 * release the lock or go to sleep.
+		 */
+		mspin_lock(MLOCK(lock), &node);
+		owner = ACCESS_ONCE(lock->owner);
+		if (owner && !mutex_spin_on_owner(lock, owner)) {
+			mspin_unlock(MLOCK(lock), &node);
+			break;
+		}
+
+		if ((atomic_read(&lock->count) == 1) &&
+		    (atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
+			lock_acquired(&lock->dep_map, ip);
+			if (!__builtin_constant_p(ww_ctx == NULL)) {
+				struct ww_mutex *ww;
+				ww = container_of(lock, struct ww_mutex, base);
+
+				ww_mutex_set_context_fastpath(ww, ww_ctx);
+			}
+
+			mutex_set_owner(lock);
+			mspin_unlock(MLOCK(lock), &node);
+			preempt_enable();
+			return 0;
+		}
+		mspin_unlock(MLOCK(lock), &node);
+
+		/*
+		 * When there's no owner, we might have preempted between the
+		 * owner acquiring the lock and setting the owner field. If
+		 * we're an RT task that will live-lock because we won't let
+		 * the owner complete.
+		 */
+		if (!owner && (need_resched() || rt_task(task)))
+			break;
+
+		/*
+		 * The cpu_relax() call is a compiler barrier which forces
+		 * everything in this loop to be re-loaded. We don't need
+		 * memory barriers as we'll eventually observe the right
+		 * values at the cost of a few extra spins.
+		 */
+		arch_mutex_cpu_relax();
+	}
+slowpath:
+#endif
+	spin_lock_mutex(&lock->wait_lock, flags);
+
+	/* We don't support DEBUG_MUTEXES on the backport */
+	/* debug_mutex_lock_common(lock, &waiter); */
+	/* debug_mutex_add_waiter(lock, &waiter, task_thread_info(task)); */
+
+	/* add waiting tasks to the end of the waitqueue (FIFO): */
+	list_add_tail(&waiter.list, &lock->wait_list);
+	waiter.task = task;
+
+	if (MUTEX_SHOW_NO_WAITER(lock) && (atomic_xchg(&lock->count, -1) == 1))
+		goto done;
+
+	lock_contended(&lock->dep_map, ip);
+
+	for (;;) {
+		/*
+		 * Lets try to take the lock again - this is needed even if
+		 * we get here for the first time (shortly after failing to
+		 * acquire the lock), to make sure that we get a wakeup once
+		 * it's unlocked. Later on, if we sleep, this is the
+		 * operation that gives us the lock. We xchg it to -1, so
+		 * that when we release the lock, we properly wake up the
+		 * other waiters:
+		 */
+		if (MUTEX_SHOW_NO_WAITER(lock) &&
+		   (atomic_xchg(&lock->count, -1) == 1))
+			break;
+
+		/*
+		 * got a signal? (This code gets eliminated in the
+		 * TASK_UNINTERRUPTIBLE case.)
+		 */
+		if (unlikely(signal_pending_state(state, task))) {
+			ret = -EINTR;
+			goto err;
+		}
+
+		if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+			ret = __mutex_lock_check_stamp(lock, ww_ctx);
+			if (ret)
+				goto err;
+		}
+
+		__set_task_state(task, state);
+
+		/* didn't get the lock, go to sleep: */
+		spin_unlock_mutex(&lock->wait_lock, flags);
+		backport_schedule_preempt_disabled();
+		spin_lock_mutex(&lock->wait_lock, flags);
+	}
+
+done:
+	lock_acquired(&lock->dep_map, ip);
+	/* got the lock - rejoice! */
+	mutex_remove_waiter(lock, &waiter, current_thread_info());
+	mutex_set_owner(lock);
+
+	if (!__builtin_constant_p(ww_ctx == NULL)) {
+		struct ww_mutex *ww = container_of(lock,
+						      struct ww_mutex,
+						      base);
+		struct mutex_waiter *cur;
+
+		/*
+		 * This branch gets optimized out for the common case,
+		 * and is only important for ww_mutex_lock.
+		 */
+
+		ww_mutex_lock_acquired(ww, ww_ctx);
+		ww->ctx = ww_ctx;
+
+		/*
+		 * Give any possible sleeping processes the chance to wake up,
+		 * so they can recheck if they have to back off.
+		 */
+		list_for_each_entry(cur, &lock->wait_list, list) {
+			/* debug_mutex_wake_waiter(lock, cur); */
+			wake_up_process(cur->task);
+		}
+	}
+
+	/* set it to 0 if there are no waiters left: */
+	if (likely(list_empty(&lock->wait_list)))
+		atomic_set(&lock->count, 0);
+
+	spin_unlock_mutex(&lock->wait_lock, flags);
+
+	/* debug_mutex_free_waiter(&waiter); */
+	preempt_enable();
+
+	return 0;
+
+err:
+	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	spin_unlock_mutex(&lock->wait_lock, flags);
+	/* debug_mutex_free_waiter(&waiter); */
+	mutex_release(&lock->dep_map, 1, ip);
+	preempt_enable();
+	return ret;
+}
+
+static noinline int __sched
+__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	return __backport_mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
+					    NULL, _RET_IP_, ctx);
+}
+
+static noinline int __sched
+__ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
+					    struct ww_acquire_ctx *ctx)
+{
+	return __backport_mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
+					    NULL, _RET_IP_, ctx);
+}
+
+/**
+ * __mutex_fastpath_lock_retval - try to take the lock by moving the count
+ *				  from 1 to a 0 value
+ * @count: pointer of type atomic_t
+ *
+ * For backporting purposes we can't use the older kernel's
+ * __mutex_fastpath_lock_retval() since upon failure of a fastpath
+ * lock we want to call our failure routine with more than one argument, in
+ * this case the context for ww mutexes. Refer to commit a41b56ef for the
+ * argument increase. It'd be painful to backport all the asm code for the
+ * supported architectures, so instead let's penalize the backport ww mutex
+ * fastpath lock with the not so efficient generic atomic_dec_return()
+ * implementation.
+ *
+ * Change the count from 1 to a value lower than 1. This function returns 0
+ * if the fastpath succeeds, or -1 otherwise.
+ */
+static inline int
+__backport_mutex_fastpath_lock_retval(atomic_t *count)
+{
+	if (unlikely(atomic_dec_return(count) < 0))
+		return -1;
+	return 0;
+}
+
+int __sched
+__ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __backport_mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__ww_mutex_lock);
+
+int __sched
+__ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+{
+	int ret;
+
+	might_sleep();
+
+	ret = __backport_mutex_fastpath_lock_retval(&lock->base.count);
+
+	if (likely(!ret)) {
+		ww_mutex_set_context_fastpath(lock, ctx);
+		mutex_set_owner(&lock->base);
+	} else
+		ret = __ww_mutex_lock_interruptible_slowpath(lock, ctx);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__ww_mutex_lock_interruptible);
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC 8/9] backports: backport cross-device reservation support
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
                   ` (6 preceding siblings ...)
  2013-07-28  1:16   ` Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  2013-07-28  1:16 ` [RFC 9/9] backports: refresh patches for next-20130703 Luis R. Rodriguez
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez, maarten.lankhorst, jglisse, airlied

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

This backports cross-device reservation support.
Given that this feature is built around
CONFIG_DMA_SHARED_BUFFER, and that some older kernels
will have DMA_SHARED_BUFFER without cross-device reservation
support, we can't use the c-file and h-file backports Kconfig
trick to automatically backport this feature from the
target git tree.
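
As a rough illustration of how this is meant to be consumed (the
demo_buffer structure and demo_* helpers below are made up and not
part of this patch), a driver embeds the reservation object and
locks it through its ww_mutex with the shared reservation_ww_class:

struct demo_buffer {
	struct reservation_object resv;
	/* ... backing pages, dma-buf pointer, etc. ... */
};

static void demo_buffer_init(struct demo_buffer *buf)
{
	reservation_object_init(&buf->resv);
}

static int demo_buffer_reserve(struct demo_buffer *buf,
			       struct ww_acquire_ctx *ctx)
{
	/* reservation_ww_class is shared across all devices */
	return ww_mutex_lock(&buf->resv.lock, ctx);
}

static void demo_buffer_unreserve(struct demo_buffer *buf)
{
	ww_mutex_unlock(&buf->resv.lock);
}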

commit 786d7257e537da0674c02e16e3b30a44665d1cee
Author: Maarten Lankhorst <m.b.lankhorst@gmail.com>
Date:   Thu Jun 27 13:48:16 2013 +0200

    reservation: cross-device reservation support, v4

    This adds support for a generic reservations framework that can be
    hooked up to ttm and dma-buf and allows easy sharing of reservations
    across devices.

    The idea is that a dma-buf and ttm object both will get a pointer
    to a struct reservation_object, which has to be reserved before
    anything is done with the contents of the dma-buf.

    Changes since v1:
     - Fix locking issue in ticket_reserve, which could cause
       mutex_unlock
       to be called too many times.
    Changes since v2:
     - All fence related calls and members have been taken out for now,
       what's left is the bare minimum to be useful for ttm locking conversion.
    Changes since v3:
     - Removed helper functions too. The documentation has an example
       implementation for locking. With the move to ww_mutex there is no
       need to have much logic any more.

    Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
    Reviewed-by: Jerome Glisse <jglisse@redhat.com>
    Signed-off-by: Dave Airlie <airlied@redhat.com>

Cc: maarten.lankhorst@canonical.com
Cc: jglisse@redhat.com
Cc: airlied@redhat.com
Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 backport/backport-include/linux/reservation.h |   70 +++++++++++++++++++++++++
 backport/compat/Kconfig                       |    7 +++
 backport/compat/Makefile                      |    1 +
 backport/compat/drivers-base-reservation.c    |   39 ++++++++++++++
 4 files changed, 117 insertions(+)
 create mode 100644 backport/backport-include/linux/reservation.h
 create mode 100644 backport/compat/drivers-base-reservation.c

diff --git a/backport/backport-include/linux/reservation.h b/backport/backport-include/linux/reservation.h
new file mode 100644
index 0000000..ff79ae8
--- /dev/null
+++ b/backport/backport-include/linux/reservation.h
@@ -0,0 +1,70 @@
+#ifndef _BACKPORT_LINUX_RESERVATION_H
+#define _BACKPORT_LINUX_RESERVATION_H
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0)
+#include_next <linux/reservation.h>
+#else
+#ifdef CPTCFG_BACKPORT_BUILD_CROSS_RESERVATION
+/*
+ * Header file for reservations for dma-buf and ttm
+ *
+ * Copyright(C) 2011 Linaro Limited. All rights reserved.
+ * Copyright (C) 2012-2013 Canonical Ltd
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Authors:
+ * Rob Clark <rob.clark@linaro.org>
+ * Maarten Lankhorst <maarten.lankhorst@canonical.com>
+ * Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ *
+ * Based on bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <linux/ww_mutex.h>
+
+extern struct ww_class reservation_ww_class;
+
+struct reservation_object {
+	struct ww_mutex lock;
+};
+
+#define reservation_object_init LINUX_BACKPORT(reservation_object_init)
+static inline void
+reservation_object_init(struct reservation_object *obj)
+{
+	ww_mutex_init(&obj->lock, &reservation_ww_class);
+}
+
+#define reservation_object_fini LINUX_BACKPORT(reservation_object_fini)
+static inline void
+reservation_object_fini(struct reservation_object *obj)
+{
+	ww_mutex_destroy(&obj->lock);
+}
+
+#endif /* CPTCFG_BACKPORT_BUILD_CROSS_RESERVATION */
+#endif /* LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0) */
+#endif /* _BACKPORT_LINUX_RESERVATION_H */
diff --git a/backport/compat/Kconfig b/backport/compat/Kconfig
index f3c1ab3..8377013 100644
--- a/backport/compat/Kconfig
+++ b/backport/compat/Kconfig
@@ -139,6 +139,13 @@ config BACKPORT_BUILD_DMA_SHARED_BUFFER
 	#h-file linux/dma-buf.h
 	#c-file drivers/base/dma-buf.c
 
+config BACKPORT_BUILD_CROSS_RESERVATION
+	bool
+	# not possible on kernel < 3.2
+	depends on !BACKPORT_KERNEL_3_2
+	depends on BACKPORT_BUILD_DMA_SHARED_BUFFER || DMA_SHARED_BUFFER
+	default y if BACKPORT_USERSEL_BUILD_ALL
+
 config BACKPORT_DMA_SHARED_BUFFER
 	bool
 
diff --git a/backport/compat/Makefile b/backport/compat/Makefile
index fec01c4..80c0294 100644
--- a/backport/compat/Makefile
+++ b/backport/compat/Makefile
@@ -42,3 +42,4 @@ compat-$(CPTCFG_BACKPORT_BUILD_GENERIC_ATOMIC64) += compat_atomic.o
 compat-$(CPTCFG_BACKPORT_BUILD_DMA_SHARED_HELPERS) += dma-shared-helpers.o
 compat-$(CPTCFG_BACKPORT_BUILD_RADIX_HELPERS) += lib-radix-tree-helpers.o
 compat-$(CPTCFG_BACKPORT_BUILD_WW_MUTEX) += kernel/ww_mutex.o
+compat-$(CPTCFG_BACKPORT_BUILD_CROSS_RESERVATION) += drivers-base-reservation.o
diff --git a/backport/compat/drivers-base-reservation.c b/backport/compat/drivers-base-reservation.c
new file mode 100644
index 0000000..a73fbf3
--- /dev/null
+++ b/backport/compat/drivers-base-reservation.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2012-2013 Canonical Ltd
+ *
+ * Based on bo.c which bears the following copyright notice,
+ * but is dual licensed:
+ *
+ * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ **************************************************************************/
+/*
+ * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
+ */
+
+#include <linux/reservation.h>
+#include <linux/export.h>
+
+DEFINE_WW_CLASS(reservation_ww_class);
+EXPORT_SYMBOL(reservation_ww_class);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [RFC 9/9] backports: refresh patches for next-20130703
  2013-07-28  1:16 [RFC 0/9] backports: take us to next-20130703 Luis R. Rodriguez
                   ` (7 preceding siblings ...)
  2013-07-28  1:16 ` [RFC 8/9] backports: backport cross-device reservation support Luis R. Rodriguez
@ 2013-07-28  1:16 ` Luis R. Rodriguez
  8 siblings, 0 replies; 11+ messages in thread
From: Luis R. Rodriguez @ 2013-07-28  1:16 UTC (permalink / raw)
  To: backports; +Cc: Luis R. Rodriguez

From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>

1   2.6.24              [  OK  ]
2   2.6.25              [  OK  ]
3   2.6.26              [  OK  ]
4   2.6.27              [  OK  ]
5   2.6.28              [  OK  ]
6   2.6.29              [  OK  ]
7   2.6.30              [  OK  ]
8   2.6.31              [  OK  ]
9   2.6.32              [  OK  ]
10  2.6.33              [  OK  ]
11  2.6.34              [  OK  ]
12  2.6.35              [  OK  ]
13  2.6.36              [  OK  ]
14  2.6.37              [  OK  ]
15  2.6.38              [  OK  ]
16  2.6.39              [  OK  ]
17  3.0.79              [  OK  ]
18  3.1.10              [  OK  ]
19  3.10-rc1            [  OK  ]
20  3.2.45              [  OK  ]
21  3.3.8               [  OK  ]
22  3.4.46              [  OK  ]
23  3.5.7               [  OK  ]
24  3.6.11              [  OK  ]
25  3.7.10              [  OK  ]
26  3.8.13              [  OK  ]
27  3.9.3               [  OK  ]

real    32m59.877s
user    880m31.524s
sys     124m11.996s

Signed-off-by: Luis R. Rodriguez <mcgrof@do-not-panic.com>
---
 .../02-revert-vm_mmap/drivers_gpu_drm_i915_i915_gem.patch  |    2 +-
 .../drm/07-intel-gtt/drivers_gpu_drm_i915_i915_gem.patch   |    2 +-
 .../drivers_gpu_drm_i915_i915_gem.patch                    |    2 +-
 .../drm/14-shrinkers-api/drivers_gpu_drm_i915.patch        |   12 ++++++------
 .../network/0001-netdev_ops/alx.patch                      |    2 +-
 .../network/0007-pci_dev_dev_flags/alx.patch               |    2 +-
 .../09-threaded-irq/drivers_net_wireless_b43_main.patch    |    6 +++---
 .../network/11-dev-pm-ops/drivers_bcma_host_pci.patch      |    2 +-
 .../drivers_net_ethernet_atheros_alx_main.patch            |    4 ++--
 .../drivers_net_ethernet_atheros_alx_main.patch            |   10 +++++-----
 .../drivers_net_wireless_b43_main.patch                    |    2 +-
 .../62-usb_driver_lpm/drivers_net_usb_cdc_ether.patch      |    2 +-
 12 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/patches/collateral-evolutions/drm/02-revert-vm_mmap/drivers_gpu_drm_i915_i915_gem.patch b/patches/collateral-evolutions/drm/02-revert-vm_mmap/drivers_gpu_drm_i915_i915_gem.patch
index 9218716..3167c0b 100644
--- a/patches/collateral-evolutions/drm/02-revert-vm_mmap/drivers_gpu_drm_i915_i915_gem.patch
+++ b/patches/collateral-evolutions/drm/02-revert-vm_mmap/drivers_gpu_drm_i915_i915_gem.patch
@@ -1,6 +1,6 @@
 --- a/drivers/gpu/drm/i915/i915_gem.c
 +++ b/drivers/gpu/drm/i915/i915_gem.c
-@@ -1293,10 +1293,17 @@ i915_gem_mmap_ioctl(struct drm_device *d
+@@ -1292,10 +1292,17 @@ i915_gem_mmap_ioctl(struct drm_device *d
  		drm_gem_object_unreference_unlocked(obj);
  		return -EINVAL;
  	}
diff --git a/patches/collateral-evolutions/drm/07-intel-gtt/drivers_gpu_drm_i915_i915_gem.patch b/patches/collateral-evolutions/drm/07-intel-gtt/drivers_gpu_drm_i915_i915_gem.patch
index 354face..62c4bf8 100644
--- a/patches/collateral-evolutions/drm/07-intel-gtt/drivers_gpu_drm_i915_i915_gem.patch
+++ b/patches/collateral-evolutions/drm/07-intel-gtt/drivers_gpu_drm_i915_i915_gem.patch
@@ -1,6 +1,6 @@
 --- a/drivers/gpu/drm/i915/i915_gem.c
 +++ b/drivers/gpu/drm/i915/i915_gem.c
-@@ -4178,9 +4178,14 @@ i915_gem_init_hw(struct drm_device *dev)
+@@ -4164,9 +4164,14 @@ i915_gem_init_hw(struct drm_device *dev)
  	drm_i915_private_t *dev_priv = dev->dev_private;
  	int ret;
  
diff --git a/patches/collateral-evolutions/drm/08-shmem_truncate_range/drivers_gpu_drm_i915_i915_gem.patch b/patches/collateral-evolutions/drm/08-shmem_truncate_range/drivers_gpu_drm_i915_i915_gem.patch
index 7d42edc..052a2c5 100644
--- a/patches/collateral-evolutions/drm/08-shmem_truncate_range/drivers_gpu_drm_i915_i915_gem.patch
+++ b/patches/collateral-evolutions/drm/08-shmem_truncate_range/drivers_gpu_drm_i915_i915_gem.patch
@@ -1,6 +1,6 @@
 --- a/drivers/gpu/drm/i915/i915_gem.c
 +++ b/drivers/gpu/drm/i915/i915_gem.c
-@@ -1617,7 +1617,13 @@ i915_gem_object_truncate(struct drm_i915
+@@ -1616,7 +1616,13 @@ i915_gem_object_truncate(struct drm_i915
  	 * backing pages, *now*.
  	 */
  	inode = file_inode(obj->base.filp);
diff --git a/patches/collateral-evolutions/drm/14-shrinkers-api/drivers_gpu_drm_i915.patch b/patches/collateral-evolutions/drm/14-shrinkers-api/drivers_gpu_drm_i915.patch
index aef9aa3..3ee0727 100644
--- a/patches/collateral-evolutions/drm/14-shrinkers-api/drivers_gpu_drm_i915.patch
+++ b/patches/collateral-evolutions/drm/14-shrinkers-api/drivers_gpu_drm_i915.patch
@@ -42,7 +42,7 @@
  static long i915_gem_purge(struct drm_i915_private *dev_priv, long target);
  static long i915_gem_shrink_all(struct drm_i915_private *dev_priv);
  static void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
-@@ -4390,8 +4395,12 @@ i915_gem_load(struct drm_device *dev)
+@@ -4377,8 +4382,12 @@ i915_gem_load(struct drm_device *dev)
  
  	dev_priv->mm.interruptible = true;
  
@@ -55,7 +55,7 @@
  	dev_priv->mm.inactive_shrinker.seeks = DEFAULT_SEEKS;
  	register_shrinker(&dev_priv->mm.inactive_shrinker);
  }
-@@ -4614,8 +4623,14 @@ static bool mutex_is_locked_by(struct mu
+@@ -4601,8 +4610,14 @@ static bool mutex_is_locked_by(struct mu
  #endif
  }
  
@@ -70,7 +70,7 @@
  {
  	struct drm_i915_private *dev_priv =
  		container_of(shrinker,
-@@ -4624,7 +4639,12 @@ i915_gem_inactive_count(struct shrinker
+@@ -4611,7 +4626,12 @@ i915_gem_inactive_count(struct shrinker
  	struct drm_device *dev = dev_priv->dev;
  	struct drm_i915_gem_object *obj;
  	bool unlock = true;
@@ -83,7 +83,7 @@
  
  	if (!mutex_trylock(&dev->struct_mutex)) {
  		if (!mutex_is_locked_by(&dev->struct_mutex, current))
-@@ -4636,6 +4656,17 @@ i915_gem_inactive_count(struct shrinker
+@@ -4623,6 +4643,17 @@ i915_gem_inactive_count(struct shrinker
  		unlock = false;
  	}
  
@@ -101,7 +101,7 @@
  	count = 0;
  	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
  		if (obj->pages_pin_count == 0)
-@@ -4649,6 +4680,7 @@ i915_gem_inactive_count(struct shrinker
+@@ -4636,6 +4667,7 @@ i915_gem_inactive_count(struct shrinker
  	return count;
  }
  
@@ -109,7 +109,7 @@
  static unsigned long
  i915_gem_inactive_scan(struct shrinker *shrinker, struct shrink_control *sc)
  {
-@@ -4682,3 +4714,4 @@ i915_gem_inactive_scan(struct shrinker *
+@@ -4669,3 +4701,4 @@ i915_gem_inactive_scan(struct shrinker *
  		mutex_unlock(&dev->struct_mutex);
  	return freed;
  }
diff --git a/patches/collateral-evolutions/network/0001-netdev_ops/alx.patch b/patches/collateral-evolutions/network/0001-netdev_ops/alx.patch
index 46ed1fe..2d419f8 100644
--- a/patches/collateral-evolutions/network/0001-netdev_ops/alx.patch
+++ b/patches/collateral-evolutions/network/0001-netdev_ops/alx.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/ethernet/atheros/alx/main.c
 +++ b/drivers/net/ethernet/atheros/alx/main.c
-@@ -1317,7 +1317,7 @@ static int alx_probe(struct pci_dev *pde
+@@ -1320,7 +1320,7 @@ static int alx_probe(struct pci_dev *pde
  		goto out_free_netdev;
  	}
  
diff --git a/patches/collateral-evolutions/network/0007-pci_dev_dev_flags/alx.patch b/patches/collateral-evolutions/network/0007-pci_dev_dev_flags/alx.patch
index aa782b7..17d311c 100644
--- a/patches/collateral-evolutions/network/0007-pci_dev_dev_flags/alx.patch
+++ b/patches/collateral-evolutions/network/0007-pci_dev_dev_flags/alx.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/ethernet/atheros/alx/main.c
 +++ b/drivers/net/ethernet/atheros/alx/main.c
-@@ -1322,8 +1322,10 @@ static int alx_probe(struct pci_dev *pde
+@@ -1325,8 +1325,10 @@ static int alx_probe(struct pci_dev *pde
  	netdev->irq = pdev->irq;
  	netdev->watchdog_timeo = ALX_WATCHDOG_TIME;
  
diff --git a/patches/collateral-evolutions/network/09-threaded-irq/drivers_net_wireless_b43_main.patch b/patches/collateral-evolutions/network/09-threaded-irq/drivers_net_wireless_b43_main.patch
index aaa0aa0..46e881e 100644
--- a/patches/collateral-evolutions/network/09-threaded-irq/drivers_net_wireless_b43_main.patch
+++ b/patches/collateral-evolutions/network/09-threaded-irq/drivers_net_wireless_b43_main.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/wireless/b43/main.c
 +++ b/drivers/net/wireless/b43/main.c
-@@ -4236,8 +4236,13 @@ redo:
+@@ -4238,8 +4238,13 @@ redo:
  	if (b43_bus_host_is_sdio(dev->dev)) {
  		b43_sdio_free_irq(dev);
  	} else {
@@ -14,7 +14,7 @@
  	}
  	mutex_lock(&wl->mutex);
  	dev = wl->current_dev;
-@@ -4283,9 +4288,17 @@ static int b43_wireless_core_start(struc
+@@ -4285,9 +4290,17 @@ static int b43_wireless_core_start(struc
  			goto out;
  		}
  	} else {
@@ -32,7 +32,7 @@
  		if (err) {
  			b43err(dev->wl, "Cannot request IRQ-%d\n",
  			       dev->dev->irq);
-@@ -5108,6 +5121,10 @@ static int b43_setup_bands(struct b43_wl
+@@ -5110,6 +5123,10 @@ static int b43_setup_bands(struct b43_wl
  
  static void b43_wireless_core_detach(struct b43_wldev *dev)
  {
diff --git a/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_bcma_host_pci.patch b/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_bcma_host_pci.patch
index 1b9b578..00790a6 100644
--- a/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_bcma_host_pci.patch
+++ b/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_bcma_host_pci.patch
@@ -10,7 +10,7 @@
  static SIMPLE_DEV_PM_OPS(bcma_pm_ops, bcma_host_pci_suspend,
  			 bcma_host_pci_resume);
  #define BCMA_PM_OPS	(&bcma_pm_ops)
-@@ -285,7 +288,12 @@ static struct pci_driver bcma_pci_bridge
+@@ -286,7 +289,12 @@ static struct pci_driver bcma_pci_bridge
  	.id_table = bcma_pci_bridge_tbl,
  	.probe = bcma_host_pci_probe,
  	.remove = bcma_host_pci_remove,
diff --git a/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_net_ethernet_atheros_alx_main.patch b/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_net_ethernet_atheros_alx_main.patch
index 4f4cce9..626f5c5 100644
--- a/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_net_ethernet_atheros_alx_main.patch
+++ b/patches/collateral-evolutions/network/11-dev-pm-ops/drivers_net_ethernet_atheros_alx_main.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/ethernet/atheros/alx/main.c
 +++ b/drivers/net/ethernet/atheros/alx/main.c
-@@ -1590,6 +1590,8 @@ static const struct pci_error_handlers a
+@@ -1593,6 +1593,8 @@ static const struct pci_error_handlers a
  };
  
  #ifdef CONFIG_PM_SLEEP
@@ -9,7 +9,7 @@
  static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
  #define ALX_PM_OPS      (&alx_pm_ops)
  #else
-@@ -1615,7 +1617,12 @@ static struct pci_driver alx_driver = {
+@@ -1618,7 +1620,12 @@ static struct pci_driver alx_driver = {
  	.remove      = alx_remove,
  	.shutdown    = alx_shutdown,
  	.err_handler = &alx_err_handlers,
diff --git a/patches/collateral-evolutions/network/40-netdev-hw-features/drivers_net_ethernet_atheros_alx_main.patch b/patches/collateral-evolutions/network/40-netdev-hw-features/drivers_net_ethernet_atheros_alx_main.patch
index 35e3371..4adbb0d 100644
--- a/patches/collateral-evolutions/network/40-netdev-hw-features/drivers_net_ethernet_atheros_alx_main.patch
+++ b/patches/collateral-evolutions/network/40-netdev-hw-features/drivers_net_ethernet_atheros_alx_main.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/ethernet/atheros/alx/main.c
 +++ b/drivers/net/ethernet/atheros/alx/main.c
-@@ -737,6 +737,7 @@ static int alx_init_sw(struct alx_priv *
+@@ -738,6 +738,7 @@ static int alx_init_sw(struct alx_priv *
  }
  
  
@@ -8,7 +8,7 @@
  static netdev_features_t alx_fix_features(struct net_device *netdev,
  					  netdev_features_t features)
  {
-@@ -745,6 +746,7 @@ static netdev_features_t alx_fix_feature
+@@ -746,6 +747,7 @@ static netdev_features_t alx_fix_feature
  
  	return features;
  }
@@ -16,7 +16,7 @@
  
  static void alx_netif_stop(struct alx_priv *alx)
  {
-@@ -820,7 +822,17 @@ static int alx_change_mtu(struct net_dev
+@@ -822,7 +824,17 @@ static int alx_change_mtu(struct net_dev
  	alx->hw.mtu = mtu;
  	alx->rxbuf_size = mtu > ALX_DEF_RXBUF_SIZE ?
  			   ALIGN(max_frame, 8) : ALX_DEF_RXBUF_SIZE;
@@ -34,7 +34,7 @@
  	if (netif_running(netdev))
  		alx_reinit(alx);
  	return 0;
-@@ -1238,7 +1250,9 @@ static const struct net_device_ops alx_n
+@@ -1241,7 +1253,9 @@ static const struct net_device_ops alx_n
  	.ndo_change_mtu         = alx_change_mtu,
  	.ndo_do_ioctl           = alx_ioctl,
  	.ndo_tx_timeout         = alx_tx_timeout,
@@ -44,7 +44,7 @@
  #ifdef CONFIG_NET_POLL_CONTROLLER
  	.ndo_poll_controller    = alx_poll_controller,
  #endif
-@@ -1361,7 +1375,11 @@ static int alx_probe(struct pci_dev *pde
+@@ -1364,7 +1378,11 @@ static int alx_probe(struct pci_dev *pde
  		}
  	}
  
diff --git a/patches/collateral-evolutions/network/48-use_skb_get_queue_mapping/drivers_net_wireless_b43_main.patch b/patches/collateral-evolutions/network/48-use_skb_get_queue_mapping/drivers_net_wireless_b43_main.patch
index 43c2163..850e1a3 100644
--- a/patches/collateral-evolutions/network/48-use_skb_get_queue_mapping/drivers_net_wireless_b43_main.patch
+++ b/patches/collateral-evolutions/network/48-use_skb_get_queue_mapping/drivers_net_wireless_b43_main.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/wireless/b43/main.c
 +++ b/drivers/net/wireless/b43/main.c
-@@ -3449,11 +3449,11 @@ static void b43_op_tx(struct ieee80211_h
+@@ -3451,11 +3451,11 @@ static void b43_op_tx(struct ieee80211_h
  	}
  	B43_WARN_ON(skb_shinfo(skb)->nr_frags);
  
diff --git a/patches/collateral-evolutions/network/62-usb_driver_lpm/drivers_net_usb_cdc_ether.patch b/patches/collateral-evolutions/network/62-usb_driver_lpm/drivers_net_usb_cdc_ether.patch
index 148925e..c8da6e7 100644
--- a/patches/collateral-evolutions/network/62-usb_driver_lpm/drivers_net_usb_cdc_ether.patch
+++ b/patches/collateral-evolutions/network/62-usb_driver_lpm/drivers_net_usb_cdc_ether.patch
@@ -1,6 +1,6 @@
 --- a/drivers/net/usb/cdc_ether.c
 +++ b/drivers/net/usb/cdc_ether.c
-@@ -740,7 +740,9 @@ static struct usb_driver cdc_driver = {
+@@ -752,7 +752,9 @@ static struct usb_driver cdc_driver = {
  	.resume =	usbnet_resume,
  	.reset_resume =	usbnet_resume,
  	.supports_autosuspend = 1,
-- 
1.7.10.4

